* [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work
@ 2009-11-17 17:03 James Bottomley
2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
0 siblings, 2 replies; 25+ messages in thread
From: James Bottomley @ 2009-11-17 17:03 UTC (permalink / raw)
To: linux-arch, linux-parisc; +Cc: James Bottomley
Here's version three of the patch series. This one makes sure the invalidate
works correctly. I verified it on parisc by making my system print out
the virtual addresses it was invalidating and matching them against the ones
that were initially flushed, but since invalidate is a nop on parisc, I
can't verify live that the issue is fixed. I'd really appreciate testing
from someone on arm or sh here.
Thanks,
James
---
James Bottomley (6):
mm: add coherence API for DMA to vmalloc/vmap areas
parisc: add mm API for DMA to vmalloc/vmap areas
arm: add mm API for DMA to vmalloc/vmap areas
sh: add mm API for DMA to vmalloc/vmap areas
block: permit I/O to vmalloc/vmap kernel pages
xfs: fix xfs to work with Virtually Indexed architectures
arch/arm/include/asm/cacheflush.h | 10 ++++++++++
arch/parisc/include/asm/cacheflush.h | 8 ++++++++
arch/sh/include/asm/cacheflush.h | 8 ++++++++
fs/bio.c | 20 ++++++++++++++++++--
fs/xfs/linux-2.6/xfs_buf.c | 20 ++++++++++++++++++++
include/linux/highmem.h | 6 ++++++
6 files changed, 70 insertions(+), 2 deletions(-)
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
2009-11-17 17:03 [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work James Bottomley
2009-11-17 17:03 ` James Bottomley
@ 2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` James Bottomley
` (2 more replies)
1 sibling, 3 replies; 25+ messages in thread
From: James Bottomley @ 2009-11-17 17:03 UTC (permalink / raw)
To: linux-arch, linux-parisc; +Cc: James Bottomley
On Virtually Indexed architectures (which don't do automatic alias
resolution in their caches), we have to flush via the correct
virtual address to prepare pages for DMA. On some architectures
(like arm) we cannot prevent the CPU from doing data movein along
the alias (and thus giving stale read data), so we have to introduce
not only a flush API to push dirty cache lines out, but also an
invalidate API to kill inconsistent cache lines that may have moved
in before DMA changed the data.
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
include/linux/highmem.h | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 211ff44..eb99c70 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -17,6 +17,12 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
static inline void flush_kernel_dcache_page(struct page *page)
{
}
+static inline void flush_kernel_dcache_addr(void *vaddr)
+{
+}
+static inline void invalidate_kernel_dcache_addr(void *vaddr)
+{
+}
#endif
#include <asm/kmap_types.h>
--
1.6.3.3
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 2/6] parisc: add mm API for DMA to vmalloc/vmap areas
2009-11-17 17:03 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
2009-11-17 17:03 ` James Bottomley
@ 2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` [PATCH 3/6] arm: " James Bottomley
2009-11-18 14:38 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas Ralf Baechle
2 siblings, 1 reply; 25+ messages in thread
From: James Bottomley @ 2009-11-17 17:03 UTC (permalink / raw)
To: linux-arch, linux-parisc; +Cc: James Bottomley
We already have an API to flush a kernel page along an alias
address, so use it. The TLB purge prevents the CPU from doing
speculative moveins on the flushed address, so we don't need to
implement an invalidate.
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
arch/parisc/include/asm/cacheflush.h | 8 ++++++++
1 files changed, 8 insertions(+), 0 deletions(-)
diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 7243951..2536a00 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -90,6 +90,14 @@ static inline void flush_kernel_dcache_page(struct page *page)
{
flush_kernel_dcache_page_addr(page_address(page));
}
+static inline void flush_kernel_dcache_addr(void *addr)
+{
+ flush_kernel_dcache_page_addr(addr);
+}
+static inline void invalidate_kernel_dcache_addr(void *addr)
+{
+ /* nop .. the flush prevents move in until the page is touched */
+}
#ifdef CONFIG_DEBUG_RODATA
void mark_rodata_ro(void);
--
1.6.3.3
^ permalink raw reply related [flat|nested] 25+ messages in thread* [PATCH 3/6] arm: add mm API for DMA to vmalloc/vmap areas
2009-11-17 17:03 ` [PATCH 2/6] parisc: add mm " James Bottomley
@ 2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` [PATCH 4/6] sh: " James Bottomley
0 siblings, 2 replies; 25+ messages in thread
From: James Bottomley @ 2009-11-17 17:03 UTC (permalink / raw)
To: linux-arch, linux-parisc; +Cc: James Bottomley
ARM cannot prevent cache movein, so this patch implements both the
flush and invalidate pieces of the API.
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
arch/arm/include/asm/cacheflush.h | 10 ++++++++++
1 files changed, 10 insertions(+), 0 deletions(-)
diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 1a711ea..1104ee9 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -436,6 +436,16 @@ static inline void flush_kernel_dcache_page(struct page *page)
if ((cache_is_vivt() || cache_is_vipt_aliasing()) && !PageHighMem(page))
__cpuc_flush_dcache_page(page_address(page));
}
+static inline void flush_kernel_dcache_addr(void *addr)
+{
+ if ((cache_is_vivt() || cache_is_vipt_aliasing()))
+ __cpuc_flush_dcache_page(addr);
+}
+static inline void invalidate_kernel_dcache_addr(void *addr)
+{
+ if ((cache_is_vivt() || cache_is_vipt_aliasing()))
+ __cpuc_flush_dcache_page(addr);
+}
#define flush_dcache_mmap_lock(mapping) \
spin_lock_irq(&(mapping)->tree_lock)
--
1.6.3.3
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 4/6] sh: add mm API for DMA to vmalloc/vmap areas
2009-11-17 17:03 ` [PATCH 3/6] arm: " James Bottomley
2009-11-17 17:03 ` James Bottomley
@ 2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages James Bottomley
1 sibling, 2 replies; 25+ messages in thread
From: James Bottomley @ 2009-11-17 17:03 UTC (permalink / raw)
To: linux-arch, linux-parisc; +Cc: James Bottomley
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
arch/sh/include/asm/cacheflush.h | 8 ++++++++
1 files changed, 8 insertions(+), 0 deletions(-)
diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h
index 4c5462d..3cb8824 100644
--- a/arch/sh/include/asm/cacheflush.h
+++ b/arch/sh/include/asm/cacheflush.h
@@ -48,6 +48,14 @@ static inline void flush_kernel_dcache_page(struct page *page)
{
flush_dcache_page(page);
}
+static inline void flush_kernel_dcache_addr(void *addr)
+{
+ __flush_wback_region(addr, PAGE_SIZE);
+}
+static inline void invalidate_kernel_dcache_addr(void *addr)
+{
+ __flush_invalidate_region(addr, PAGE_SIZE);
+}
#if defined(CONFIG_CPU_SH4) && !defined(CONFIG_CACHE_OFF)
extern void copy_to_user_page(struct vm_area_struct *vma,
--
1.6.3.3
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages
2009-11-17 17:03 ` [PATCH 4/6] sh: " James Bottomley
2009-11-17 17:03 ` James Bottomley
@ 2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures James Bottomley
2009-11-18 10:10 ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages FUJITA Tomonori
1 sibling, 2 replies; 25+ messages in thread
From: James Bottomley @ 2009-11-17 17:03 UTC (permalink / raw)
To: linux-arch, linux-parisc; +Cc: James Bottomley
This updates bio_map_kern() to check for pages in the vmalloc address
range and call the new kernel flushing APIs if they are. This should
allow any kernel user to pass a vmalloc/vmap area to block.
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
fs/bio.c | 20 ++++++++++++++++++--
1 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/fs/bio.c b/fs/bio.c
index 7673800..0cf7b79 100644
--- a/fs/bio.c
+++ b/fs/bio.c
@@ -1120,6 +1120,14 @@ void bio_unmap_user(struct bio *bio)
static void bio_map_kern_endio(struct bio *bio, int err)
{
+ void *kaddr = bio->bi_private;
+
+ if (is_vmalloc_addr(kaddr)) {
+ int i;
+
+ for (i = 0; i < bio->bi_vcnt; i++)
+ invalidate_kernel_dcache_addr(kaddr + i * PAGE_SIZE);
+ }
bio_put(bio);
}
@@ -1138,9 +1146,12 @@ static struct bio *__bio_map_kern(struct request_queue *q, void *data,
if (!bio)
return ERR_PTR(-ENOMEM);
+ bio->bi_private = data;
+
offset = offset_in_page(kaddr);
for (i = 0; i < nr_pages; i++) {
unsigned int bytes = PAGE_SIZE - offset;
+ struct page *page;
if (len <= 0)
break;
@@ -1148,8 +1159,13 @@ static struct bio *__bio_map_kern(struct request_queue *q, void *data,
if (bytes > len)
bytes = len;
- if (bio_add_pc_page(q, bio, virt_to_page(data), bytes,
- offset) < bytes)
+ if (is_vmalloc_addr(data)) {
+ flush_kernel_dcache_addr(data);
+ page = vmalloc_to_page(data);
+ } else
+ page = virt_to_page(data);
+
+ if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes)
break;
data += bytes;
--
1.6.3.3
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures
2009-11-17 17:03 ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages James Bottomley
@ 2009-11-17 17:03 ` James Bottomley
2009-11-18 10:10 ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages FUJITA Tomonori
1 sibling, 0 replies; 25+ messages in thread
From: James Bottomley @ 2009-11-17 17:03 UTC (permalink / raw)
To: linux-arch, linux-parisc; +Cc: James Bottomley
xfs_buf.c includes what is essentially a hand-rolled version of
blk_rq_map_kern(). In order to work properly with the vmalloc buffers
that xfs uses, this hand-rolled routine must also implement the flushing
API for vmap/vmalloc areas.
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
fs/xfs/linux-2.6/xfs_buf.c | 20 ++++++++++++++++++++
1 files changed, 20 insertions(+), 0 deletions(-)
diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index 965df12..320a6e4 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -1132,12 +1132,26 @@ xfs_buf_bio_end_io(
xfs_buf_t *bp = (xfs_buf_t *)bio->bi_private;
unsigned int blocksize = bp->b_target->bt_bsize;
struct bio_vec *bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
+ void *vaddr = NULL;
+ int i;
xfs_buf_ioerror(bp, -error);
+ if (is_vmalloc_addr(bp->b_addr))
+ for (i = 0; i < bp->b_page_count; i++)
+ if (bvec->bv_page == bp->b_pages[i]) {
+ vaddr = bp->b_addr + i*PAGE_SIZE;
+ break;
+ }
+
do {
struct page *page = bvec->bv_page;
+ if (is_vmalloc_addr(bp->b_addr)) {
+ invalidate_kernel_dcache_addr(vaddr);
+ vaddr -= PAGE_SIZE;
+ }
+
ASSERT(!PagePrivate(page));
if (unlikely(bp->b_error)) {
if (bp->b_flags & XBF_READ)
@@ -1202,6 +1216,9 @@ _xfs_buf_ioapply(
bio->bi_end_io = xfs_buf_bio_end_io;
bio->bi_private = bp;
+ if (is_vmalloc_addr(bp->b_addr))
+ flush_kernel_dcache_addr(bp->b_addr);
+
bio_add_page(bio, bp->b_pages[0], PAGE_CACHE_SIZE, 0);
size = 0;
@@ -1228,6 +1245,9 @@ next_chunk:
if (nbytes > size)
nbytes = size;
+ if (is_vmalloc_addr(bp->b_addr))
+ flush_kernel_dcache_addr(bp->b_addr + PAGE_SIZE*map_i);
+
rbytes = bio_add_page(bio, bp->b_pages[map_i], nbytes, offset);
if (rbytes < nbytes)
break;
--
1.6.3.3
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages
2009-11-17 17:03 ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages James Bottomley
2009-11-17 17:03 ` [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures James Bottomley
@ 2009-11-18 10:10 ` FUJITA Tomonori
2009-11-18 13:50 ` James Bottomley
1 sibling, 1 reply; 25+ messages in thread
From: FUJITA Tomonori @ 2009-11-18 10:10 UTC (permalink / raw)
To: James.Bottomley; +Cc: linux-arch, linux-parisc
On Tue, 17 Nov 2009 11:03:51 -0600
James Bottomley <James.Bottomley@suse.de> wrote:
> This updates bio_map_kern() to check for pages in the vmalloc address
> range and call the new kernel flushing APIs if the are. This should
> allow any kernel user to pass a vmalloc/vmap area to block.
>
> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
> ---
> fs/bio.c | 20 ++++++++++++++++++--
> 1 files changed, 18 insertions(+), 2 deletions(-)
Do we need this?
Buffers that xfs_buf.c passes to block don't go through bio_map_kern(),
do they?
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages
2009-11-18 10:10 ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages FUJITA Tomonori
@ 2009-11-18 13:50 ` James Bottomley
2009-11-18 13:50 ` James Bottomley
2009-11-18 14:15 ` FUJITA Tomonori
0 siblings, 2 replies; 25+ messages in thread
From: James Bottomley @ 2009-11-18 13:50 UTC (permalink / raw)
To: FUJITA Tomonori; +Cc: linux-arch, linux-parisc
On Wed, 2009-11-18 at 19:10 +0900, FUJITA Tomonori wrote:
> On Tue, 17 Nov 2009 11:03:51 -0600
> James Bottomley <James.Bottomley@suse.de> wrote:
>
> > This updates bio_map_kern() to check for pages in the vmalloc address
> > range and call the new kernel flushing APIs if the are. This should
> > allow any kernel user to pass a vmalloc/vmap area to block.
> >
> > Signed-off-by: James Bottomley <James.Bottomley@suse.de>
> > ---
> > fs/bio.c | 20 ++++++++++++++++++--
> > 1 files changed, 18 insertions(+), 2 deletions(-)
>
> Do we need this?
>
> Buffers that xfs_buf.c passes to block doesn't go to bio_map_kern()?
For completeness, yes ... because xfs *should* be passing its buffers to
bio_map_kern() ... it just happens to roll its own.
James
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages
2009-11-18 13:50 ` James Bottomley
2009-11-18 13:50 ` James Bottomley
@ 2009-11-18 14:15 ` FUJITA Tomonori
2009-11-18 14:15 ` FUJITA Tomonori
2009-11-18 14:21 ` James Bottomley
1 sibling, 2 replies; 25+ messages in thread
From: FUJITA Tomonori @ 2009-11-18 14:15 UTC (permalink / raw)
To: James.Bottomley; +Cc: fujita.tomonori, linux-arch, linux-parisc
On Wed, 18 Nov 2009 08:50:40 -0500
James Bottomley <James.Bottomley@suse.de> wrote:
> On Wed, 2009-11-18 at 19:10 +0900, FUJITA Tomonori wrote:
> > On Tue, 17 Nov 2009 11:03:51 -0600
> > James Bottomley <James.Bottomley@suse.de> wrote:
> >
> > > This updates bio_map_kern() to check for pages in the vmalloc address
> > > range and call the new kernel flushing APIs if the are. This should
> > > allow any kernel user to pass a vmalloc/vmap area to block.
> > >
> > > Signed-off-by: James Bottomley <James.Bottomley@suse.de>
> > > ---
> > > fs/bio.c | 20 ++++++++++++++++++--
> > > 1 files changed, 18 insertions(+), 2 deletions(-)
> >
> > Do we need this?
> >
> > Buffers that xfs_buf.c passes to block doesn't go to bio_map_kern()?
>
> For completeness, yes ... because xfs *should* be passing its buffers to
> bio_map_kern() ... it just happens to roll its own.
Ok, you mean that we will convert XFS to use bio_map_kern().
But is adding another trick to bio_map_kern() to handle vmalloc/vmap
areas a good move? Only XFS does this, right?
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages
2009-11-18 14:15 ` FUJITA Tomonori
2009-11-18 14:15 ` FUJITA Tomonori
@ 2009-11-18 14:21 ` James Bottomley
2009-11-18 14:21 ` James Bottomley
1 sibling, 1 reply; 25+ messages in thread
From: James Bottomley @ 2009-11-18 14:21 UTC (permalink / raw)
To: FUJITA Tomonori; +Cc: linux-arch, linux-parisc
On Wed, 2009-11-18 at 23:15 +0900, FUJITA Tomonori wrote:
> On Wed, 18 Nov 2009 08:50:40 -0500
> James Bottomley <James.Bottomley@suse.de> wrote:
>
> > On Wed, 2009-11-18 at 19:10 +0900, FUJITA Tomonori wrote:
> > > On Tue, 17 Nov 2009 11:03:51 -0600
> > > James Bottomley <James.Bottomley@suse.de> wrote:
> > >
> > > > This updates bio_map_kern() to check for pages in the vmalloc address
> > > > range and call the new kernel flushing APIs if the are. This should
> > > > allow any kernel user to pass a vmalloc/vmap area to block.
> > > >
> > > > Signed-off-by: James Bottomley <James.Bottomley@suse.de>
> > > > ---
> > > > fs/bio.c | 20 ++++++++++++++++++--
> > > > 1 files changed, 18 insertions(+), 2 deletions(-)
> > >
> > > Do we need this?
> > >
> > > Buffers that xfs_buf.c passes to block doesn't go to bio_map_kern()?
> >
> > For completeness, yes ... because xfs *should* be passing its buffers to
> > bio_map_kern() ... it just happens to roll its own.
>
> Ok, you mean that we will convert XFS to use bio_map_kern().
>
> But adding another trick to bio_map_kern() to handle a vmalloc/vmap
> area is a good move? Only XFS do such, right?
Well, it's more a question of how we want the Linux APIs to look.
Should passing vmalloc/vmap areas into the I/O routines be supported?
Right at the moment it doesn't work, but xfs is the only consumer.
There are definite reasons to say yes: greater flexibility for handling
the large buffers that logging filesystems seem to need.
My position is that either xfs is right, and we should handle such
buffers correctly in all the APIs (including bio_map_kern()), or xfs is
wrong, and we should make it work with the current APIs (which would
necessitate a large contiguous physical allocation ... with all the
associated problems). I chose the former with this patch.
James
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
2009-11-17 17:03 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` [PATCH 2/6] parisc: add mm " James Bottomley
@ 2009-11-18 14:38 ` Ralf Baechle
2009-11-18 14:38 ` Ralf Baechle
2009-11-18 15:13 ` James Bottomley
2 siblings, 2 replies; 25+ messages in thread
From: Ralf Baechle @ 2009-11-18 14:38 UTC (permalink / raw)
To: James Bottomley; +Cc: linux-arch, linux-parisc
On Tue, Nov 17, 2009 at 11:03:47AM -0600, James Bottomley wrote:
> On Virtually Indexed architectures (which don't do automatic alias
> resolution in their caches), we have to flush via the correct
> virtual address to prepare pages for DMA. On some architectures
> (like arm) we cannot prevent the CPU from doing data movein along
> the alias (and thus giving stale read data), so we not only have to
> introduce a flush API to push dirty cache lines out, but also an invalidate
> API to kill inconsistent cache lines that may have moved in before
> DMA changed the data
The API looks right for MIPS and trivial to implement based on existing
code, so feel free to throw in my Ack on the generic parts.
The new APIs deserve documentation in Documentation/cachetlb.txt.
Ralf
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
2009-11-18 14:38 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas Ralf Baechle
2009-11-18 14:38 ` Ralf Baechle
@ 2009-11-18 15:13 ` James Bottomley
1 sibling, 0 replies; 25+ messages in thread
From: James Bottomley @ 2009-11-18 15:13 UTC (permalink / raw)
To: Ralf Baechle; +Cc: linux-arch, linux-parisc
On Wed, 2009-11-18 at 15:38 +0100, Ralf Baechle wrote:
> On Tue, Nov 17, 2009 at 11:03:47AM -0600, James Bottomley wrote:
>
> > On Virtually Indexed architectures (which don't do automatic alias
> > resolution in their caches), we have to flush via the correct
> > virtual address to prepare pages for DMA. On some architectures
> > (like arm) we cannot prevent the CPU from doing data movein along
> > the alias (and thus giving stale read data), so we not only have to
> > introduce a flush API to push dirty cache lines out, but also an invalidate
> > API to kill inconsistent cache lines that may have moved in before
> > DMA changed the data
>
> The API looks right for MIPS and trivial to implement based on existing
> code, so feel free to throw in my Ack on the generic parts.
>
> The new APIs deserve documentation in Documentation/cachetlb.txt.
True (mutter, hate doing docs, mutter).
How about this?
James
---
diff --git a/Documentation/cachetlb.txt b/Documentation/cachetlb.txt
index da42ab4..7d1055c 100644
--- a/Documentation/cachetlb.txt
+++ b/Documentation/cachetlb.txt
@@ -377,3 +377,27 @@ maps this page at its virtual address.
All the functionality of flush_icache_page can be implemented in
flush_dcache_page and update_mmu_cache. In 2.7 the hope is to
remove this interface completely.
+
+For machines where aliasing can be a problem, there are two
+additional APIs to handle I/O to vmap/vmalloc areas within the
+kernel. These areas have two kernel mappings: the regular
+page offset map, through which the page has likely been previously
+accessed, and the new contiguous map in the kernel virtual
+map area. This dual mapping sets up aliasing within the kernel and,
+particularly since all kernel flushing goes through the offset map,
+must be handled separately for I/O. To declare your architecture as
+needing these functions, define
+ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE in asm/cacheflush.h and add two API
+helpers (usually as static inlines in cacheflush.h). The two new APIs
+are:
+
+ void flush_kernel_dcache_addr(void *addr)
+ Flush a single page through the vmap alias for addr. This is
+ usually executed prior to performing I/O on the page to make
+ sure the underlying physical page is up to date.
+
+ void invalidate_kernel_dcache_addr(void *addr)
+ Invalidate the page after I/O has completed. This is necessary
+ on machines whose cache mechanisms might trigger cache movein
+ during I/O. If you can ensure architecturally that this movein
+ never occurs, this function can be empty on your architecture.
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work
@ 2009-09-17 23:06 James Bottomley
2009-09-17 23:06 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
0 siblings, 1 reply; 25+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
To: linux-arch, linux-fsdevel, linux-parisc
Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley
From: James Bottomley <James.Bottomley@HansenPartnership.com>
Here's version three of the patch. This one makes sure the invalidate
works correctly. I verified it on parisc by making my system print out
the virtual addresses it was invalidating and matching up with the ones
that were initially flushed, but since invalidate is a nop on parisc, I
can't verify live that the issue is fixed. I'd really appreciate someone
from arm and sh testing here.
Thanks,
James
---
James Bottomley (6):
mm: add coherence API for DMA to vmalloc/vmap areas
parisc: add mm API for DMA to vmalloc/vmap areas
arm: add mm API for DMA to vmalloc/vmap areas
sh: add mm API for DMA to vmalloc/vmap areas
block: permit I/O to vmalloc/vmap kernel pages
xfs: fix xfs to work with Virtually Indexed architectures
arch/arm/include/asm/cacheflush.h | 10 ++++++++++
arch/parisc/include/asm/cacheflush.h | 8 ++++++++
arch/sh/include/asm/cacheflush.h | 8 ++++++++
fs/bio.c | 20 ++++++++++++++++++--
fs/xfs/linux-2.6/xfs_buf.c | 20 ++++++++++++++++++++
include/linux/highmem.h | 6 ++++++
6 files changed, 70 insertions(+), 2 deletions(-)
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
2009-09-17 23:06 [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work James Bottomley
@ 2009-09-17 23:06 ` James Bottomley
2009-09-17 23:06 ` [PATCH 2/6] parisc: add mm " James Bottomley
0 siblings, 1 reply; 25+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
To: linux-arch, linux-fsdevel, linux-parisc
Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
James Bottomley
From: James Bottomley <jejb@external.hp.com>
On Virtually Indexed architectures (which don't do automatic alias
resolution in their caches), we have to flush via the correct
virtual address to prepare pages for DMA. On some architectures
(like arm) we cannot prevent the CPU from doing data movein along
the alias (and thus giving stale read data), so we not only have to
introduce a flush API to push dirty cache lines out, but also an invalidate
API to kill inconsistent cache lines that may have moved in before
DMA changed the data.
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
include/linux/highmem.h | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 211ff44..eb99c70 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -17,6 +17,12 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
static inline void flush_kernel_dcache_page(struct page *page)
{
}
+static inline void flush_kernel_dcache_addr(void *vaddr)
+{
+}
+static inline void invalidate_kernel_dcache_addr(void *vaddr)
+{
+}
#endif
#include <asm/kmap_types.h>
--
1.6.3.3
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 2/6] parisc: add mm API for DMA to vmalloc/vmap areas
2009-09-17 23:06 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
@ 2009-09-17 23:06 ` James Bottomley
2009-09-17 23:06 ` James Bottomley
0 siblings, 1 reply; 25+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
To: linux-arch, linux-fsdevel, linux-parisc
Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
James Bottomley
From: James Bottomley <jejb@external.hp.com>
We already have an API to flush a kernel page along an alias
address, so use it. The TLB purge prevents the CPU from doing
speculative moveins on the flushed address, so we don't need to
implement an invalidate.
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
arch/parisc/include/asm/cacheflush.h | 8 ++++++++
1 files changed, 8 insertions(+), 0 deletions(-)
diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 7243951..2536a00 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -90,6 +90,14 @@ static inline void flush_kernel_dcache_page(struct page *page)
{
flush_kernel_dcache_page_addr(page_address(page));
}
+static inline void flush_kernel_dcache_addr(void *addr)
+{
+ flush_kernel_dcache_page_addr(addr);
+}
+static inline void invalidate_kernel_dcache_addr(void *addr)
+{
+ /* nop .. the flush prevents move in until the page is touched */
+}
#ifdef CONFIG_DEBUG_RODATA
void mark_rodata_ro(void);
--
1.6.3.3
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work
@ 2009-09-09 15:52 James Bottomley
2009-09-09 15:52 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
0 siblings, 1 reply; 25+ messages in thread
From: James Bottomley @ 2009-09-09 15:52 UTC (permalink / raw)
To: linux-arch, linux-fsdevel, linux-parisc
Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley
Here's version two of the patch set. It actually compiles on both x86
and parisc. I could do with someone to test it on arm and sh.
The key test is how xfs behaves. What I did to recreate the problem
on parisc was simply create an 8GB xfs filesystem, use cp -a to pump
about a GB of data into it from my git trees, then unmount and run
xfs_check. Before the patches, xfs_check reports the whole fs to be
corrupt. After the patches it reports everything to be OK.
James
---
arch/arm/include/asm/cacheflush.h | 10 ++++++++++
arch/parisc/include/asm/cacheflush.h | 8 ++++++++
arch/sh/include/asm/cacheflush.h | 8 ++++++++
fs/bio.c | 19 +++++++++++++++++--
fs/xfs/linux-2.6/xfs_buf.c | 10 ++++++++++
include/linux/highmem.h | 6 ++++++
6 files changed, 59 insertions(+), 2 deletions(-)
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
2009-09-09 15:52 [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work James Bottomley
@ 2009-09-09 15:52 ` James Bottomley
2009-09-09 15:52 ` [PATCH 2/6] parisc: add mm " James Bottomley
0 siblings, 1 reply; 25+ messages in thread
From: James Bottomley @ 2009-09-09 15:52 UTC (permalink / raw)
To: linux-arch, linux-fsdevel, linux-parisc
Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley
On Virtually Indexed architectures (which don't do automatic alias
resolution in their caches), we have to flush via the correct
virtual address to prepare pages for DMA. On some architectures
(like arm) we cannot prevent the CPU from doing data movein along
the alias (and thus giving stale read data), so we not only have to
introduce a flush API to push dirty cache lines out, but also an invalidate
API to kill inconsistent cache lines that may have moved in before
DMA changed the data.
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
include/linux/highmem.h | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 211ff44..eb99c70 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -17,6 +17,12 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
static inline void flush_kernel_dcache_page(struct page *page)
{
}
+static inline void flush_kernel_dcache_addr(void *vaddr)
+{
+}
+static inline void invalidate_kernel_dcache_addr(void *vaddr)
+{
+}
#endif
#include <asm/kmap_types.h>
--
1.6.3.3
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH 2/6] parisc: add mm API for DMA to vmalloc/vmap areas
2009-09-09 15:52 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
@ 2009-09-09 15:52 ` James Bottomley
2009-09-09 15:52 ` James Bottomley
0 siblings, 1 reply; 25+ messages in thread
From: James Bottomley @ 2009-09-09 15:52 UTC (permalink / raw)
To: linux-arch, linux-fsdevel, linux-parisc
Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley
We already have an API to flush a kernel page along an alias
address, so use it. The TLB purge prevents the CPU from doing
speculative moveins on the flushed address, so we don't need to
implement an invalidate.
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
arch/parisc/include/asm/cacheflush.h | 8 ++++++++
1 files changed, 8 insertions(+), 0 deletions(-)
diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 7243951..2536a00 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -90,6 +90,14 @@ static inline void flush_kernel_dcache_page(struct page *page)
{
flush_kernel_dcache_page_addr(page_address(page));
}
+static inline void flush_kernel_dcache_addr(void *addr)
+{
+ flush_kernel_dcache_page_addr(addr);
+}
+static inline void invalidate_kernel_dcache_addr(void *addr)
+{
+ /* nop .. the flush prevents move in until the page is touched */
+}
#ifdef CONFIG_DEBUG_RODATA
void mark_rodata_ro(void);
--
1.6.3.3
^ permalink raw reply related [flat|nested] 25+ messages in thread
Thread overview: 25+ messages
2009-11-17 17:03 [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work James Bottomley
2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` [PATCH 2/6] parisc: add mm " James Bottomley
2009-11-17 17:03 ` [PATCH 3/6] arm: " James Bottomley
2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` [PATCH 4/6] sh: " James Bottomley
2009-11-17 17:03 ` James Bottomley
2009-11-17 17:03 ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages James Bottomley
2009-11-17 17:03 ` [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures James Bottomley
2009-11-18 10:10 ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages FUJITA Tomonori
2009-11-18 13:50 ` James Bottomley
2009-11-18 13:50 ` James Bottomley
2009-11-18 14:15 ` FUJITA Tomonori
2009-11-18 14:15 ` FUJITA Tomonori
2009-11-18 14:21 ` James Bottomley
2009-11-18 14:21 ` James Bottomley
2009-11-18 14:38 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas Ralf Baechle
2009-11-18 14:38 ` Ralf Baechle
2009-11-18 15:13 ` James Bottomley
-- strict thread matches above, loose matches on Subject: below --
2009-09-17 23:06 [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work James Bottomley
2009-09-17 23:06 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
2009-09-17 23:06 ` [PATCH 2/6] parisc: add mm " James Bottomley
2009-09-17 23:06 ` James Bottomley
2009-09-09 15:52 [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work James Bottomley
2009-09-09 15:52 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
2009-09-09 15:52 ` [PATCH 2/6] parisc: add mm " James Bottomley
2009-09-09 15:52 ` James Bottomley