linux-fsdevel.vger.kernel.org archive mirror
* [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures
  2009-09-09 15:52         ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages James Bottomley
@ 2009-09-09 15:52           ` James Bottomley
  2009-09-09 17:55             ` Christoph Hellwig
  2009-09-11 21:57             ` James Bottomley
  0 siblings, 2 replies; 10+ messages in thread
From: James Bottomley @ 2009-09-09 15:52 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley

xfs_buf.c includes what is essentially a hand rolled version of
blk_rq_map_kern().  In order to work properly with the vmalloc buffers
that xfs uses, this hand rolled routine must also implement the flushing
API for vmap/vmalloc areas.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 fs/xfs/linux-2.6/xfs_buf.c |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index 965df12..62ae977 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -1138,6 +1138,10 @@ xfs_buf_bio_end_io(
 	do {
 		struct page	*page = bvec->bv_page;
 
+		if (is_vmalloc_addr(bp->b_addr))
+			invalidate_kernel_dcache_addr(bp->b_addr +
+						      bvec->bv_offset);
+
 		ASSERT(!PagePrivate(page));
 		if (unlikely(bp->b_error)) {
 			if (bp->b_flags & XBF_READ)
@@ -1202,6 +1206,9 @@ _xfs_buf_ioapply(
 		bio->bi_end_io = xfs_buf_bio_end_io;
 		bio->bi_private = bp;
 
+		if (is_vmalloc_addr(bp->b_addr))
+			flush_kernel_dcache_addr(bp->b_addr);
+
 		bio_add_page(bio, bp->b_pages[0], PAGE_CACHE_SIZE, 0);
 		size = 0;
 
@@ -1228,6 +1235,9 @@ next_chunk:
 		if (nbytes > size)
 			nbytes = size;
 
+		if (is_vmalloc_addr(bp->b_addr))
+			flush_kernel_dcache_addr(bp->b_addr + PAGE_SIZE*map_i);
+
 		rbytes = bio_add_page(bio, bp->b_pages[map_i], nbytes, offset);
 		if (rbytes < nbytes)
 			break;
-- 
1.6.3.3

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures
  2009-09-09 15:52           ` [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures James Bottomley
@ 2009-09-09 17:55             ` Christoph Hellwig
  2009-09-11 21:57             ` James Bottomley
  1 sibling, 0 replies; 10+ messages in thread
From: Christoph Hellwig @ 2009-09-09 17:55 UTC (permalink / raw)
  To: James Bottomley
  Cc: linux-arch, linux-fsdevel, linux-parisc, Russell King,
	Christoph Hellwig, Paul Mundt

On Wed, Sep 09, 2009 at 10:52:16AM -0500, James Bottomley wrote:
> xfs_buf.c includes what is essentially a hand rolled version of
> blk_rq_map_kern().  In order to work properly with the vmalloc buffers
> that xfs uses, this hand rolled routine must also implement the flushing
> API for vmap/vmalloc areas.

It's not really a handcrafted version of blk_rq_map_kern() because it
can add discontiguous pages into a single bio.

The patches look fine to me (not pretty, but fine :)), and I'll make
sure they get added once those two helpers have made it in, probably
augmented by a comment explaining what's going on here.



* Re: [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures
  2009-09-09 15:52           ` [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures James Bottomley
  2009-09-09 17:55             ` Christoph Hellwig
@ 2009-09-11 21:57             ` James Bottomley
  1 sibling, 0 replies; 10+ messages in thread
From: James Bottomley @ 2009-09-11 21:57 UTC (permalink / raw)
  To: linux-arch
  Cc: linux-fsdevel, linux-parisc, Russell King, Christoph Hellwig,
	Paul Mundt

On Wed, 2009-09-09 at 10:52 -0500, James Bottomley wrote:
> xfs_buf.c includes what is essentially a hand rolled version of
> blk_rq_map_kern().  In order to work properly with the vmalloc buffers
> that xfs uses, this hand rolled routine must also implement the flushing
> API for vmap/vmalloc areas.
> 
> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
> ---
>  fs/xfs/linux-2.6/xfs_buf.c |   10 ++++++++++
>  1 files changed, 10 insertions(+), 0 deletions(-)
> 
> diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
> index 965df12..62ae977 100644
> --- a/fs/xfs/linux-2.6/xfs_buf.c
> +++ b/fs/xfs/linux-2.6/xfs_buf.c
> @@ -1138,6 +1138,10 @@ xfs_buf_bio_end_io(
>  	do {
>  		struct page	*page = bvec->bv_page;
>  
> +		if (is_vmalloc_addr(bp->b_addr))
> +			invalidate_kernel_dcache_addr(bp->b_addr +
> +						      bvec->bv_offset);

OK, so this invalidation logic is completely wrong.  For large vmalloc
buffers, xfs will split them up over several bios.  The only way I can
think to fix this is below ... comments?

If everyone is OK, I'll reroll the patches with this built in.

James

---

diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index 62ae977..320a6e4 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -1132,15 +1132,25 @@ xfs_buf_bio_end_io(
 	xfs_buf_t		*bp = (xfs_buf_t *)bio->bi_private;
 	unsigned int		blocksize = bp->b_target->bt_bsize;
 	struct bio_vec		*bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
+	void			*vaddr = NULL;
+	int			i;
 
 	xfs_buf_ioerror(bp, -error);
 
+	if (is_vmalloc_addr(bp->b_addr))
+		for (i = 0; i < bp->b_page_count; i++)
+			if (bvec->bv_page == bp->b_pages[i]) {
+				vaddr = bp->b_addr + i*PAGE_SIZE;
+				break;
+			}
+
 	do {
 		struct page	*page = bvec->bv_page;
 
-		if (is_vmalloc_addr(bp->b_addr))
-			invalidate_kernel_dcache_addr(bp->b_addr +
-						      bvec->bv_offset);
+		if (is_vmalloc_addr(bp->b_addr)) {
+			invalidate_kernel_dcache_addr(vaddr);
+			vaddr -= PAGE_SIZE;
+		}
 
 		ASSERT(!PagePrivate(page));
 		if (unlikely(bp->b_error)) {




* [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work
@ 2009-09-17 23:06 James Bottomley
  2009-09-17 23:06 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
  0 siblings, 1 reply; 10+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley

From: James Bottomley <James.Bottomley@HansenPartnership.com>

Here's version three of the patch.  This one makes sure the invalidate
works correctly.  I verified it on parisc by making my system print out
the virtual addresses it was invalidating and matching up with the ones
that were initially flushed, but since invalidate is a nop on parisc, I
can't verify live that the issue is fixed.  I'd really appreciate someone
from arm and sh testing here.

Thanks,

James

---

James Bottomley (6):
  mm: add coherence API for DMA to vmalloc/vmap areas
  parisc: add mm API for DMA to vmalloc/vmap areas
  arm: add mm API for DMA to vmalloc/vmap areas
  sh: add mm API for DMA to vmalloc/vmap areas
  block: permit I/O to vmalloc/vmap kernel pages
  xfs: fix xfs to work with Virtually Indexed architectures

 arch/arm/include/asm/cacheflush.h    |   10 ++++++++++
 arch/parisc/include/asm/cacheflush.h |    8 ++++++++
 arch/sh/include/asm/cacheflush.h     |    8 ++++++++
 fs/bio.c                             |   20 ++++++++++++++++++--
 fs/xfs/linux-2.6/xfs_buf.c           |   20 ++++++++++++++++++++
 include/linux/highmem.h              |    6 ++++++
 6 files changed, 70 insertions(+), 2 deletions(-)



* [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
  2009-09-17 23:06 [PATCH 0/6] fix xfs by making I/O to vmap/vmalloc areas work James Bottomley
@ 2009-09-17 23:06 ` James Bottomley
  2009-09-17 23:06   ` [PATCH 2/6] parisc: add mm " James Bottomley
  0 siblings, 1 reply; 10+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
	James Bottomley

From: James Bottomley <jejb@external.hp.com>

On Virtually Indexed architectures (which don't do automatic alias
resolution in their caches), we have to flush via the correct
virtual address to prepare pages for DMA.  On some architectures
(like arm) we cannot prevent the CPU from doing data movein along
the alias (and thus returning stale read data), so we not only have
to introduce a flush API to push dirty cache lines out, but also an
invalidate API to kill inconsistent cache lines that may have moved
in before DMA changed the data.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 include/linux/highmem.h |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 211ff44..eb99c70 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -17,6 +17,12 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
 static inline void flush_kernel_dcache_page(struct page *page)
 {
 }
+static inline void flush_kernel_dcache_addr(void *vaddr)
+{
+}
+static inline void invalidate_kernel_dcache_addr(void *vaddr)
+{
+}
 #endif
 
 #include <asm/kmap_types.h>
-- 
1.6.3.3



* [PATCH 2/6] parisc: add mm API for DMA to vmalloc/vmap areas
  2009-09-17 23:06 ` [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas James Bottomley
@ 2009-09-17 23:06   ` James Bottomley
  2009-09-17 23:06     ` [PATCH 3/6] arm: " James Bottomley
  0 siblings, 1 reply; 10+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
	James Bottomley

From: James Bottomley <jejb@external.hp.com>

We already have an API to flush a kernel page along an alias
address, so use it.  The TLB purge prevents the CPU from doing
speculative moveins on the flushed address, so we don't need to
implement an invalidate.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 arch/parisc/include/asm/cacheflush.h |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 7243951..2536a00 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -90,6 +90,14 @@ static inline void flush_kernel_dcache_page(struct page *page)
 {
 	flush_kernel_dcache_page_addr(page_address(page));
 }
+static inline void flush_kernel_dcache_addr(void *addr)
+{
+	flush_kernel_dcache_page_addr(addr);
+}
+static inline void invalidate_kernel_dcache_addr(void *addr)
+{
+	/* nop .. the flush prevents move in until the page is touched */
+}
 
 #ifdef CONFIG_DEBUG_RODATA
 void mark_rodata_ro(void);
-- 
1.6.3.3



* [PATCH 3/6] arm: add mm API for DMA to vmalloc/vmap areas
  2009-09-17 23:06   ` [PATCH 2/6] parisc: add mm " James Bottomley
@ 2009-09-17 23:06     ` James Bottomley
  2009-09-17 23:06       ` [PATCH 4/6] sh: " James Bottomley
  0 siblings, 1 reply; 10+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
	James Bottomley

From: James Bottomley <jejb@external.hp.com>

ARM cannot prevent cache movein, so this patch implements both the
flush and invalidate pieces of the API.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 arch/arm/include/asm/cacheflush.h |   10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 1a711ea..1104ee9 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -436,6 +436,16 @@ static inline void flush_kernel_dcache_page(struct page *page)
 	if ((cache_is_vivt() || cache_is_vipt_aliasing()) && !PageHighMem(page))
 		__cpuc_flush_dcache_page(page_address(page));
 }
+static inline void flush_kernel_dcache_addr(void *addr)
+{
+	if ((cache_is_vivt() || cache_is_vipt_aliasing()))
+		__cpuc_flush_dcache_page(addr);
+}
+static inline void invalidate_kernel_dcache_addr(void *addr)
+{
+	if ((cache_is_vivt() || cache_is_vipt_aliasing()))
+		__cpuc_flush_dcache_page(addr);
+}
 
 #define flush_dcache_mmap_lock(mapping) \
 	spin_lock_irq(&(mapping)->tree_lock)
-- 
1.6.3.3


* [PATCH 4/6] sh: add mm API for DMA to vmalloc/vmap areas
  2009-09-17 23:06     ` [PATCH 3/6] arm: " James Bottomley
@ 2009-09-17 23:06       ` James Bottomley
  2009-09-17 23:07         ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages James Bottomley
  0 siblings, 1 reply; 10+ messages in thread
From: James Bottomley @ 2009-09-17 23:06 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
	James Bottomley

From: James Bottomley <James.Bottomley@HansenPartnership.com>

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 arch/sh/include/asm/cacheflush.h |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h
index 4c5462d..3cb8824 100644
--- a/arch/sh/include/asm/cacheflush.h
+++ b/arch/sh/include/asm/cacheflush.h
@@ -48,6 +48,14 @@ static inline void flush_kernel_dcache_page(struct page *page)
 {
 	flush_dcache_page(page);
 }
+static inline void flush_kernel_dcache_addr(void *addr)
+{
+	__flush_wback_region(addr, PAGE_SIZE);
+}
+static inline void invalidate_kernel_dcache_addr(void *addr)
+{
+	__flush_invalidate_region(addr, PAGE_SIZE);
+}
 
 #if defined(CONFIG_CPU_SH4) && !defined(CONFIG_CACHE_OFF)
 extern void copy_to_user_page(struct vm_area_struct *vma,
-- 
1.6.3.3



* [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages
  2009-09-17 23:06       ` [PATCH 4/6] sh: " James Bottomley
@ 2009-09-17 23:07         ` James Bottomley
  2009-09-17 23:07           ` [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures James Bottomley
  0 siblings, 1 reply; 10+ messages in thread
From: James Bottomley @ 2009-09-17 23:07 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
	James Bottomley

From: James Bottomley <jejb@external.hp.com>

This updates bio_map_kern() to check for pages in the vmalloc address
range and call the new kernel flushing APIs if they are.  This should
allow any kernel user to pass a vmalloc/vmap area to the block layer.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 fs/bio.c |   20 ++++++++++++++++++--
 1 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/fs/bio.c b/fs/bio.c
index 7673800..0cf7b79 100644
--- a/fs/bio.c
+++ b/fs/bio.c
@@ -1120,6 +1120,14 @@ void bio_unmap_user(struct bio *bio)
 
 static void bio_map_kern_endio(struct bio *bio, int err)
 {
+	void *kaddr = bio->bi_private;
+
+	if (is_vmalloc_addr(kaddr)) {
+		int i;
+
+		for (i = 0; i < bio->bi_vcnt; i++)
+			invalidate_kernel_dcache_addr(kaddr + i * PAGE_SIZE);
+	}
 	bio_put(bio);
 }
 
@@ -1138,9 +1146,12 @@ static struct bio *__bio_map_kern(struct request_queue *q, void *data,
 	if (!bio)
 		return ERR_PTR(-ENOMEM);
 
+	bio->bi_private = data;
+
 	offset = offset_in_page(kaddr);
 	for (i = 0; i < nr_pages; i++) {
 		unsigned int bytes = PAGE_SIZE - offset;
+		struct page *page;
 
 		if (len <= 0)
 			break;
@@ -1148,8 +1159,13 @@ static struct bio *__bio_map_kern(struct request_queue *q, void *data,
 		if (bytes > len)
 			bytes = len;
 
-		if (bio_add_pc_page(q, bio, virt_to_page(data), bytes,
-				    offset) < bytes)
+		if (is_vmalloc_addr(data)) {
+			flush_kernel_dcache_addr(data);
+			page = vmalloc_to_page(data);
+		} else
+			page = virt_to_page(data);
+
+		if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes)
 			break;
 
 		data += bytes;
-- 
1.6.3.3



* [PATCH 6/6] xfs: fix xfs to work with Virtually Indexed architectures
  2009-09-17 23:07         ` [PATCH 5/6] block: permit I/O to vmalloc/vmap kernel pages James Bottomley
@ 2009-09-17 23:07           ` James Bottomley
  0 siblings, 0 replies; 10+ messages in thread
From: James Bottomley @ 2009-09-17 23:07 UTC (permalink / raw)
  To: linux-arch, linux-fsdevel, linux-parisc
  Cc: Russell King, Christoph Hellwig, Paul Mundt, James Bottomley,
	James Bottomley

From: James Bottomley <jejb@external.hp.com>

xfs_buf.c includes what is essentially a hand rolled version of
blk_rq_map_kern().  In order to work properly with the vmalloc buffers
that xfs uses, this hand rolled routine must also implement the flushing
API for vmap/vmalloc areas.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
---
 fs/xfs/linux-2.6/xfs_buf.c |   20 ++++++++++++++++++++
 1 files changed, 20 insertions(+), 0 deletions(-)

diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index 965df12..320a6e4 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -1132,12 +1132,26 @@ xfs_buf_bio_end_io(
 	xfs_buf_t		*bp = (xfs_buf_t *)bio->bi_private;
 	unsigned int		blocksize = bp->b_target->bt_bsize;
 	struct bio_vec		*bvec = bio->bi_io_vec + bio->bi_vcnt - 1;
+	void			*vaddr = NULL;
+	int			i;
 
 	xfs_buf_ioerror(bp, -error);
 
+	if (is_vmalloc_addr(bp->b_addr))
+		for (i = 0; i < bp->b_page_count; i++)
+			if (bvec->bv_page == bp->b_pages[i]) {
+				vaddr = bp->b_addr + i*PAGE_SIZE;
+				break;
+			}
+
 	do {
 		struct page	*page = bvec->bv_page;
 
+		if (is_vmalloc_addr(bp->b_addr)) {
+			invalidate_kernel_dcache_addr(vaddr);
+			vaddr -= PAGE_SIZE;
+		}
+
 		ASSERT(!PagePrivate(page));
 		if (unlikely(bp->b_error)) {
 			if (bp->b_flags & XBF_READ)
@@ -1202,6 +1216,9 @@ _xfs_buf_ioapply(
 		bio->bi_end_io = xfs_buf_bio_end_io;
 		bio->bi_private = bp;
 
+		if (is_vmalloc_addr(bp->b_addr))
+			flush_kernel_dcache_addr(bp->b_addr);
+
 		bio_add_page(bio, bp->b_pages[0], PAGE_CACHE_SIZE, 0);
 		size = 0;
 
@@ -1228,6 +1245,9 @@ next_chunk:
 		if (nbytes > size)
 			nbytes = size;
 
+		if (is_vmalloc_addr(bp->b_addr))
+			flush_kernel_dcache_addr(bp->b_addr + PAGE_SIZE*map_i);
+
 		rbytes = bio_add_page(bio, bp->b_pages[map_i], nbytes, offset);
 		if (rbytes < nbytes)
 			break;
-- 
1.6.3.3

