From mboxrd@z Thu Jan  1 00:00:00 1970
From: James Bottomley
Subject: [PATCH 1/6] mm: add coherence API for DMA to vmalloc/vmap areas
Date: Tue, 17 Nov 2009 11:03:47 -0600
Message-ID: <1258477432-2513-2-git-send-email-James.Bottomley@suse.de>
References: <1258477432-2513-1-git-send-email-James.Bottomley@suse.de>
In-Reply-To: <1258477432-2513-1-git-send-email-James.Bottomley@suse.de>
Sender: linux-parisc-owner@vger.kernel.org
To: linux-arch@vger.kernel.org, linux-parisc@vger.kernel.org
Cc: James Bottomley
List-Id: linux-arch.vger.kernel.org

On Virtually Indexed architectures (which don't do automatic alias
resolution in their caches), we have to flush via the correct virtual
address to prepare pages for DMA.  On some architectures (like arm) we
cannot prevent the CPU from doing data move-in along the alias (and thus
giving stale read data), so we not only have to introduce a flush API
to push dirty cache lines out, but also an invalidate API to kill
inconsistent cache lines that may have moved in before DMA changed the
data.

Signed-off-by: James Bottomley
---
 include/linux/highmem.h |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 211ff44..eb99c70 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -17,6 +17,12 @@ static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page
 static inline void flush_kernel_dcache_page(struct page *page)
 {
 }
+static inline void flush_kernel_dcache_addr(void *vaddr)
+{
+}
+static inline void invalidate_kernel_dcache_addr(void *vaddr)
+{
+}
 #endif

 #include <asm/kmap_types.h>
-- 
1.6.3.3