From: Akinobu Mita <akinobu.mita@gmail.com>
To: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, arnd@arndb.de
Cc: Akinobu Mita <akinobu.mita@gmail.com>,
Mikael Starvik <starvik@axis.com>,
Jesper Nilsson <jesper.nilsson@axis.com>,
linux-cris-kernel@axis.com
Subject: [PATCH] cris: use asm-generic/cacheflush.h
Date: Thu, 20 Jan 2011 20:32:16 +0900
Message-ID: <1295523136-4277-4-git-send-email-akinobu.mita@gmail.com>
In-Reply-To: <1295523136-4277-1-git-send-email-akinobu.mita@gmail.com>

The implementation of the cache flushing interfaces on cris is identical
to the default implementation in asm-generic.
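
For reference, the generic header provides essentially the same no-op
definitions that are removed below (abbreviated excerpt from
include/asm-generic/cacheflush.h, quoted from memory, so line-for-line
details may differ slightly):

  /* Keep includes the same across arches. */
  #include <linux/mm.h>

  #define flush_cache_all()			do { } while (0)
  #define flush_cache_mm(mm)			do { } while (0)
  #define flush_cache_range(vma, start, end)	do { } while (0)
  #define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
  #define flush_dcache_page(page)		do { } while (0)
  #define flush_icache_range(start, end)	do { } while (0)
  ...
  #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
  	do { \
  		memcpy(dst, src, len); \
  		flush_icache_user_range(vma, page, vaddr, len); \
  	} while (0)
  #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
  	memcpy(dst, src, len)

The only textual difference is copy_to_user_page(): the generic version
also invokes flush_icache_user_range(), which is a no-op on cris, so the
resulting behaviour should be unchanged.
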
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: linux-cris-kernel@axis.com
---
arch/cris/include/asm/cacheflush.h | 23 +----------------------
1 files changed, 1 insertions(+), 22 deletions(-)
diff --git a/arch/cris/include/asm/cacheflush.h b/arch/cris/include/asm/cacheflush.h
index 36795bc..fa698e7 100644
--- a/arch/cris/include/asm/cacheflush.h
+++ b/arch/cris/include/asm/cacheflush.h
@@ -1,31 +1,10 @@
#ifndef _CRIS_CACHEFLUSH_H
#define _CRIS_CACHEFLUSH_H
-/* Keep includes the same across arches. */
-#include <linux/mm.h>
-
/* The cache doesn't need to be flushed when TLB entries change because
* the cache is mapped to physical memory, not virtual memory
*/
-#define flush_cache_all() do { } while (0)
-#define flush_cache_mm(mm) do { } while (0)
-#define flush_cache_dup_mm(mm) do { } while (0)
-#define flush_cache_range(vma, start, end) do { } while (0)
-#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
-#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0
-#define flush_dcache_page(page) do { } while (0)
-#define flush_dcache_mmap_lock(mapping) do { } while (0)
-#define flush_dcache_mmap_unlock(mapping) do { } while (0)
-#define flush_icache_range(start, end) do { } while (0)
-#define flush_icache_page(vma,pg) do { } while (0)
-#define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
-#define flush_cache_vmap(start, end) do { } while (0)
-#define flush_cache_vunmap(start, end) do { } while (0)
-
-#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
- memcpy(dst, src, len)
-#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
- memcpy(dst, src, len)
+#include <asm-generic/cacheflush.h>
int change_page_attr(struct page *page, int numpages, pgprot_t prot);
--
1.7.3.4