From mboxrd@z Thu Jan  1 00:00:00 1970
From: gmbnomis@gmail.com (Simon Baatz)
Date: Sun, 7 Oct 2012 13:29:12 +0200
Subject: [PATCH V3 2/2] ARM: Handle user space mapped pages in flush_kernel_dcache_page
In-Reply-To: <1349609352-6408-1-git-send-email-gmbnomis@gmail.com>
References: <1349609352-6408-1-git-send-email-gmbnomis@gmail.com>
Message-ID: <1349609352-6408-3-git-send-email-gmbnomis@gmail.com>
To: linux-arm-kernel@lists.infradead.org
List-Id: linux-arm-kernel.lists.infradead.org

Commit f8b63c1 made flush_kernel_dcache_page() a no-op assuming that
the pages it needs to handle are kernel mapped only.  However, for
example when doing direct I/O, pages with user space mappings may
occur.

Thus, do lazy flushing like in flush_dcache_page() if there are no
user space mappings.  Otherwise, flush the kernel cache lines
directly.

Signed-off-by: Simon Baatz <gmbnomis@gmail.com>
Cc: Catalin Marinas
Cc: Russell King
---
Changes:

in V3:
- Followed Catalin's suggestion to reverse the order of the patches

in V2:
- flush_kernel_dcache_page() follows flush_dcache_page() now, except
  that it does not flush the user mappings

 arch/arm/include/asm/cacheflush.h |    4 ++++
 arch/arm/mm/flush.c               |   42 +++++++++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+)

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index e4448e1..eca955f 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -307,6 +307,10 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
 #define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
 static inline void flush_kernel_dcache_page(struct page *page)
 {
+	extern void __flush_kernel_dcache_page(struct page *);
+	/* highmem pages are always flushed upon kunmap already */
+	if (!PageHighMem(page))
+		__flush_kernel_dcache_page(page);
 }
 
 #define flush_dcache_mmap_lock(mapping) \
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 5c474a1..59ad4fc 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -192,6 +192,48 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
 			page->index << PAGE_CACHE_SHIFT);
 }
 
+/*
+ * Ensure cache coherency for the kernel mapping of this page.
+ *
+ * If the page only exists in the page cache and there are no user
+ * space mappings, we can be lazy and remember that we may have dirty
+ * kernel cache lines for later.  Otherwise, we need to flush the
+ * dirty kernel cache lines directly.
+ *
+ * Note that we disable the lazy flush for SMP configurations where
+ * the cache maintenance operations are not automatically broadcast.
+ *
+ * We can assume that the page is not a highmem page; see
+ * flush_kernel_dcache_page().
+ */
+void __flush_kernel_dcache_page(struct page *page)
+{
+	struct address_space *mapping;
+
+	/*
+	 * The zero page is never written to, so never has any dirty
+	 * cache lines, and therefore never needs to be flushed.
+	 */
+	if (page == ZERO_PAGE(0))
+		return;
+
+	mapping = page_mapping(page);
+
+	if (!cache_ops_need_broadcast()) {
+		if ((mapping && !mapping_mapped(mapping)) ||
+		    (!mapping && cache_is_vipt_nonaliasing())) {
+			clear_bit(PG_dcache_clean, &page->flags);
+			return;
+		}
+	}
+
+	__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
+	if (mapping && !cache_is_vivt())
+		__flush_icache_all();
+	set_bit(PG_dcache_clean, &page->flags);
+}
+EXPORT_SYMBOL(__flush_kernel_dcache_page);
+
 static void __flush_dcache_aliases(struct address_space *mapping, struct page *page)
 {
 	struct mm_struct *mm = current->active_mm;
-- 
1.7.9.5