From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from psmtp.com (na3sys010amx204.postini.com [74.125.245.204])
	by kanga.kvack.org (Postfix) with SMTP id 558856B00B9
	for ; Fri, 5 Apr 2013 07:58:24 -0400 (EDT)
From: "Kirill A. Shutemov" 
Subject: [PATCHv3, RFC 18/34] thp, mm: naive support of thp in generic read/write routines
Date: Fri, 5 Apr 2013 14:59:42 +0300
Message-Id: <1365163198-29726-19-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1365163198-29726-1-git-send-email-kirill.shutemov@linux.intel.com>
References: <1365163198-29726-1-git-send-email-kirill.shutemov@linux.intel.com>
Sender: owner-linux-mm@kvack.org
List-ID: 
To: Andrea Arcangeli , Andrew Morton 
Cc: Al Viro , Hugh Dickins , Wu Fengguang , Jan Kara , Mel Gorman ,
 linux-mm@kvack.org, Andi Kleen , Matthew Wilcox ,
 "Kirill A. Shutemov" , Hillf Danton , Dave Hansen ,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
 "Kirill A. Shutemov" 

From: "Kirill A. Shutemov" 

For now, we still read/write at most PAGE_CACHE_SIZE bytes at a time.

This implementation doesn't cover address spaces with backing store.

Signed-off-by: Kirill A. Shutemov 
---
 mm/filemap.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index bcb679c..3296f5c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1161,6 +1161,16 @@ find_page:
 			if (unlikely(page == NULL))
 				goto no_cached_page;
 		}
+		if (PageTransCompound(page)) {
+			struct page *head = compound_trans_head(page);
+			/*
+			 * We don't yet support huge pages in page cache
+			 * for filesystems with backing device, so pages
+			 * should always be up-to-date.
+			 */
+			BUG_ON(PageReadahead(head) || !PageUptodate(head));
+			goto page_ok;
+		}
 		if (PageReadahead(page)) {
 			page_cache_async_readahead(mapping,
 					ra, filp, page,
@@ -2439,8 +2449,13 @@ again:
 		if (mapping_writably_mapped(mapping))
 			flush_dcache_page(page);
 
+		if (PageTransHuge(page))
+			offset = pos & ~HPAGE_PMD_MASK;
+
 		pagefault_disable();
-		copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
+		copied = iov_iter_copy_from_user_atomic(
+				page + (offset >> PAGE_CACHE_SHIFT),
+				i, offset & ~PAGE_CACHE_MASK, bytes);
 		pagefault_enable();
 		flush_dcache_page(page);
@@ -2463,6 +2478,7 @@ again:
 		 * because not all segments in the iov can be copied at
 		 * once without a pagefault.
 		 */
+		offset = pos & ~PAGE_CACHE_MASK;
 		bytes = min_t(unsigned long, PAGE_CACHE_SIZE - offset,
 						iov_iter_single_seg_count(i));
 		goto again;
-- 
1.7.10.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org