From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: "Theodore Ts'o" <tytso@mit.edu>,
Andreas Dilger <adilger.kernel@dilger.ca>,
Jan Kara <jack@suse.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>,
Hugh Dickins <hughd@google.com>,
Andrea Arcangeli <aarcange@redhat.com>,
Dave Hansen <dave.hansen@intel.com>,
Vlastimil Babka <vbabka@suse.cz>,
Matthew Wilcox <willy@infradead.org>,
Ross Zwisler <ross.zwisler@linux.intel.com>,
linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-block@vger.kernel.org,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCHv2 24/41] fs: make block_write_{begin,end}() be able to handle huge pages
Date: Fri, 12 Aug 2016 21:38:07 +0300
Message-ID: <1471027104-115213-25-git-send-email-kirill.shutemov@linux.intel.com>
In-Reply-To: <1471027104-115213-1-git-send-email-kirill.shutemov@linux.intel.com>
It's more or less straightforward.
Most of the changes are about getting the offset/len within the page
right and zeroing out the desired part of the page.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
fs/buffer.c | 53 +++++++++++++++++++++++++++++++----------------------
1 file changed, 31 insertions(+), 22 deletions(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 2739f5dae690..7f50e5a63670 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1870,21 +1870,21 @@ void page_zero_new_buffers(struct page *page, unsigned from, unsigned to)
do {
block_end = block_start + bh->b_size;
- if (buffer_new(bh)) {
- if (block_end > from && block_start < to) {
- if (!PageUptodate(page)) {
- unsigned start, size;
+ if (buffer_new(bh) && block_end > from && block_start < to) {
+ if (!PageUptodate(page)) {
+ unsigned start, size;
- start = max(from, block_start);
- size = min(to, block_end) - start;
+ start = max(from, block_start);
+ size = min(to, block_end) - start;
- zero_user(page, start, size);
- set_buffer_uptodate(bh);
- }
-
- clear_buffer_new(bh);
- mark_buffer_dirty(bh);
+ zero_user(page + block_start / PAGE_SIZE,
+ start % PAGE_SIZE,
+ size);
+ set_buffer_uptodate(bh);
}
+
+ clear_buffer_new(bh);
+ mark_buffer_dirty(bh);
}
block_start = block_end;
@@ -1950,18 +1950,20 @@ iomap_to_bh(struct inode *inode, sector_t block, struct buffer_head *bh,
int __block_write_begin_int(struct page *page, loff_t pos, unsigned len,
get_block_t *get_block, struct iomap *iomap)
{
- unsigned from = pos & (PAGE_SIZE - 1);
- unsigned to = from + len;
- struct inode *inode = page->mapping->host;
+ unsigned from, to;
+ struct inode *inode = page_mapping(page)->host;
unsigned block_start, block_end;
sector_t block;
int err = 0;
unsigned blocksize, bbits;
struct buffer_head *bh, *head, *wait[2], **wait_bh=wait;
+ page = compound_head(page);
+ from = pos & ~hpage_mask(page);
+ to = from + len;
BUG_ON(!PageLocked(page));
- BUG_ON(from > PAGE_SIZE);
- BUG_ON(to > PAGE_SIZE);
+ BUG_ON(from > hpage_size(page));
+ BUG_ON(to > hpage_size(page));
BUG_ON(from > to);
head = create_page_buffers(page, inode, 0);
@@ -2001,10 +2003,15 @@ int __block_write_begin_int(struct page *page, loff_t pos, unsigned len,
mark_buffer_dirty(bh);
continue;
}
- if (block_end > to || block_start < from)
- zero_user_segments(page,
- to, block_end,
- block_start, from);
+ if (block_end > to || block_start < from) {
+ BUG_ON(to - from > PAGE_SIZE);
+ zero_user_segments(page +
+ block_start / PAGE_SIZE,
+ to % PAGE_SIZE,
+ (block_start % PAGE_SIZE) + blocksize,
+ block_start % PAGE_SIZE,
+ from % PAGE_SIZE);
+ }
continue;
}
}
@@ -2048,6 +2055,7 @@ static int __block_commit_write(struct inode *inode, struct page *page,
unsigned blocksize;
struct buffer_head *bh, *head;
+ VM_BUG_ON_PAGE(PageTail(page), page);
bh = head = page_buffers(page);
blocksize = bh->b_size;
@@ -2114,7 +2122,8 @@ int block_write_end(struct file *file, struct address_space *mapping,
struct inode *inode = mapping->host;
unsigned start;
- start = pos & (PAGE_SIZE - 1);
+ page = compound_head(page);
+ start = pos & ~hpage_mask(page);
if (unlikely(copied < len)) {
/*
--
2.8.1