public inbox for linux-ext4@vger.kernel.org
From: Hugh Dickins <hughd@google.com>
To: Lukas Czerner <lczerner@redhat.com>
Cc: linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
	tytso@mit.edu, linux-mmc@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 06/15] mm: teach truncate_inode_pages_range() to handle non page aligned ranges
Date: Sun, 19 Aug 2012 21:52:48 -0700 (PDT)
Message-ID: <alpine.LSU.2.00.1208192144260.2390@eggly.anvils>
In-Reply-To: <1343376074-28034-7-git-send-email-lczerner@redhat.com>

On Fri, 27 Jul 2012, Lukas Czerner wrote:

> This commit changes truncate_inode_pages_range() so it can handle
> non-page-aligned regions of the truncate. Currently we can hit a
> BUG_ON() when the end of the range is not page aligned, but we can
> handle an unaligned start of the range.
> 
> Being able to handle non-page-aligned regions of the truncate can
> help file system punch_hole implementations and save some work,
> because once we're holding the page we might as well deal with it
> right away.
> 
> In order for this to work correctly, the caller must register the
> invalidatepage_range address space operation, or rely solely on
> block_invalidatepage_range(). That said, it will BUG_ON() if the
> caller implements invalidatepage(), does not implement
> invalidatepage_range(), and uses truncate_inode_pages_range() with an
> unaligned end of the range.
> 
> This was based on the code provided by Hugh Dickins with some small
> changes to make use of do_invalidatepage_range().
> 
> Signed-off-by: Lukas Czerner <lczerner@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Hugh Dickins <hughd@google.com>

Acked-by: Hugh Dickins <hughd@google.com>

This looks good to me.  I like the way you provide the same args
to do_invalidatepage_range() as to zero_user_segment():

		zero_user_segment(page, partial_start, top);
		if (page_has_private(page))
			do_invalidatepage_range(page, partial_start, top);

Unfortunately, that is not what patches 01-05 are expecting...
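
If I'm reading patches 01-05 right, the range aop there takes an
(offset, length) pair, not the (start, end) pair that
zero_user_segment() takes, so the call above would have to become
something like the following (just a sketch of the two conventions;
the exact signature is whatever patch 01 settles on):

		/* [start, end) offsets, as zero_user_segment() takes them */
		zero_user_segment(page, partial_start, top);
		if (page_has_private(page))
			/* (offset, length), as patches 01-05 expect */
			do_invalidatepage_range(page, partial_start,
						top - partial_start);

Either convention works, but the two halves of the series need to agree.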

Hugh
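
P.S. To spell out the caller requirement from the commit message: a
filesystem opts in by wiring up the new aop next to the old one,
roughly like this (a sketch only; ext4_invalidatepage_range is a
placeholder name here, see patch 03 for the real hookup):

	static const struct address_space_operations ext4_aops = {
		.readpage		= ext4_readpage,
		.writepage		= ext4_writepage,
		.invalidatepage		= ext4_invalidatepage,
		.invalidatepage_range	= ext4_invalidatepage_range,
		.releasepage		= ext4_releasepage,
	};

A filesystem using plain buffer heads can instead skip the aop and
rely on do_invalidatepage_range() falling back to
block_invalidatepage_range(), as the commit message says.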

> ---
>  mm/truncate.c |   77 +++++++++++++++++++++++++++++++++++++--------------------
>  1 files changed, 50 insertions(+), 27 deletions(-)
> 
> diff --git a/mm/truncate.c b/mm/truncate.c
> index e29e5ea..1f6ea8b 100644
> --- a/mm/truncate.c
> +++ b/mm/truncate.c
> @@ -71,14 +71,6 @@ void do_invalidatepage_range(struct page *page, unsigned long offset,
>  #endif
>  }
>  
> -static inline void truncate_partial_page(struct page *page, unsigned partial)
> -{
> -	zero_user_segment(page, partial, PAGE_CACHE_SIZE);
> -	cleancache_invalidate_page(page->mapping, page);
> -	if (page_has_private(page))
> -		do_invalidatepage(page, partial);
> -}
> -
>  /*
>   * This cancels just the dirty bit on the kernel page itself, it
>   * does NOT actually remove dirty bits on any mmap's that may be
> @@ -212,8 +204,8 @@ int invalidate_inode_page(struct page *page)
>   * @lend: offset to which to truncate
>   *
>   * Truncate the page cache, removing the pages that are between
> - * specified offsets (and zeroing out partial page
> - * (if lstart is not page aligned)).
> + * specified offsets (and zeroing out partial pages
> + * if lstart or lend + 1 is not page aligned).
>   *
>   * Truncate takes two passes - the first pass is nonblocking.  It will not
>   * block on page locks and it will not block on writeback.  The second pass
> @@ -224,35 +216,44 @@ int invalidate_inode_page(struct page *page)
>   * We pass down the cache-hot hint to the page freeing code.  Even if the
>   * mapping is large, it is probably the case that the final pages are the most
>   * recently touched, and freeing happens in ascending file offset order.
> + *
> + * Note that it is able to handle cases where lend + 1 is not page aligned.
> + * However in order for this to work caller have to register
> + * invalidatepage_range address space operation or rely solely on
> + * block_invalidatepage_range(). That said, do_invalidatepage_range() will
> + * BUG_ON() if caller implements invalidatapage(), does not implement
                                    invalidatepage()
> + * invalidatepage_range() and uses truncate_inode_pages_range() with lend + 1
> + * unaligned to the page cache size.
>   */
>  void truncate_inode_pages_range(struct address_space *mapping,
>  				loff_t lstart, loff_t lend)
>  {
> -	const pgoff_t start = (lstart + PAGE_CACHE_SIZE-1) >> PAGE_CACHE_SHIFT;
> -	const unsigned partial = lstart & (PAGE_CACHE_SIZE - 1);
> +	pgoff_t start = (lstart + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
> +	pgoff_t end = (lend + 1) >> PAGE_CACHE_SHIFT;
> +	unsigned int partial_start = lstart & (PAGE_CACHE_SIZE - 1);
> +	unsigned int partial_end = (lend + 1) & (PAGE_CACHE_SIZE - 1);
>  	struct pagevec pvec;
>  	pgoff_t index;
> -	pgoff_t end;
>  	int i;
>  
>  	cleancache_invalidate_inode(mapping);
>  	if (mapping->nrpages == 0)
>  		return;
>  
> -	BUG_ON((lend & (PAGE_CACHE_SIZE - 1)) != (PAGE_CACHE_SIZE - 1));
> -	end = (lend >> PAGE_CACHE_SHIFT);
> +	if (lend == -1)
> +		end = -1;	/* unsigned, so actually very big */
>  
>  	pagevec_init(&pvec, 0);
>  	index = start;
> -	while (index <= end && pagevec_lookup(&pvec, mapping, index,
> -			min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1)) {
> +	while (index < end && pagevec_lookup(&pvec, mapping, index,
> +			min(end - index, (pgoff_t)PAGEVEC_SIZE))) {
>  		mem_cgroup_uncharge_start();
>  		for (i = 0; i < pagevec_count(&pvec); i++) {
>  			struct page *page = pvec.pages[i];
>  
>  			/* We rely upon deletion not changing page->index */
>  			index = page->index;
> -			if (index > end)
> +			if (index >= end)
>  				break;
>  
>  			if (!trylock_page(page))
> @@ -271,27 +272,51 @@ void truncate_inode_pages_range(struct address_space *mapping,
>  		index++;
>  	}
>  
> -	if (partial) {
> +	if (partial_start) {
>  		struct page *page = find_lock_page(mapping, start - 1);
>  		if (page) {
> +			unsigned int top = PAGE_CACHE_SIZE;
> +			if (start > end) {
> +				top = partial_end;
> +				partial_end = 0;
> +			}
> +			wait_on_page_writeback(page);
> +			zero_user_segment(page, partial_start, top);
> +			cleancache_invalidate_page(mapping, page);
> +			if (page_has_private(page))
> +				do_invalidatepage_range(page, partial_start,
> +							top);
> +			unlock_page(page);
> +			page_cache_release(page);
> +		}
> +	}
> +	if (partial_end) {
> +		struct page *page = find_lock_page(mapping, end);
> +		if (page) {
>  			wait_on_page_writeback(page);
> -			truncate_partial_page(page, partial);
> +			zero_user_segment(page, 0, partial_end);
> +			cleancache_invalidate_page(mapping, page);
> +			if (page_has_private(page))
> +				do_invalidatepage_range(page, 0,
> +							partial_end);
>  			unlock_page(page);
>  			page_cache_release(page);
>  		}
>  	}
> +	if (start >= end)
> +		return;
>  
>  	index = start;
>  	for ( ; ; ) {
>  		cond_resched();
>  		if (!pagevec_lookup(&pvec, mapping, index,
> -			min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1)) {
> +			min(end - index, (pgoff_t)PAGEVEC_SIZE))) {
>  			if (index == start)
>  				break;
>  			index = start;
>  			continue;
>  		}
> -		if (index == start && pvec.pages[0]->index > end) {
> +		if (index == start && pvec.pages[0]->index >= end) {
>  			pagevec_release(&pvec);
>  			break;
>  		}
> @@ -301,7 +326,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
>  
>  			/* We rely upon deletion not changing page->index */
>  			index = page->index;
> -			if (index > end)
> +			if (index >= end)
>  				break;
>  
>  			lock_page(page);
> @@ -646,10 +671,8 @@ void truncate_pagecache_range(struct inode *inode, loff_t lstart, loff_t lend)
>  	 * This rounding is currently just for example: unmap_mapping_range
>  	 * expands its hole outwards, whereas we want it to contract the hole
>  	 * inwards.  However, existing callers of truncate_pagecache_range are
> -	 * doing their own page rounding first; and truncate_inode_pages_range
> -	 * currently BUGs if lend is not pagealigned-1 (it handles partial
> -	 * page at start of hole, but not partial page at end of hole).  Note
> -	 * unmap_mapping_range allows holelen 0 for all, and we allow lend -1.
> +	 * doing their own page rounding first.  Note that unmap_mapping_range
> +	 * allows holelen 0 for all, and we allow lend -1 for end of file.
>  	 */
>  
>  	/*
> -- 
> 1.7.7.6
> 
> 

Thread overview: 29+ messages
2012-07-27  8:00 Add invalidatepage_range address space operation Lukas Czerner
2012-07-27  8:01 ` [PATCH 01/15] mm: add " Lukas Czerner
2012-08-20  5:24   ` Hugh Dickins
     [not found]     ` <5033a999.0f403a0a.19c3.ffff95deSMTPIN_ADDED@mx.google.com>
2012-08-21 18:59       ` Hugh Dickins
2012-07-27  8:01 ` [PATCH 02/15] jbd2: implement jbd2_journal_invalidatepage_range Lukas Czerner
2012-07-27  8:01 ` [PATCH 03/15] ext4: implement invalidatepage_range aop Lukas Czerner
2012-07-27  8:01 ` [PATCH 04/15] xfs: " Lukas Czerner
2012-07-27  8:01 ` [PATCH 05/15] ocfs2: " Lukas Czerner
2012-07-27  8:01 ` [PATCH 06/15] mm: teach truncate_inode_pages_range() to handle non page aligned ranges Lukas Czerner
2012-08-20  4:52   ` Hugh Dickins [this message]
2012-08-20 10:26     ` Lukáš Czerner
2012-08-20 15:47       ` Hugh Dickins
     [not found]         ` <50339e0d.69b2340a.50ba.ffff92bcSMTPIN_ADDED@mx.google.com>
2012-08-21 18:44           ` Hugh Dickins
2012-08-20 15:53       ` Hugh Dickins
2012-07-27  8:01 ` [PATCH 07/15] ext4: Take i_mutex before punching hole Lukas Czerner
2012-07-27  9:04   ` Lukáš Czerner
2012-07-27 12:08     ` Zheng Liu
2012-08-20  5:45   ` Hugh Dickins
2012-07-27  8:01 ` [PATCH 08/15] Revert "ext4: remove no longer used functions in inode.c" Lukas Czerner
2012-07-27  8:01 ` [PATCH 09/15] Revert "ext4: fix fsx truncate failure" Lukas Czerner
2012-07-27  8:01 ` [PATCH 10/15] ext4: use ext4_zero_partial_blocks in punch_hole Lukas Czerner
2012-07-27  8:01 ` [PATCH 11/15] ext4: remove unused discard_partial_page_buffers Lukas Czerner
2012-07-27  8:01 ` [PATCH 12/15] ext4: remove unused code from ext4_remove_blocks() Lukas Czerner
2012-07-27  8:01 ` [PATCH 13/15] ext4: update ext4_ext_remove_space trace point Lukas Czerner
2012-07-27  8:01 ` [PATCH 14/15] ext4: make punch hole code path work with bigalloc Lukas Czerner
2012-07-27  8:01 ` [PATCH 15/15] ext4: Allow punch hole with bigalloc enabled Lukas Czerner
2012-08-19  0:57 ` Add invalidatepage_range address space operation Theodore Ts'o
2012-08-20  4:43   ` Hugh Dickins
2012-08-20 13:23     ` Theodore Ts'o
