From: Jaegeuk Kim <jaegeuk.kim@samsung.com>
To: Chao Yu <chao2.yu@samsung.com>
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, 谭姝 <shu.tan@samsung.com>
Subject: Re: [f2fs-dev] [PATCH] f2fs: readahead contiguous pages for restore_node_summary
Date: Wed, 27 Nov 2013 14:29:55 +0900	[thread overview]
Message-ID: <1385530195.2417.22.camel@kjgkr> (raw)
In-Reply-To: <000101cee757$6f3b4790$4db1d6b0$@samsung.com>

Hi Chao,

It seems that we already have a readahead function for node pages,
ra_node_page().
So, rather than building a private page list for this, we can use the
node_inode's page cache.

How about writing a ra_node_pages() which uses the node_inode's page
cache?

Thanks,

2013-11-22 (Fri), 15:48 +0800, Chao Yu:
> If cp has no CP_UMOUNT_FLAG, we read all pages in the whole node segment
> one by one, which gives low performance. So let's merge contiguous pages
> and readahead them for better performance.
> 
> Signed-off-by: Chao Yu <chao2.yu@samsung.com>
> ---
>  fs/f2fs/node.c |   89 +++++++++++++++++++++++++++++++++++++++-----------------
>  1 file changed, 63 insertions(+), 26 deletions(-)
> 
> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> index 4ac4150..81e704a 100644
> --- a/fs/f2fs/node.c
> +++ b/fs/f2fs/node.c
> @@ -1572,47 +1572,84 @@ int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
>  	return 0;
>  }
>  
> +/*
> + * ra_sum_pages() merges contiguous pages into one bio and submits it.
> + * These pre-read pages are linked in the pages list.
> + */
> +static int ra_sum_pages(struct f2fs_sb_info *sbi, struct list_head *pages,
> +				int start, int nrpages)
> +{
> +	struct page *page;
> +	int page_idx = start;
> +
> +	for (; page_idx < start + nrpages; page_idx++) {
> +		/* alloc temporary page to read node summary info */
> +		page = alloc_page(GFP_NOFS | __GFP_ZERO);
> +		if (!page) {
> +			struct page *tmp;
> +			list_for_each_entry_safe(page, tmp, pages, lru) {
> +				list_del(&page->lru);
> +				unlock_page(page);
> +				__free_pages(page, 0);
> +			}
> +			return -ENOMEM;
> +		}
> +
> +		lock_page(page);
> +		page->index = page_idx;
> +		list_add_tail(&page->lru, pages);
> +	}
> +
> +	list_for_each_entry(page, pages, lru)
> +		submit_read_page(sbi, page, page->index, READ_SYNC);
> +
> +	f2fs_submit_read_bio(sbi, READ_SYNC);
> +	return 0;
> +}
> +
>  int restore_node_summary(struct f2fs_sb_info *sbi,
>  			unsigned int segno, struct f2fs_summary_block *sum)
>  {
>  	struct f2fs_node *rn;
>  	struct f2fs_summary *sum_entry;
> -	struct page *page;
> +	struct page *page, *tmp;
>  	block_t addr;
> -	int i, last_offset;
> -
> -	/* alloc temporal page for read node */
> -	page = alloc_page(GFP_NOFS | __GFP_ZERO);
> -	if (!page)
> -		return -ENOMEM;
> -	lock_page(page);
> +	int bio_blocks = MAX_BIO_BLOCKS(max_hw_blocks(sbi));
> +	int i, last_offset, nrpages, err = 0;
> +	LIST_HEAD(page_list);
>  
>  	/* scan the node segment */
>  	last_offset = sbi->blocks_per_seg;
>  	addr = START_BLOCK(sbi, segno);
>  	sum_entry = &sum->entries[0];
>  
> -	for (i = 0; i < last_offset; i++, sum_entry++) {
> -		/*
> -		 * In order to read next node page,
> -		 * we must clear PageUptodate flag.
> -		 */
> -		ClearPageUptodate(page);
> +	for (i = 0; i < last_offset; i += nrpages, addr += nrpages) {
>  
> -		if (f2fs_readpage(sbi, page, addr, READ_SYNC))
> -			goto out;
> +		nrpages = min(last_offset - i, bio_blocks);
> +		/* read ahead node pages */
> +		err = ra_sum_pages(sbi, &page_list, addr, nrpages);
> +		if (err)
> +			return err;
>  
> -		lock_page(page);
> -		rn = F2FS_NODE(page);
> -		sum_entry->nid = rn->footer.nid;
> -		sum_entry->version = 0;
> -		sum_entry->ofs_in_node = 0;
> -		addr++;
> +		list_for_each_entry_safe(page, tmp, &page_list, lru) {
> +
> +			lock_page(page);
> +			if (PageUptodate(page)) {
> +				rn = F2FS_NODE(page);
> +				sum_entry->nid = rn->footer.nid;
> +				sum_entry->version = 0;
> +				sum_entry->ofs_in_node = 0;
> +				sum_entry++;
> +			} else {
> +				err = -EIO;
> +			}
> +
> +			list_del(&page->lru);
> +			unlock_page(page);
> +			__free_pages(page, 0);
> +		}
>  	}
> -	unlock_page(page);
> -out:
> -	__free_pages(page, 0);
> -	return 0;
> +	return err;
>  }
>  
>  static bool flush_nats_in_journal(struct f2fs_sb_info *sbi)

-- 
Jaegeuk Kim
Samsung


Thread overview: 8 messages
2013-11-22  7:48 [PATCH] f2fs: readahead contiguous pages for restore_node_summary Chao Yu
2013-11-27  5:29 ` Jaegeuk Kim [this message]
2013-11-27  7:58   ` Chao Yu
2013-11-27  8:19     ` [f2fs-dev] " Jaegeuk Kim
2013-11-28  1:26       ` Chao Yu
2013-11-28  3:33         ` [f2fs-dev] " Jaegeuk Kim
2013-11-28  5:56           ` Chao Yu
2013-11-30  5:12             ` [f2fs-dev] " Jaegeuk Kim
