From: Chris Mason <chris.mason@fusionio.com>
To: Liu Bo <liubo2009@cn.fujitsu.com>
Cc: "linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>,
Josef Bacik <JBacik@fusionio.com>
Subject: Re: [PATCH v2] Btrfs: improve multi-thread buffer read
Date: Thu, 12 Jul 2012 14:04:02 -0400
Message-ID: <20120712180402.GA30128@shiny>
In-Reply-To: <1342059231-20301-1-git-send-email-liubo2009@cn.fujitsu.com>
On Wed, Jul 11, 2012 at 08:13:51PM -0600, Liu Bo wrote:
> While testing with my buffered read fio jobs[1], I found that btrfs does not
> perform well enough.
>
> Here is a scenario in fio jobs:
>
> We have 4 threads, "t1 t2 t3 t4", which start buffered reads on the same file.
> All of them race on add_to_page_cache_lru(), and whichever thread successfully
> puts its page into the page cache takes responsibility for reading that
> page's data.
>
> What's more, reading a page takes a while to finish, during which the
> other threads can slide in and process the remaining pages:
>
>     t1           t2          t3          t4
>  add Page1
>  read Page1   add Page2
>      |        read Page2  add Page3
>      |            |       read Page3  add Page4
>      |            |           |       read Page4
> -----|------------|-----------|-----------|--------
>      v            v           v           v
>     bio          bio         bio         bio
>
> Now we have four bios, each of which holds only one page, since a bio can
> only hold pages that are consecutive. Thus, we end up with far more bios
> than we need.
>
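For reference, the pre-patch read path looks roughly like this (a paraphrase of the loop in extent_readpages() in fs/btrfs/extent_io.c, with prefetching and error handling trimmed): whoever wins add_to_page_cache_lru() reads its page immediately, so pages claimed by different threads never end up in the same bio.

        for (page_idx = 0; page_idx < nr_pages; page_idx++) {
                struct page *page = list_entry(pages->prev, struct page, lru);

                list_del(&page->lru);
                if (!add_to_page_cache_lru(page, mapping,
                                           page->index, GFP_NOFS)) {
                        /* we won the race, so read this one page right now */
                        __extent_read_full_page(tree, page, get_extent,
                                                &bio, 0, &bio_flags);
                }
                page_cache_release(page);
        }
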
> Here we're going to
> a) delay the real read-page step, and
> b) try to put more pages into the page cache first.
>
> That way each bio can hold more pages, and we reduce the total number of
> bios we need.
>
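As a rough illustration of that plan (a sketch only, not the patch itself; read_one_page() is a hypothetical stand-in for the real __extent_read_full_page() call, and the usual prefetch and locking details are left out), the read path would first claim pages and only then issue the reads:

        #include <linux/list.h>
        #include <linux/pagemap.h>
        #include <linux/slab.h>

        /* hypothetical stand-in for the real per-page read path */
        extern void read_one_page(struct page *page);

        struct pagelst {
                struct page *page;
                struct list_head lst;
        };

        static void readpages_delayed(struct address_space *mapping,
                                      struct list_head *pages,
                                      unsigned nr_pages)
        {
                LIST_HEAD(pagelst_head);
                struct pagelst *entry, *tmp;
                unsigned i;

                /* pass 1: claim as many pages in the page cache as we can */
                for (i = 0; i < nr_pages; i++) {
                        struct page *page = list_entry(pages->prev,
                                                       struct page, lru);

                        list_del(&page->lru);
                        if (add_to_page_cache_lru(page, mapping, page->index,
                                                  GFP_NOFS)) {
                                /* another thread already owns this page */
                                page_cache_release(page);
                                continue;
                        }

                        entry = kmalloc(sizeof(*entry), GFP_NOFS);
                        if (!entry) {
                                /* allocation failed: read it the old way */
                                read_one_page(page);
                                page_cache_release(page);
                                continue;
                        }
                        entry->page = page;
                        list_add_tail(&entry->lst, &pagelst_head);
                }

                /* pass 2: issue the reads back to back so bios can merge */
                list_for_each_entry_safe(entry, tmp, &pagelst_head, lst) {
                        read_one_page(entry->page);
                        page_cache_release(entry->page);
                        list_del(&entry->lst);
                        kfree(entry);
                }
        }

Because the reads for the claimed pages are issued consecutively, the existing bio submission code can keep merging them into the same bio instead of sending one bio per page.
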
> Here are some numbers taken from the fio results:
>            w/o patch                 w/ patch
>        -------------  --------  ---------------
> READ:     745MB/s       +32%        987MB/s
>
> [1]:
> [global]
> group_reporting
> thread
> numjobs=4
> bs=32k
> rw=read
> ioengine=sync
> directory=/mnt/btrfs/
>
> [READ]
> filename=foobar
> size=2000M
> invalidate=1
>
> Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
> ---
> v1->v2: if we fail to make an allocation, just fall back to the old way of
> reading the page.
> fs/btrfs/extent_io.c | 41 +++++++++++++++++++++++++++++++++++++++--
> 1 files changed, 39 insertions(+), 2 deletions(-)
>
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 01c21b6..5c8ab6c 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -3549,6 +3549,11 @@ int extent_writepages(struct extent_io_tree *tree,
> return ret;
> }
>
> +struct pagelst {
> + struct page *page;
> + struct list_head lst;
> +};
> +
I like this patch, it's a big improvement for just a little timing
change. Instead of doing the kmalloc of this struct, can you please
change it to put a pagevec on the stack?
The model would be:

	add a page to the pagevec array
	if pagevec full
		launch all the readpages

This lets you avoid the kmalloc, and it is closer to how we solve
similar problems in other parts of the kernel.
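
Something like this untested sketch, which leans on the stock pagevec helpers from linux/pagevec.h (pagevec_add() returns the space left, so zero means the vec is full); read_one_page() is again only a placeholder for the real read path:

        #include <linux/list.h>
        #include <linux/pagemap.h>
        #include <linux/pagevec.h>

        /* hypothetical stand-in for the real per-page read path */
        extern void read_one_page(struct page *page);

        static void read_pagevec(struct pagevec *pvec)
        {
                unsigned i;

                /*
                 * Submit the whole batch back to back so consecutive pages
                 * can share a bio, then drop the readahead reference.
                 */
                for (i = 0; i < pagevec_count(pvec); i++) {
                        read_one_page(pvec->pages[i]);
                        page_cache_release(pvec->pages[i]);
                }
                pagevec_reinit(pvec);
        }

        static void readpages_pagevec(struct address_space *mapping,
                                      struct list_head *pages,
                                      unsigned nr_pages)
        {
                struct pagevec pvec;
                unsigned i;

                pagevec_init(&pvec, 0);

                for (i = 0; i < nr_pages; i++) {
                        struct page *page = list_entry(pages->prev,
                                                       struct page, lru);

                        list_del(&page->lru);
                        if (add_to_page_cache_lru(page, mapping, page->index,
                                                  GFP_NOFS)) {
                                /* someone else beat us to this page */
                                page_cache_release(page);
                                continue;
                        }

                        /* pagevec_add() returns the space left; 0 == full */
                        if (!pagevec_add(&pvec, page))
                                read_pagevec(&pvec);
                }

                /* launch whatever is left over in the vec */
                if (pagevec_count(&pvec))
                        read_pagevec(&pvec);
        }

PAGEVEC_SIZE is 14, so each batch still gives the bio code a decent run of consecutive pages, and nothing has to be allocated in the read path.
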
Thanks!
-chris