From: NeilBrown <neilb@suse.de>
To: Shaohua Li <shli@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>, linux-raid@vger.kernel.org
Subject: Re: [RFC]raid5: add an option to avoid copy data from bio to stripe cache
Date: Wed, 21 May 2014 17:01:12 +1000
Message-ID: <20140521170112.16ae2d11@notabene.brown>
In-Reply-To: <20140429111358.GA10584@kernel.org>
On Tue, 29 Apr 2014 19:13:58 +0800 Shaohua Li <shli@kernel.org> wrote:
> On Tue, Apr 29, 2014 at 05:07:25PM +1000, NeilBrown wrote:
> > On Tue, 29 Apr 2014 10:01:24 +0800 Shaohua Li <shli@kernel.org> wrote:
> >
> > > On Mon, Apr 28, 2014 at 08:44:07PM +1000, NeilBrown wrote:
> > > > On Mon, 28 Apr 2014 03:17:48 -0700 Christoph Hellwig <hch@infradead.org>
> > > > wrote:
> > > >
> > > > > On Mon, Apr 28, 2014 at 02:58:41PM +0800, Shaohua Li wrote:
> > > > > >
> > > > > > The stripe cache has two goals:
> > > > > > 1. cache data, so that next time, if the data can be found in the stripe cache,
> > > > > > disk access can be avoided.
> > > > >
> > > > > I think this is mostly a side effect. We have a much larger and better
> > > > > tuned page cache to take care of this.
> > > > >
> > > > > > 2. stable data. Data is copied from the bio to the stripe cache and parity is
> > > > > > calculated from it. Data written to disk comes from the stripe cache, so if the
> > > > > > upper layer changes the bio data, the data written to disk isn't impacted.
> > > > > >
> > > > > > In my environment, I can guarantee 2 will not happen.
> > > > >
> > > > > Why just in your environment? Now that we've got stable pages in the page
> > > > > cache this should always be the case.
> > > >
> > > > Hmm... I hadn't realised that we were guaranteed stable pages always (if
> > > > requested). It seems that we are.
> > > >
> > > > >
> > > > > > Of course, this shouldn't be enabled by default, so I added an option to
> > > > > > control it.
> > > > >
> > > > > Unless careful benchmarking in various scenarios shows adverse effects
> > > > > this should be the default. And if we can find adverse effects we need
> > > > > to look into them.
> > > >
> > > > Certainly some benchmarking is needed.
> > > >
> > > > We should set
> > > >
> > > > mddev->queue->backing_dev_info.capabilities |= BDI_CAP_STABLE_WRITES
> > > >
> > > > if and only if 'skip_copy' is set. Then test various cases just to confirm
> > > > that it is generally an improvement.
> > >
> > > IIRC, we switched from 'force wait page writeback' to 'wait page writeback if
> > > required' because of performance issues reported, so we shouldn't always enable
> > > BDI_CAP_STABLE_WRITES. Is it safe to set/clear BDI_CAP_STABLE_WRITES at
> > > runtime, if we use 'skip_copy' to control it? Of course, we don't need to change
> > > the setting at runtime, but we need a mechanism to set it up before the array runs.
> >
> > So for md/RAID5 the trade off is:
> > - If we set BDI_CAP_STABLE_WRITES then processes might sometimes have to wait
> > for the writeout to complete where otherwise they would not
> > - If we don't then RAID5 *always* has to copy the page into the stripe cache.
> >
> > It isn't at all clear to me which is best. It is very possible that copying
> > costs a lot. But then waiting for read-modify-write cycles can be a real
> > cost too....
> >
> > I think it is perfectly safe to change BDI_CAP_STABLE_WRITES while the array
> > is suspended. So
> > mddev_suspend(mddev);
> > change BDI_CAP_STABLE_WRITES
> > mddev_resume(mddev);
> >
> > should be safe.
>
> sounds good.
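
For what it's worth, the store side of 'skip_copy' could then look roughly like
the sketch below. This is only an outline of what I have in mind; the function
name, error handling and exact placement are guesses, not taken from your patch:

static ssize_t
raid5_store_skip_copy(struct mddev *mddev, const char *page, size_t len)
{
	struct r5conf *conf = mddev->private;
	unsigned long new;

	if (len >= PAGE_SIZE)
		return -EINVAL;
	if (!conf)
		return -ENODEV;
	if (kstrtoul(page, 10, &new))
		return -EINVAL;
	new = !!new;
	if (new == conf->skip_copy)
		return len;

	/* Quiesce the array so no stripe is in flight while the
	 * stable-pages requirement changes underneath it. */
	mddev_suspend(mddev);
	conf->skip_copy = new;
	if (new)
		mddev->queue->backing_dev_info.capabilities |=
						BDI_CAP_STABLE_WRITES;
	else
		mddev->queue->backing_dev_info.capabilities &=
						~BDI_CAP_STABLE_WRITES;
	mddev_resume(mddev);
	return len;
}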
>
> > >
> > > As for performance, 'skip_copy' is very helpful (> 30% boost) for my raid5
> > > array (with 6 fast PCIe SSDs) for a 1M request size workload. Nothing changed
> > > for the 4k randwrite workload.
> >
> > It would be really good to see comparisons for sequential and random loads on
> > various filesystems with both rotating and SSD devices, in RAID5 and RAID6,
> > with various numbers of devices.
> > :-)
> >
> > If you'd like to update your patch to adjust BDI_CAP_STABLE_WRITES when
> > skip_copy is changed, I'll apply it so that people can test it.
>
> Here it is.
I've removed this patch for now. It causes a nasty crash when running the
07changelevels test in the mdadm test suite.
First we get
kernel: [ 8282.822194] WARNING: CPU: 0 PID: 16377 at /home/git/md/drivers/md/raid5.c:1404 raid_run
which is
+ BUG_ON(sh->dev[i].page != sh->dev[i].orig_page);
which I changed to WARN_ON,
then
kernel: [ 8284.116364] BUG: sleeping function called from invalid context at /home/git/md/kernel/locking/rwsem.c:20
kernel: [ 8284.116369] in_atomic(): 1, irqs_disabled(): 0, pid: 16377, name: md0_raid5
kernel: [ 8284.116372] INFO: lockdep is turned off.
kernel: [ 8284.116379] Preemption disabled at:[<ffffffff81a63de4>] handle_stripe_expansion+0x134/0x1e0
kernel: [ 8284.116380]
kernel: [ 8284.116385] CPU: 0 PID: 16377 Comm: md0_raid5 Tainted: G D W 3.15.0-rc5+ #855
kernel: [ 8284.116388] Hardware name: HP ProLiant ML310 G3, BIOS W02 04/17/2006
kernel: [ 8284.116400] ffff8800d25641d0 ffff8800b7403888 ffffffff81c62893 0000000000000000
kernel: [ 8284.116409] ffff8800b74038b0 ffffffff810c4dea ffff88014091d410 ffff88014091d470
kernel: [ 8284.116415] ffff8800d25641d0 ffff8800b74038d8 ffffffff81c716e5 ffff8800d25641d0
kernel: [ 8284.116416] Call Trace:
kernel: [ 8284.116422] [<ffffffff81c62893>] dump_stack+0x4e/0x7a
kernel: [ 8284.116429] [<ffffffff810c4dea>] __might_sleep+0x15a/0x250
kernel: [ 8284.116436] [<ffffffff81c716e5>] down_read+0x25/0xa0
kernel: [ 8284.116445] [<ffffffff810a6dcf>] exit_signals+0x1f/0x120
kernel: [ 8284.116453] [<ffffffff81093d35>] do_exit+0xb5/0xc70
kernel: [ 8284.116462] [<ffffffff810f7bcd>] ? kmsg_dump+0x1ad/0x220
kernel: [ 8284.116465] [<ffffffff810f7a40>] ? kmsg_dump+0x20/0x220
kernel: [ 8284.116473] [<ffffffff81055515>] oops_end+0x85/0xc0
kernel: [ 8284.116480] [<ffffffff81055686>] die+0x46/0x70
kernel: [ 8284.116487] [<ffffffff8105250a>] do_general_protection+0xca/0x150
kernel: [ 8284.116494] [<ffffffff81c739d2>] general_protection+0x22/0x30
kernel: [ 8284.116501] [<ffffffff8166a0a9>] ? memcpy+0x29/0x110
kernel: [ 8284.116508] [<ffffffff81638275>] ? async_memcpy+0xc5/0x160
kernel: [ 8284.116516] [<ffffffff81a63de4>] handle_stripe_expansion+0x134/0x1e0
kernel: [ 8284.116522] [<ffffffff81a6496e>] handle_stripe+0xade/0x23e0
I haven't looked at why this might be yet.
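
For reference, the WARN_ON I swapped in for that BUG_ON is simply the line
below, shown out of context (the surrounding function is the one flagged in the
warning above):

	/* Relaxed from BUG_ON() so the test run keeps going; a page/orig_page
	 * mismatch is still reported rather than taking the machine down. */
	WARN_ON(sh->dev[i].page != sh->dev[i].orig_page);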
And re the other patch you sent, I meant to also say: please use
__test_and_clear_bit(). The non-atomic version is sufficient when the variable
is only used by one thread, and it is slightly more efficient.
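Something along these lines (the state word and bit below are only placeholders,
not names from your patch):

	static unsigned long pending_word;	/* placeholder state word */

	static void check_pending(void)
	{
		/* Only one thread ever updates pending_word here, so the
		 * non-atomic __test_and_clear_bit() is sufficient and avoids
		 * a locked read-modify-write on every call. */
		if (__test_and_clear_bit(0, &pending_word))
			handle_pending_work();	/* placeholder for the real work */
	}
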
NeilBrown
>
>
> Subject: raid5: add an option to avoid copy data from bio to stripe cache
>
> The stripe cache has two goals:
> 1. cache data, so that next time, if the data can be found in the stripe cache,
> disk access can be avoided.
> 2. stable data. Data is copied from the bio to the stripe cache and parity is
> calculated from it. Data written to disk comes from the stripe cache, so if the
> upper layer changes the bio data, the data written to disk isn't impacted.
>
> In my environment, I can guarantee 2 will not happen. And BDI_CAP_STABLE_WRITES
> can guarantee 2 too. For 1, it's not common either. The block plug mechanism will
> dispatch a bunch of sequential small requests together. And since I'm using
> SSDs, I'm using a small chunk size. It's a rare case that the stripe cache is
> really useful.
>
> So I'd like to avoid the copy from bio to stripe cache, which is very helpful
> for performance. In my 1M randwrite tests, avoiding the copy increases
> performance by more than 30%.
>
> Of course, this shouldn't be enabled by default. It has been reported before that
> enabling BDI_CAP_STABLE_WRITES can harm some workloads, so I added an option to
> control it.
>
> Signed-off-by: Shaohua Li <shli@fusionio.com>
> ---
> drivers/md/raid5.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++-------
> drivers/md/raid5.h | 4 +-
> 2 files changed, 94 insertions(+), 14 deletions(-)
>
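
To make the idea in the changelog above concrete, here is a rough sketch of what
skipping the copy amounts to in the drain step. Names and structure are only
approximate; this is not the code from the diff, which isn't reproduced here:

	/* Sketch only.  With skip_copy, the drain step borrows the bio's page
	 * for parity calculation and writeout instead of copying its data into
	 * the preallocated stripe-cache page; orig_page keeps the cache page
	 * so it can be restored once the write completes. */
	if (conf->skip_copy && whole_page_covered_by_bio) {
		dev->page = bio_page;		/* compute/write from the bio page */
		/* dev->orig_page still points at the stripe-cache page */
	} else {
		tx = async_memcpy(dev->page, bio_page, page_offset, bio_offset,
				  len, &submit);	/* normal copy path */
	}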