From: Shaohua Li <shli@kernel.org>
To: Coly Li <colyli@suse.de>
Cc: linux-raid@vger.kernel.org, Shaohua Li <shli@fb.com>,
Hannes Reinecke <hare@suse.com>, Neil Brown <neilb@suse.de>,
Johannes Thumshirn <jthumshirn@suse.de>,
Guoqing Jiang <gqjiang@suse.com>
Subject: Re: [RFC PATCH 2/2] RAID1: avoid unnecessary spin locks in I/O barrier code
Date: Tue, 22 Nov 2016 13:58:57 -0800 [thread overview]
Message-ID: <20161122215857.g4l66hoawdroyo24@kernel.org> (raw)
In-Reply-To: <1479765241-15528-2-git-send-email-colyli@suse.de>
On Tue, Nov 22, 2016 at 05:54:01AM +0800, Coly Li wrote:
> When I run a parallel read performance test on a md raid1 device with
> two NVMe SSDs, I observe surprisingly bad throughput: with fio at 64KB
> block size, 40 sequential read I/O jobs and 128 iodepth, overall
> throughput is only 2.7GB/s, around 50% of the ideal performance number.
>
> The perf reports locking contention happens at allow_barrier() and
> wait_barrier() code,
> - 41.41% fio [kernel.kallsyms] [k] _raw_spin_lock_irqsave
> - _raw_spin_lock_irqsave
> + 89.92% allow_barrier
> + 9.34% __wake_up
> - 37.30% fio [kernel.kallsyms] [k] _raw_spin_lock_irq
> - _raw_spin_lock_irq
> - 100.00% wait_barrier
>
> The reason is, in these I/O barrier related functions,
> - raise_barrier()
> - lower_barrier()
> - wait_barrier()
> - allow_barrier()
> They always take conf->resync_lock first, even when there are only regular
> read I/Os and no resync I/O at all. This is a huge performance penalty.
>
> The solution is a lockless-like algorithm in the I/O barrier code, which
> holds conf->resync_lock only when it is really necessary.
>
> The original idea is from Hannes Reinecke, and Neil Brown provided
> comments to improve it. This patch is written on top of the new simpler
> raid1 I/O barrier code.
>
> In the new simpler raid1 I/O barrier implementation, there are two
> wait barrier functions,
> - wait_barrier()
> Which in turn calls _wait_barrier(); it is used for regular write I/O.
> If resync I/O is happening on the same barrier bucket index, or the
> whole array is frozen, the task waits until no barrier is raised on the
> same bucket index and the whole array is unfrozen.
> - wait_read_barrier()
> Since regular read I/O won't interfere with resync I/O (read_balance()
> makes sure only up-to-date data is read out), it is unnecessary for
> regular read I/Os to wait on the barrier; they only have to wait when
> the whole array is frozen.
> The operations on conf->nr_pending[idx], conf->nr_waiting[idx] and conf->
> barrier[idx] are very carefully designed in raise_barrier(),
> lower_barrier(), _wait_barrier() and wait_read_barrier(), in order to
> avoid unnecessary spin locks in these functions. Once conf->
> nr_pending[idx] is increased, a resync I/O with the same barrier bucket
> index has to wait in raise_barrier(). Then in _wait_barrier() or
> wait_read_barrier(), if no barrier is raised on the same bucket index and
> the array is not frozen, the regular I/O doesn't need to hold conf->
> resync_lock; it can just increase conf->nr_pending[idx] and return to its
> caller. For heavy parallel read I/Os, the lockless I/O barrier code
> gets rid of almost all spin lock cost.
>
> This patch significantly improves raid1 read performance. In my
> testing, on a raid1 device built from two NVMe SSDs, fio with 64KB
> block size, 40 sequential read I/O jobs and 128 iodepth sees overall
> throughput increase from 2.7GB/s to 4.6GB/s (+70%).
>
> Open question:
> - I am not comfortable with freeze_array() and unfreeze_array(): for
> write I/Os when devices fail, wait_barrier() may race with
> freeze_array(). I am still looking for a solution.
I've only done a rough review so far; this one needs more time to look at.
Since you make all the counts atomic, is it safe to check several atomic
counters at the same time? For example, in freeze_array() we check both
nr_queued and nr_pending, but they are updated without a lock, so the updates
could be observed in any order. Please add comments to explain.
Also you need to add smp_mb__after_atomic() and friends to maintain ordering.
An example is _wait_barrier(): 'atomic_inc(&conf->nr_pending[idx])' should
happen before reading 'barrier'.
Thanks,
Shaohua
Thread overview: 15+ messages
2016-11-21 21:54 [RFC PATCH 1/2] RAID1: a new I/O barrier implementation to remove resync window Coly Li
2016-11-21 21:54 ` [RFC PATCH 2/2] RAID1: avoid unnecessary spin locks in I/O barrier code Coly Li
2016-11-22 21:58 ` Shaohua Li [this message]
2016-11-22 21:35 ` [RFC PATCH 1/2] RAID1: a new I/O barrier implementation to remove resync window Shaohua Li
2016-11-23 9:05 ` Guoqing Jiang
2016-11-24 5:45 ` NeilBrown
2016-11-24 6:05 ` Guoqing Jiang
2016-11-28 6:59 ` Coly Li
2016-11-28 6:42 ` Coly Li
2016-11-29 19:29 ` Shaohua Li
2016-11-30 2:57 ` Coly Li
2016-11-24 7:34 ` Guoqing Jiang
2016-11-28 7:33 ` Coly Li
2016-11-30 6:37 ` Guoqing Jiang
2016-11-30 7:19 ` Coly Li