From: Piergiorgio Sartor <piergiorgio.sartor@nexgo.de>
To: Adam Goryachev <mailinglists@websitemanagers.com.au>
Cc: Giuseppe Bilotta <giuseppe.bilotta@gmail.com>,
Guoqing Jiang <guoqing.jiang@cloud.ionos.com>,
Piergiorgio Sartor <piergiorgio.sartor@nexgo.de>,
Wolfgang Denk <wd@denx.de>,
linux-raid@vger.kernel.org
Subject: Re: raid6check extremely slow ?
Date: Tue, 12 May 2020 18:11:51 +0200
Message-ID: <20200512161151.GD7261@lazy.lzy>
In-Reply-To: <dbc3f2ee-589e-9347-6918-a1f544721443@websitemanagers.com.au>
On Tue, May 12, 2020 at 04:27:59PM +1000, Adam Goryachev wrote:
>
> On 12/5/20 11:52, Giuseppe Bilotta wrote:
> > On Mon, May 11, 2020 at 11:16 PM Guoqing Jiang
> > <guoqing.jiang@cloud.ionos.com> wrote:
> > > On 5/11/20 11:12 PM, Guoqing Jiang wrote:
> > > > On 5/11/20 10:53 PM, Giuseppe Bilotta wrote:
> > > > > Would it be possible/effective to lock multiple stripes at once? Lock,
> > > > > say, 8 or 16 stripes, process them, unlock. I'm not familiar with the
> > > > > internals, but if locking is O(1) on the number of stripes (at least
> > > > > if they are consecutive), this would help reduce (potentially by a
> > > > > factor of 8 or 16) the costs of the locks/unlocks at the expense of
> > > > > longer locks and their influence on external I/O.
> > > > >
> > > > Hmm, maybe something like:
> > > >
> > > > check_stripes
> > > >
> > > > -> mddev_suspend
> > > >
> > > > while (whole_stripe_num--) {
> > > > check each stripe
> > > > }
> > > >
> > > > -> mddev_resume
> > > >
> > > >
> > > > Then just need to call suspend/resume once.
> > > But basically, the array can't process any new requests while checking is
> > > in progress.
> > Yeah, locking the entire device might be excessive (especially if it's
> > a big one). Using a granularity larger than 1 but smaller than the
> > whole device could be a compromise. Since the “no lock” approach seems
> > to be about an order of magnitude faster (at least in Piergiorgio's
> > benchmark), my guess was that something between 8 and 16 could bring
> > the speed up to be close to the “no lock” case without having dramatic
> > effects on I/O. Reading all 8/16 stripes before processing (assuming
> > sufficient memory) might even lead to better disk utilization during
> > the check.
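
(As a rough illustration of the batched idea: the sketch below checks
BATCH consecutive stripes per lock/unlock cycle, so the locking cost is
paid once per batch instead of once per stripe.  lock_stripes(),
unlock_stripes() and check_stripe() are hypothetical placeholder names,
not the actual raid6check routines.)

	#define BATCH 16	/* stripes checked per lock/unlock cycle */

	for (unsigned long long s = start; s < nr_stripes; s += BATCH) {
		unsigned long long n = nr_stripes - s < BATCH ?
				       nr_stripes - s : BATCH;

		if (lock_stripes(info, s, n))		/* suspend I/O on [s, s+n) */
			return -1;

		for (unsigned long long i = 0; i < n; i++)
			check_stripe(info, s + i);	/* read data + verify P/Q  */

		unlock_stripes(info, s, n);		/* let normal I/O resume   */
	}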
>
> I know very little about this, but could you perhaps lock 2 x 16 stripes,
> then after completing the first 16, release them and lock the 3rd set of
> 16, and continue processing the 2nd set of 16 while waiting for that
> lock?
For some reason I do not know, the unlock
is global.
If I recall correctly, this is the way
Neil mentioned as being "more" correct.
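
(If the lock is implemented through the md "suspend_lo"/"suspend_hi"
sysfs range, which is only my assumption here, then there is a single
suspended range per array, so the "unlock" can only collapse the whole
range, roughly like below; write_sysfs_attr() is a hypothetical helper.)

	/* Assumption: one suspended range per array.  Unlocking therefore
	 * resets the whole range instead of releasing individual stripes
	 * inside it. */
	write_sysfs_attr(mddev_dir, "suspend_lo", 0);
	write_sysfs_attr(mddev_dir, "suspend_hi", 0);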
> Would that allow you to do more processing and less waiting for
> lock/release?
I think the general concept of pipelining
is good; it would really improve the
performance of the whole thing.
If we could also multithread, I suspect
it could improve further.
We need to solve the unlock problem...
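
Just to illustrate what the pipelining could look like, a very rough
sketch below.  The lock_stripes(), read_stripes(), check_stripes() and
unlock_stripes() helpers are hypothetical, and a per-range
unlock_stripes() is exactly the missing piece; for simplicity the sketch
also assumes nr_stripes is a multiple of BATCH.

	#include <pthread.h>

	#define BATCH 16		/* stripes per in-flight batch */

	struct batch {
		struct mdinfo *info;
		unsigned long long first;	/* first stripe of the batch */
	};

	static void *prefetch(void *arg)
	{
		struct batch *b = arg;

		lock_stripes(b->info, b->first, BATCH);	/* suspend the range */
		read_stripes(b->info, b->first, BATCH);	/* read all members  */
		return NULL;
	}

	static int check_pipelined(struct mdinfo *info,
				   unsigned long long nr_stripes)
	{
		struct batch next = { info, 0 };
		pthread_t tid;

		prefetch(&next);	/* prime: lock + read the first batch */

		for (unsigned long long s = 0; s < nr_stripes; s += BATCH) {
			int more = s + BATCH < nr_stripes;

			if (more) {
				/* lock and read the following batch ... */
				next.first = s + BATCH;
				pthread_create(&tid, NULL, prefetch, &next);
			}

			check_stripes(info, s, BATCH);	/* ... while verifying P/Q here */
			unlock_stripes(info, s, BATCH);	/* needs a per-range unlock     */

			if (more)
				pthread_join(tid, NULL);
		}
		return 0;
	}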
bye,
>
> Regards,
> Adam
--
piergiorgio