From: Piergiorgio Sartor <piergiorgio.sartor@nexgo.de>
To: Wols Lists <antlists@youngman.org.uk>
Cc: Piergiorgio Sartor <piergiorgio.sartor@nexgo.de>,
Peter Grandi <pg@lxraid.list.sabi.co.uk>,
Linux RAID <linux-raid@vger.kernel.org>
Subject: Re: raid6check extremely slow ?
Date: Wed, 13 May 2020 20:23:40 +0200 [thread overview]
Message-ID: <20200513182340.GA10256@lazy.lzy> (raw)
In-Reply-To: <5EBC304E.7010402@youngman.org.uk>
On Wed, May 13, 2020 at 06:37:18PM +0100, Wols Lists wrote:
> On 13/05/20 17:18, Piergiorgio Sartor wrote:
> > On Tue, May 12, 2020 at 09:54:21PM +0100, antlists wrote:
> >> On 12/05/2020 17:09, Piergiorgio Sartor wrote:
> >>> About the check -> maybe lock -> re-check,
> >>> it is a possible workaround, but I find it
> >>> a bit extreme.
> >>
> >> This seems the best (most obvious?) solution to me.
> >>
> >> If the system is under light write pressure, and the disk is healthy, it
> >> will scan pretty quickly with almost no locking.
> >
> > I have some concerns about optimization
> > solutions which can result in worse
> > performance than the original status.
> >
> > You mention "write pressure", but there
> > is another case which will cause
> > read -> lock -> re-read...
> > Namely, when some chunk is really corrupted.
> >
> Yup. That's why I said "the disk is healthy" :-)
We need to consider all possibilities...
> > Now, I do not know, maybe there are other
> > things we overlook, or maybe not.
> >
> > I do not know either how likely it is
> > that some situation will occur which
> > reduces performance.
> >
> > I would prefer a solution which will *only*
> > improve, without any possible drawback.
>
> Wouldn't we all. But if the *normal* case shows an appreciable
> improvement, then I'm inclined to write off a "shouldn't happen" case as
> "tough luck, shit happens".
> >
> > Again, this does not mean this approach
> > is wrong; actually, it is to be considered.
> >
> > In the end, I would like also to understand
> > why the lock / unlock is so expensive.
>
> Agreed.
> >
> >> If the system is under heavy pressure, chances are there'll be a fair few
> >> stripes needing rechecking, but even at its worst it'll only be as bad as
> >> the current setup.
> >
> > It will be worse (or worst, I'm always
> > confused...).
> > The read and the check will double.
>
> Touche - my logic was off ...
>
> But a bit of grammar - bad = descriptive, worse = comparative, worst =
> absolute, so you were correct with worse.
Ah! Thank you.
That has always confused me. Usually I check
with some search engine, but sometimes I'm
too lazy... and then I forget.
BTW, somewhat related: please do not
refrain from correcting my English.
> > I'm not sure about the read, but the
> > check is currently expensive.
>
> But you're still going to need a very unlucky state of affairs for the
> optimised check to be worse. Okay, if the disk IS damaged, then the
> optimised check could easily be the worst, but if it's just write
> pressure, you're going to need every second stripe to be messed up by a
> collision. Rather unlikely imho.
Well, as Neil would say, patches are welcome! :-)
Really, I have too little time to make
changes to the code myself.
I can do some testing and, hopefully,
provide some support.
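For the record, the "lock only if you need to" scan being discussed could be sketched roughly as below. This is a toy model, not raid6check's actual code: the `Array` class, `parity_ok` (a fake checksum, not real RAID6 P/Q math) and the `threading.Lock` are illustrative stand-ins for the MD stripe lock and parity verification.

```python
# Toy sketch of the optimistic "check -> lock -> re-check" scan.
# All names here are hypothetical; real raid6check is C and talks
# to the MD layer, and real parity is Galois-field P/Q, not a sum.
import threading

class Array:
    def __init__(self, stripes):
        self.stripes = stripes          # list of (data, parity) tuples
        self.lock = threading.Lock()    # stand-in for the MD stripe lock
        self.locked_rechecks = 0        # how often the slow path ran

    def parity_ok(self, stripe):
        data, parity = stripe
        return sum(data) == parity      # fake parity check

    def scan(self):
        """Optimistic scan: no lock on the fast path; on a mismatch,
        take the lock, re-read and re-check to tell a racing write
        apart from genuine corruption."""
        bad = []
        for i, s in enumerate(self.stripes):
            if self.parity_ok(s):
                continue                # fast path: no lock taken
            with self.lock:             # slow path: mismatch seen
                self.locked_rechecks += 1
                s = self.stripes[i]     # re-read under the lock
                if not self.parity_ok(s):
                    bad.append(i)       # genuine mismatch, not a race
        return bad

arr = Array([([1, 2, 3], 6), ([4, 4], 8), ([5, 5], 11)])
print(arr.scan())            # -> [2]: only the corrupted stripe
print(arr.locked_rechecks)   # -> 1: lock taken once, not per stripe
```

This also makes the worst case visible: a stripe that fails the unlocked check is read and checked twice, which is the doubling mentioned above.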
bye,
pg
> >
> > bye,
> >
> > pg
>
> Cheers,
> Wol
> >
> >> And if the system is somewhere inbetween, you still stand a good chance of a
> >> fast scan.
> >>
> >> At the end of the day, the rule should always be "lock only if you need to"
> >> so looking for problems with an optimistic no-lock scan, then locking only
> >> if needed to check and fix the problem, just feels right.
> >>
> >> Cheers,
> >> Wol
> >
--
piergiorgio
Thread overview: 38+ messages
2020-05-10 12:07 raid6check extremely slow ? Wolfgang Denk
2020-05-10 13:26 ` Piergiorgio Sartor
2020-05-11 6:33 ` Wolfgang Denk
2020-05-10 22:16 ` Guoqing Jiang
2020-05-11 6:40 ` Wolfgang Denk
2020-05-11 8:58 ` Guoqing Jiang
2020-05-11 15:39 ` Piergiorgio Sartor
2020-05-12 7:37 ` Wolfgang Denk
2020-05-12 16:17 ` Piergiorgio Sartor
2020-05-13 6:13 ` Wolfgang Denk
2020-05-13 16:22 ` Piergiorgio Sartor
2020-05-11 16:14 ` Piergiorgio Sartor
2020-05-11 20:53 ` Giuseppe Bilotta
2020-05-11 21:12 ` Guoqing Jiang
2020-05-11 21:16 ` Guoqing Jiang
2020-05-12 1:52 ` Giuseppe Bilotta
2020-05-12 6:27 ` Adam Goryachev
2020-05-12 16:11 ` Piergiorgio Sartor
2020-05-12 16:05 ` Piergiorgio Sartor
2020-05-11 21:07 ` Guoqing Jiang
2020-05-11 22:44 ` Peter Grandi
2020-05-12 16:09 ` Piergiorgio Sartor
2020-05-12 20:54 ` antlists
2020-05-13 16:18 ` Piergiorgio Sartor
2020-05-13 17:37 ` Wols Lists
2020-05-13 18:23 ` Piergiorgio Sartor [this message]
2020-05-12 16:07 ` Piergiorgio Sartor
2020-05-12 18:16 ` Guoqing Jiang
2020-05-12 18:32 ` Piergiorgio Sartor
2020-05-13 6:18 ` Wolfgang Denk
2020-05-13 6:07 ` Wolfgang Denk
2020-05-15 10:34 ` Andrey Jr. Melnikov
2020-05-15 11:54 ` Wolfgang Denk
2020-05-15 12:58 ` Guoqing Jiang
2020-05-14 17:20 ` Roy Sigurd Karlsbakk
2020-05-14 18:20 ` Wolfgang Denk
2020-05-14 19:51 ` Roy Sigurd Karlsbakk
2020-05-15 8:08 ` Wolfgang Denk