public inbox for linux-btrfs@vger.kernel.org
From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: kreijack@inwind.it, waxhead@dirtcellar.net,
	Duncan <1i5t5.duncan@cox.net>,
	linux-btrfs@vger.kernel.org
Subject: Re: RAID56 - 6 parity raid
Date: Wed, 2 May 2018 15:29:46 -0400	[thread overview]
Message-ID: <99e79113-39ac-e67e-ff22-bfcfcdac00bc@gmail.com> (raw)
In-Reply-To: <66dceb8e-7a43-10cc-2ec6-e477a55b4deb@inwind.it>

On 2018-05-02 13:25, Goffredo Baroncelli wrote:
> On 05/02/2018 06:55 PM, waxhead wrote:
>>>
>>> So again, which problem would be solved by having the parity checksummed? To the best of my knowledge, none. In any case the data is checksummed, so it is impossible to return corrupted data (modulo bugs :-) ).
>>>
>> I am not a BTRFS dev, but this should be quite easy to answer. Unless you checksum the parity, there is no way to verify that the data (parity) you use to reconstruct other data is correct.
> 
> In any case you could catch that the computed data is wrong, because the data is always checksummed. And in any case you must check the data against its checksum.
> 
> My point is that storing the checksum is a cost that you pay *every time*. Every time you update part of a stripe you need to update the parity, and then in turn the parity checksum. It is not a problem of space occupied, nor a computational problem. It is a problem of write amplification...
> 
> The only gain is avoiding an attempt to use the parity when
> a) you need it (i.e. when the data is missing and/or corrupted)
> and b) it is corrupted.
> But the likelihood of this case is very low. And you can catch it during the data checksum check (which has to be performed in any case!).
> 
> So on one side you have a *cost every time* (the write amplification); on the other side you have a gain (CPU time) *only in the case where* the parity is corrupted and you need it (e.g. scrub or corrupted data).
> 
> IMHO the costs are much higher than the gain, and the likelihood of the gain is much lower than the likelihood (=100%, i.e. always) of the cost.
You do realize that a write is already rewriting checksums elsewhere? 
It would be pretty trivial to make sure that the checksums for every 
part of a stripe end up in the same metadata block, at which point the 
only cost is computing the checksum (because when a checksum gets 
updated, the whole block it's in gets rewritten, period, because that's 
how CoW works).

Looking at this another way (all the math below uses SI units):

Assume you have a BTRFS raid5 volume consisting of 6 8TB disks (which 
gives you 40TB of usable space).  You're storing roughly 20TB of data on 
it, using a 16kB block size, and it sees about 1GB of writes a day, with 
no partial stripe writes.  For the sake of argument, you want to scrub 
it every week, because the data in question matters a lot to you.

With a decent CPU, let's say you can compute checksums at 1.5GB/s and 
parity at 1.25GB/s (the ratio here is about the average across the 
almost 50 systems I have quick access to check, including a number of 
server and workstation systems less than a year old, though the numbers 
themselves are artificially low to accentuate the point here).

At this rate, scrubbing by computing parity requires processing:

* Checksums for 20TB of data, at a rate of 1.5GB/s, which would take 
13333 seconds, or 222 minutes, or about 3.7 hours.
* Parity for 20TB of data, at a rate of 1.25GB/s, which would take 16000 
seconds, or 267 minutes, or roughly 4.4 hours.

So, over a week, you would be spending about 8.15 hours processing data 
solely for data integrity, or roughly 4.85% of your time.
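The arithmetic above can be sketched in a few lines (Python purely as a calculator; the data size and the 1.5GB/s / 1.25GB/s rates are the example's stated assumptions, not measurements):

```python
# Back-of-envelope numbers for scrubbing by recomputing parity,
# using the assumed rates above (SI units throughout).
DATA = 20e12                 # 20 TB of data on the array
CSUM_RATE = 1.5e9            # assumed bytes/s of checksum computation
PARITY_RATE = 1.25e9         # assumed bytes/s of parity computation

csum_seconds = DATA / CSUM_RATE       # ~13333 s, about 3.7 hours
parity_seconds = DATA / PARITY_RATE   # 16000 s, about 4.4 hours
total_hours = (csum_seconds + parity_seconds) / 3600
week_pct = 100 * (csum_seconds + parity_seconds) / (7 * 24 * 3600)
print(round(total_hours, 2), round(week_pct, 2))  # 8.15 4.85
```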

Now assume instead that you're doing checksummed parity:

* Scrubbing data is the same, 3.7 hours.
* Scrubbing parity turns into computing checksums for 4TB of parity, 
which at 1.5GB/s would take about 2667 seconds, or 44 minutes, or 
roughly 0.74 hours.
* Computing parity for the 7GB of data you write each week takes 5.6 
_SECONDS_.

So, over a week, you would spend about 4.45 hours processing data 
solely for data integrity, or roughly 2.65% of your time.
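The checksummed-parity side works out the same way (same assumed rates; the only change is that parity, 1/5 of the data on this 6-disk raid5, is scrubbed by verifying its checksum at the checksum rate rather than recomputed):

```python
# Scrub cost when parity carries its own checksum (SI units).
DATA = 20e12                  # 20 TB of data
PARITY = DATA / 5             # 4 TB of parity on a 6-disk raid5
CSUM_RATE = 1.5e9             # assumed bytes/s of checksum computation
PARITY_RATE = 1.25e9          # assumed bytes/s of parity computation
WEEKLY_WRITES = 7e9           # 7 GB of new full-stripe writes per week

data_scrub = DATA / CSUM_RATE             # ~13333 s, the same 3.7 hours
parity_scrub = PARITY / CSUM_RATE         # ~2667 s, roughly 0.74 hours
new_parity = WEEKLY_WRITES / PARITY_RATE  # 5.6 seconds of parity math
total_hours = (data_scrub + parity_scrub + new_parity) / 3600
week_pct = 100 * (data_scrub + parity_scrub + new_parity) / (7 * 24 * 3600)
print(round(total_hours, 2), round(week_pct, 2))  # 4.45 2.65
```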

So, in terms of just time spent, checksummed parity is nearly twice as 
fast (roughly 45% less time, to be more specific).

So, let's look at data usage:

1GB of data translates to 62500 16kB blocks of data, which on this 
6-disk raid5 (5 data blocks per stripe) equates to an additional 12500 
blocks for parity.  Adding parity checksums adds a 20% overhead to the 
checksums being written, but that actually doesn't translate to a huge 
increase in the number of _blocks_ of checksums written.  One 16k block 
can hold roughly 500 checksums, so it would take 125 blocks worth of 
checksums without parity checksums, and 150 with them.  Thus, without 
parity checksums, writing 1GB of data involves writing 75125 blocks, 
while doing the same with parity checksums involves writing 75150 
blocks, a net change of only 25 blocks, or **0.0333%**.
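The block counting can likewise be checked mechanically (again just arithmetic: one parity block per 5 data blocks on this 6-disk raid5, and the rough figure of ~500 checksums per 16k metadata block):

```python
# Blocks written for 1GB of full-stripe writes on the 6-disk raid5
# example (16kB blocks in SI units).
BLOCK = 16_000                       # 16 kB, SI
CSUMS_PER_BLOCK = 500                # rough capacity of one 16k metadata block

data_blocks = 10**9 // BLOCK         # 62500 data blocks per GB
parity_blocks = data_blocks // 5     # 12500: one parity block per 5 data blocks
csum_blocks = -(-data_blocks // CSUMS_PER_BLOCK)           # 125 (ceiling division)
parity_csum_blocks = -(-parity_blocks // CSUMS_PER_BLOCK)  # 25 (ceiling division)

without_pc = data_blocks + parity_blocks + csum_blocks     # 75125 blocks
with_pc = without_pc + parity_csum_blocks                  # 75150 blocks
extra_pct = 100 * parity_csum_blocks / without_pc
print(with_pc - without_pc, round(extra_pct, 4))  # 25 0.0333
```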

Note that the difference in the number of checksum blocks written is a 
simple linear function, directly proportional to the amount of data 
being written, provided that all rewrites only rewrite full stripes 
(which, for this purpose, is equivalent to just adding new data).  In 
other words, even if we were to increase the total amount of data that 
array was getting in a day, the net change from having parity 
checksumming would still stay within the range of 0.03-0.05%.

Making some of those writes partial re-writes skews the value upwards, 
but it can never be worse than 25% on a raid5 array (because you can't 
write less than a single block, so the pathological worst case involves 
writing one data block, which translates to a single checksum and 
parity write, and in turn to only a single extra block written for 
parity checksums).  Exactly how bad it can get is of course worse with 
higher levels of parity (a 33.333% increase for RAID6, 60% for raid 
with 3 parity blocks, etc.).

So, given the above, this is a pretty big net win in terms of overhead 
for single-parity RAID arrays, even in the pathological worst case: 25% 
higher write overhead (which happens once for each block), in exchange 
for roughly 45% lower post-write processing overhead for data integrity 
(which usually happens far more than once for each block).


Thread overview: 22+ messages
2018-05-01 21:57 RAID56 - 6 parity raid Gandalf Corvotempesta
2018-05-02  1:47 ` Duncan
2018-05-02 16:27   ` Goffredo Baroncelli
2018-05-02 16:55     ` waxhead
2018-05-02 17:19       ` Austin S. Hemmelgarn
2018-05-02 17:25       ` Goffredo Baroncelli
2018-05-02 18:17         ` waxhead
2018-05-02 18:50           ` Andrei Borzenkov
2018-05-02 21:20             ` waxhead
2018-05-02 21:54               ` Goffredo Baroncelli
2018-05-02 19:04           ` Goffredo Baroncelli
2018-05-02 19:29         ` Austin S. Hemmelgarn [this message]
2018-05-02 20:40           ` Goffredo Baroncelli
2018-05-02 23:32             ` Duncan
2018-05-03 11:26             ` Austin S. Hemmelgarn
2018-05-03 19:00               ` Goffredo Baroncelli
2018-05-03  8:11           ` Andrei Borzenkov
2018-05-03 11:28             ` Austin S. Hemmelgarn
2018-05-03 12:47 ` Alberto Bursi
2018-05-03 19:03   ` Goffredo Baroncelli
