* RAID56
@ 2018-06-19 15:26 Gandalf Corvotempesta
2018-06-20 0:06 ` RAID56 waxhead
2018-06-20 8:31 ` RAID56 Duncan
0 siblings, 2 replies; 7+ messages in thread
From: Gandalf Corvotempesta @ 2018-06-19 15:26 UTC (permalink / raw)
To: linux-btrfs
Another kernel release was made.
Any improvements in RAID56?
I didn't see any changes in that area; is something still being
worked on, or is it stuck waiting for something?
Based on the official BTRFS status page, RAID56 is the only "unstable"
item marked in red.
No interest from SUSE in fixing that?
I think it's the real missing part for a feature-complete filesystem.
Nowadays parity raid is mandatory; we can't rely on mirroring alone.
* Re: RAID56
2018-06-19 15:26 RAID56 Gandalf Corvotempesta
@ 2018-06-20 0:06 ` waxhead
2018-06-20 7:34 ` RAID56 Gandalf Corvotempesta
2018-06-20 8:31 ` RAID56 Duncan
1 sibling, 1 reply; 7+ messages in thread
From: waxhead @ 2018-06-20 0:06 UTC (permalink / raw)
To: Gandalf Corvotempesta, linux-btrfs
Gandalf Corvotempesta wrote:
> Another kernel release was made.
> Any improvements in RAID56?
>
> I didn't see any changes in that area; is something still being
> worked on, or is it stuck waiting for something?
>
> Based on the official BTRFS status page, RAID56 is the only "unstable"
> item marked in red.
> No interest from SUSE in fixing that?
>
> I think it's the real missing part for a feature-complete filesystem.
> Nowadays parity raid is mandatory; we can't rely on mirroring alone.
First of all: I am not a BTRFS developer, but I follow the mailing list
closely and I too have a particular interest in the "RAID"5/6 feature
which realistically is probably about 3-4 years (if not more) in the future.
From what I am able to understand, the pesky write hole is one of the
major obstacles to having BTRFS "RAID"5/6 work reliably. There were
patches to fix this a while ago, but whether those patches should be
classified as a workaround or actually as "the darn thing done right" is
perhaps up for discussion.
In general there seems to be a lot more momentum on the "RAID"5/6
feature now compared to earlier. There also seems to be a lot more focus
on fixing bugs and running tests. This is why I am guessing that 3-4
years ahead is an absolute minimum before "RAID"5/6 might be somewhat
reliable and usable.
There are a few other basics missing that may be acceptable for you as
long as you know about them. For example, as far as I know BTRFS still
does not use the "device-id" or "BTRFS internal number" for storage
devices to keep track of them.
This means that if you have a multi-device filesystem with, for example,
/dev/sda, /dev/sdb, /dev/sdc etc., and /dev/sdc disappears and shows up
again as /dev/sdx, then BTRFS would not recognize this and would happily
try to continue writing to /dev/sdc even if it does not exist.
...and perhaps even worse - I can imagine that if you swap device
ordering and a different device takes /dev/sdc's place, then BTRFS
*could* overwrite data on this device - possibly making a real mess of
things. I am not sure if this holds true, but if it does, it's for sure a
real nugget of basic functionality missing right there.
BTRFS also has, so far, no automatic "drop device" function; i.e. it will
not automatically kick out a storage device that is throwing lots of
errors and causing delays etc. There may be benefits to keeping this
design of course, but for some, dropping the device might be desirable.
And no hot-spare, or "hot-(reserved-)space" (which would be more accurate
in BTRFS terms), is implemented either, which is one good reason to keep
an eye on your storage pool.
What you *might* consider is to have your metadata in "RAID"1 or
"RAID"10 and your data in "RAID"5 or even "RAID"6, so that if you run
into problems you might in the worst case lose some data, but since
"RAID"1/10 is beginning to be rather mature, it is likely that your
filesystem will survive a disk failure.
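For illustration, here is a minimal sketch (in Python, just calling the
standard mkfs.btrfs tool) of creating such a split-profile filesystem.
The device names are placeholders for this example and the command
destroys whatever is on them; -m and -d are mkfs.btrfs' usual
metadata/data profile options:

    import subprocess

    # Placeholder devices; substitute your own. mkfs.btrfs wipes them.
    devices = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]

    # Metadata mirrored with raid1 (relatively mature), data on raid5
    # (parity, still subject to the write-hole caveats discussed here).
    subprocess.run(["mkfs.btrfs", "-m", "raid1", "-d", "raid5", *devices],
                   check=True)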
So if you are prepared to perhaps lose a file or two, but want to feel
confident that your filesystem will survive and will give you a report
about which file(s) are toast, then this may be acceptable for you, as
you can always restore from backups (because you do have backups, right?
If not, read 'any' of Duncan's posts - he explains better than most
people why you need and should have backups!)
Now keep in mind that this is just a humble user's analysis of the
situation, based on whatever I have picked up from the mailing list,
which may or may not be entirely accurate, so take it for what it is!
* Re: RAID56
2018-06-20 0:06 ` RAID56 waxhead
@ 2018-06-20 7:34 ` Gandalf Corvotempesta
2018-06-20 8:44 ` RAID56 Nikolay Borisov
0 siblings, 1 reply; 7+ messages in thread
From: Gandalf Corvotempesta @ 2018-06-20 7:34 UTC (permalink / raw)
To: waxhead; +Cc: linux-btrfs
On Wed, 20 Jun 2018 at 02:06, waxhead
<waxhead@dirtcellar.net> wrote:
> First of all: I am not a BTRFS developer, but I follow the mailing list
> closely and I too have a particular interest in the "RAID"5/6 feature
> which realistically is probably about 3-4 years (if not more) in the future.
Ok.
[cut]
> Now keep in mind that this is just a humble user's analysis of the
> situation, based on whatever I have picked up from the mailing list,
> which may or may not be entirely accurate, so take it for what it is!
I wasn't aware of all of these "restrictions".
If this is true, now I understand why Red Hat lost interest in BTRFS.
3-4 more years for a "working" RAID56 is absolutely too much; in that
case, ZFS support for RAID-Z expansion/reduction (actively being worked
on) will be released much earlier (probably a working test version later
this year and a stable version next year).
RAID-Z single-disk expansion/removal is probably the one feature missing
from ZFS for it to be considered a general-purpose FS.
Device removal was added some months ago and is now possible (so, if
you add a single disk to a mirrored vdev, you don't have to destroy the
whole pool to remove the accidentally-added disk).
In 3-4 years, maybe Oracle will release ZFS under a GPL-compatible
license (Solaris is dying; the latest release was 3 years ago, so there
is no need to keep a FS under an open-source license compatible only
with a dead OS).
Keep in mind that I'm not a ZFS fan (honestly, I don't like it), but
with these 2 features added and tons of restrictions in BTRFS,
there is no other choice.
* Re: RAID56
2018-06-19 15:26 RAID56 Gandalf Corvotempesta
2018-06-20 0:06 ` RAID56 waxhead
@ 2018-06-20 8:31 ` Duncan
2018-06-20 9:15 ` RAID56 Gandalf Corvotempesta
1 sibling, 1 reply; 7+ messages in thread
From: Duncan @ 2018-06-20 8:31 UTC (permalink / raw)
To: linux-btrfs
Gandalf Corvotempesta posted on Tue, 19 Jun 2018 17:26:59 +0200 as
excerpted:
> Another kernel release was made.
> Any improvements in RAID56?
<meta> Btrfs feature improvements come in "btrfs time". Think long term,
multiple releases, even multiple years (5 releases per year). </meta>
In fact, btrfs raid56 is a good example. Originally it was supposed to
be in kernel 3.6 (or even before, but 3.5 is when I really started
getting into btrfs enough to know), but for various reasons primarily
involving the complexity of the feature as well as btrfs itself and the
number of devs actually working on btrfs, even partial raid56 support
didn't get added until 3.9, and still-buggy full support for raid56 scrub
and device replace wasn't there until 3.19, with 4.3 fixing some bugs
while others remained hidden for many releases until they were finally
fixed in 4.12.
Since 4.12, btrfs raid56 mode, as such, has the known major bugs fixed
and is ready for "still cautious use"[1], but for rather technical
reasons discussed below, may not actually meet people's general
expectations for what btrfs raid56 should be in reliability terms.
And that's the long term 3+ years out bit that waxhead was talking about.
> I didn't see any changes in that area; is something still being worked
> on, or is it stuck waiting for something?
Actually, if you look on the wiki page, there were indeed raid56 changes
in 4.17.
https://btrfs.wiki.kernel.org/index.php/Changelog#v4.17_.28Jun_2018.29
<quote>
* raid56:
** make sure target is identical to source when raid56 rebuild fails
after dev-replace
** faster rebuild during scrub, batch by stripes and not block-by-block
** make more use of cached data when rebuilding from a missing device
</quote>
Tho that's actually the small stuff, ignoring the "elephant in the room"
of raid56 reliability expectations mentioned earlier, which will likely
take years to deal with.
As for those long term issues...
The "elephant in the room" problem is simply the parity-raid "write hole"
common to all parity-raid systems, unless they've taken specific measures
to work around the issue in one way or another.
In simple terms, the "write hole" problem is just that parity-raid makes
the assumption that an update to a stripe including its parity is atomic,
it happens all at once, so that it's impossible for the parity to be out
of sync with the data actually written on all the other stripe-component
devices. In "real life", that's an invalid assumption. Should the
system crash at the wrong time, in the middle of a stripe update, it's
quite possible that the parity will not match what's actually written to
the data devices in the stripe, because either the parity will have been
updated while at least one data device was still writing at the time of
the crash, or the data will be updated but the parity device won't have
finished writing yet at the time of the crash. Either way, the parity
doesn't match the data that's actually in the stripe, and should a device
be/go missing so the parity is actually needed to recover the missing
data, that missing data will be calculated incorrectly because the parity
doesn't match what the data actually was.
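To make that concrete, here is a toy model (plain Python, not btrfs code)
of a single raid5-style stripe with XOR parity, where the crash lands
between the data write and the parity write; the values and sizes are
invented purely for illustration:

    # Toy raid5 write hole: XOR parity over two data strips.
    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    d0, d1 = b"AAAA", b"BBBB"      # data strips on two devices
    parity = xor(d0, d1)           # parity strip on a third device

    # Partial-stripe update: d0 is rewritten in place, but the system
    # crashes before the matching parity update completes.
    d0 = b"CCCC"                   # new data reaches the disk
    # parity = xor(d0, d1)         # ...this write never happens

    # Later the device holding d1 dies, so d1 must be rebuilt from
    # d0 + parity -- and the stale parity reconstructs garbage.
    rebuilt_d1 = xor(d0, parity)
    print(rebuilt_d1 == b"BBBB")   # False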
Now as I already stated, that's a known problem common to parity-raid in
general, so it's not unique at all to btrfs.
The problem specific to btrfs, however, is that in general it's copy-on-
write, with checksumming to guard against invalid data, so in general, it
provides higher guarantees of data integrity than does a normal update-in-
place filesystem, and it'd be quite reasonable for someone to expect
those guarantees to extend to btrfs raid56 mode as well, but they don't.
They don't, because while btrfs in general is copy-on-write and thus
atomic update (in the event of a crash you get either the data as it was
before the write or the completely written data, not some unpredictable
mix of before and after), btrfs parity-raid stripes are *NOT* copy-on-
write, they're update-in-place, meaning the write-hole problem applies,
and in the event of a crash when the parity-raid was already degraded,
the integrity of the data or metadata being parity-raid written at the
time of the crash is not guaranteed, nor at present, with the current
raid56 implementation, /can/ it be guaranteed.
But as I said, the write hole problem is common to parity-raid in
general, so for people that understand the problem and are prepared to
deal with the reliability implications it implies[3], btrfs raid56 mode
should be reasonably ready for still cautious use, even tho it doesn't
carry the same data integrity and reliability guarantees that btrfs in
general does.
As for working around or avoiding the write-hole problem entirely,
there's (at least) four possible solutions, each with their own drawbacks.
The arguably "most proper" but also longest term solution would be to
rewrite btrfs raid56 mode so it does copy-on-write for partial-stripes in
parity-mode as well (full-stripe-width writes are already COW, I
believe). This involves an on-disk format change and creation of a new
stripe-metadata tree to track in-use stripes. This tree, as the various
other btrfs metadata trees, would be cascade-updated atomically, so at
any transaction commit, either all tracked changes since the last commit
would be complete and the new tree would be valid, or the last commit
tree would remain active and none of the pending changes would be
effective in the case of a crash and reboot with a new mount.
But that would be a major enough rewrite it would take years to write and
then test again to current raid56 stability levels.
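As a toy illustration of the copy-on-write idea (this is not btrfs' actual
on-disk format, just the principle): new stripes land in fresh locations,
and only a final atomic root update makes them visible, so a crash at any
point leaves either the old state or the new state, never a mix:

    # Toy copy-on-write commit for stripes.
    storage = {"s1": b"old stripe contents"}    # simulated disk
    committed_root = {"stripe": "s1"}           # last committed tree

    def cow_update(new_data: bytes) -> None:
        storage["s2"] = new_data             # 1. write new stripe elsewhere
        # a crash here leaves committed_root pointing at the valid old s1
        committed_root["stripe"] = "s2"      # 2. single atomic root update

    cow_update(b"new stripe contents")
    print(storage[committed_root["stripe"]])    # always old *or* new, whole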
A second possible solution would be to enforce a "whole-stripe-write-
only" rule. Partial stripes wouldn't be written, only full stripes
(which are already COWed), thus avoiding the read-modify-write cycle of a
partial stripe. If there wasn't enough changed data to write a full
stripe, the rest of it would be empty, wasting space. A periodic
rebalance would be needed to rewrite all these partially empty stripes to
full stripes, and presumably a new balance filter would be created to
rebalance /only/ partially empty stripes.
This would require less code and could be done sooner, but of course
would require testing to stability of the new code that was written, and
it has the significant negative of all that wasted space in the partially
empty stripe writes and the periodic rebalance required to make space
usage efficient again.
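A toy sketch of that whole-stripe-only policy (strip size and device count
are made-up numbers, not btrfs defaults): every write gets padded out to a
full stripe, trading wasted space for never doing read-modify-write:

    # Toy whole-stripe-only allocator.
    STRIP_SIZE = 64 * 1024              # bytes per strip (illustrative)
    DATA_STRIPS = 4                     # data devices per stripe (illustrative)
    FULL_STRIPE = STRIP_SIZE * DATA_STRIPS

    def pad_to_full_stripe(data: bytes) -> bytes:
        wasted = -len(data) % FULL_STRIPE
        print(f"writing {len(data) + wasted} bytes, {wasted} of them padding")
        return data + b"\0" * wasted    # a later balance would reclaim this

    pad_to_full_stripe(b"x" * 10_000)   # small write -> mostly padding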
A third possible solution would allow stripes of less than the full
possible width -- a small write could involve just two devices in raid5,
three in raid6, just one data strip and the one or two parity strips.
This one's likely the easiest so far to implement since btrfs will
already reduce stripe width in the mixed-device-size case when small
devices fill up, and similarly, deals with less-than-full-width stripes
when a new device is added, until a rebalance is done to rewrite existing
stripes to full width including the new device. So the code to deal with
mixed-width stripes is already there and tested, and the only thing to be
done for this one would be to change the allocator implementation to
allow routine writing of less than full width stripes (currently it
always writes a stripe as wide as possible), and to choose the stripe
width dynamically based on the amount of data to be written.
Of course these "short stripes" would waste space as well, since they'd
still require the full one (raid5) or two (raid6) parity strips even if
it was only one data strip written, and a periodic rebalance would be
necessary to rewrite to full stripe width and regain the wasted space
here too.
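A toy sketch of picking the stripe width from the write size (again with
made-up strip size and device count): small writes use fewer devices, but
always carry the full parity overhead until a rebalance widens them:

    # Toy variable-width stripe allocator.
    import math

    STRIP_SIZE = 64 * 1024      # illustrative strip size
    TOTAL_DEVICES = 6           # illustrative device count
    PARITY_STRIPS = 1           # raid5; would be 2 for raid6

    def stripe_width(write_bytes: int) -> int:
        data_strips = min(math.ceil(write_bytes / STRIP_SIZE),
                          TOTAL_DEVICES - PARITY_STRIPS)
        return data_strips + PARITY_STRIPS

    print(stripe_width(4096))       # 2 devices: 1 data strip + 1 parity
    print(stripe_width(1 << 20))    # 6 devices: full-width stripe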
Solution #4 is the one I believe we've already seen RFC patches for.
It's a pure workaround, not a fix, and involves a stripe-write log.
Partial-stripe-width writes would be first written to the log, then
rewritten to the destination stripe. In this way it'd be much like ext3's
data=journal mode, except that only partial stripe writes would need to
be logged (full stripe writes are already COW and thus atomic).
This would arguably be the easiest to implement since it'd only involve
writing the logging code; indeed, as I mentioned above, I believe
RFC-level patches have already been posted, and the failure mode for bugs
would at least in theory be simply the same situation we already have
now. And it wouldn't waste space or require rebalances to get it back
like the two middle solutions, tho the partial-stripe log would take some
space overhead.
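In outline, the logging scheme looks something like this toy sketch (pure
illustration of the write-twice pattern, not the actual patch set): a
partial-stripe write is persisted to the log first, then applied in place,
and after a crash any entries still in the log are simply replayed:

    # Toy partial-stripe write log.
    log = []                                # simulated persistent log
    stripes = {7: b"old stripe contents"}   # simulated in-place stripes

    def logged_partial_write(stripe_id: int, data: bytes) -> None:
        log.append((stripe_id, data))       # 1. write and flush the log entry
        stripes[stripe_id] = data           # 2. update the stripe in place
        log.pop()                           # 3. retire the log entry

    def replay_after_crash() -> None:
        # Anything still logged may cover a torn in-place update; rewriting
        # it makes the stripe (and thus its parity) consistent again.
        for stripe_id, data in log:
            stripes[stripe_id] = data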
But writing stuff twice is going to be slow, and the speed penalty would
be taken on top of the already known to be slow parity-raid partial-
stripe-width read-modify-write cycle.
But as mentioned, parity-raid *is* already known to be slow, and admins
with raid experience are already only going to choose it when top speed
isn't their top priority, and the write-twice logging penalty would only
apply to partial-stripe-writes, so it might actually be an acceptable
trade-off, particularly when it's the likely quickest solution to the
existing write-hole problem, and is very similar to the solution mdraid
already took for its parity-raid write-hole problem.
But, given the speed at which btrfs feature additions occur, even the
arguably fastest to implement and rfc-patches-posted logging choice is
likely to take a number of kernel cycles to mainline and test to
stability equivalent to the rest of the btrfs raid56 code. And that's if
it were agreed to be the correct solution, at least for the short term
pending a longer term fix of one of the other choices, a question that
I'm not sure has been settled yet.
> Based on the official BTRFS status page, RAID56 is the only "unstable"
> item marked in red.
> No interest from SUSE in fixing that?
As the above should make clear, it's _not_ a question as simple as
"interest"!
> I think it's the real missing part for a feature-complete filesystem.
> Nowadays parity raid is mandatory; we can't rely on mirroring alone.
"Nowdays"? "Mandatory"?
Parity-raid is certainly nice, but mandatory, especially when there's
already other parity solutions (both hardware and software) available
that btrfs can be run on top of, should a parity-raid solution be /that/
necessary? Of course btrfs isn't the only next-gen fs out there, either,
there's other solutions such as zfs available too, if btrfs doesn't have
the features required at the maturity required.
So I'd like to see the supporting argument to parity-raid being mandatory
for btrfs, first, before I'll take it as a given. Nice, sure.
Mandatory? Call me skeptical.
---
[1] "Still cautious" use: In addition to the raid56-specific reliability
issues described above, as well as to cover Waxhead's referral to my
usual backups advice:
Sysadmin's[2] first rule of data value and backups: The real value of
your data is not defined by any arbitrary claims, but rather by how many
backups you consider it worth having of that data. No backups simply
defines the data as of such trivial value that it's worth less than the
time/trouble/resources necessary to do and have at least one level of
backup.
With such a definition, data loss can never be a big deal, because even
in the event of data loss, what was defined as of most importance, the
time/trouble/resources necessary to have a backup (or at least one more
level of backup, in the event there were backups but they failed too),
was saved. So regardless of whether the data was recoverable or not, you
*ALWAYS* save what you defined as most important, either the data if you
had a backup to retrieve it from, or the time/trouble/resources necessary
to make that backup, if you didn't have it because saving that time/
trouble/resources was considered more important than making that backup.
Of course the sysadmin's second rule of backups is that it's not a
backup, merely a potential backup, until you've tested that you can
actually recover the data from it in similar conditions to those under
which you'd need to recover it. IOW, boot to the backup or to the
recovery environment, and be sure the backup's actually readable and can
be recovered from using only the resources available in the recovery
environment, then reboot back to the normal or recovered environment and
be sure that what you recovered from the recovery environment is actually
bootable or readable in the normal environment. Once that's done, THEN
it can be considered a real backup.
"Still cautious use" is simply ensuring that you're following the above
rules, as any good admin will be regardless, and that those backups are
actually available and recoverable in a timely manner should that be
necessary. IOW, an only backup "to the cloud" that's going to take a
week to download and recover to, isn't "still cautious use", if you can
only afford a few hours down time. Unfortunately, that's a real life
scenario I've seen people say they're in here more than once.
[2] Sysadmin: As used here, "sysadmin" simply refers to the person who
has the choice of btrfs, as compared to say ext4, in the first place,
that is, the literal admin of at least one system, regardless of whether
that's administering just their own single personal system, or thousands
of systems across dozens of locations in some large corporation or
government institution.
[3] Raid56 mode reliability implications: For raid56 data, this isn't
/that/ big of a deal, tho depending on what's in the rest of the stripe,
it could still affect files not otherwise written in some time. For
metadata, however, it's a huge deal, since an incorrectly reconstructed
metadata stripe could take out much or all of the filesystem, depending
on what metadata was actually in that stripe. This is where waxhead's
recommendation to use raid1/10 for metadata even if using raid56 for data
comes in.
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
* Re: RAID56
2018-06-20 7:34 ` RAID56 Gandalf Corvotempesta
@ 2018-06-20 8:44 ` Nikolay Borisov
0 siblings, 0 replies; 7+ messages in thread
From: Nikolay Borisov @ 2018-06-20 8:44 UTC (permalink / raw)
To: Gandalf Corvotempesta, waxhead; +Cc: linux-btrfs
On 20.06.2018 10:34, Gandalf Corvotempesta wrote:
> On Wed, 20 Jun 2018 at 02:06, waxhead
> <waxhead@dirtcellar.net> wrote:
>> First of all: I am not a BTRFS developer, but I follow the mailing list
>> closely and I too have a particular interest in the "RAID"5/6 feature
>> which realistically is probably about 3-4 years (if not more) in the future.
>
> Ok.
>
> [cut]
>
>> Now keep in mind that this is just a humble user's analysis of the
>> situation, based on whatever I have picked up from the mailing list,
>> which may or may not be entirely accurate, so take it for what it is!
>
> I wasn't aware of all of these "restrictions".
> If this is true, now I understand why Red Hat lost interest in BTRFS.
> 3-4 more years for a "working" RAID56 is absolutely too much; in that
> case, ZFS support for RAID-Z expansion/reduction (actively being worked
> on) will be released much earlier (probably a working test version
> later this year and a stable version next year).
>
> RAID-Z single-disk expansion/removal is probably the one feature missing
> from ZFS for it to be considered a general-purpose FS.
>
> Device removal was added some months ago and is now possible (so, if
> you add a single disk to a mirrored vdev, you don't have to destroy the
> whole pool to remove the accidentally-added disk).
>
> In 3-4 years, maybe Oracle will release ZFS under a GPL-compatible
> license (Solaris is dying; the latest release was 3 years ago, so there
> is no need to keep a FS under an open-source license compatible only
> with a dead OS).
>
> Keep in mind that I'm not a ZFS fan (honestly, I don't like it), but
> with these 2 features added and tons of restrictions in BTRFS,
> there is no other choice.
Of course btrfs is open source and new contributors are always welcome.
* Re: RAID56
2018-06-20 8:31 ` RAID56 Duncan
@ 2018-06-20 9:15 ` Gandalf Corvotempesta
2018-06-20 12:32 ` RAID56 Duncan
0 siblings, 1 reply; 7+ messages in thread
From: Gandalf Corvotempesta @ 2018-06-20 9:15 UTC (permalink / raw)
To: 1i5t5.duncan; +Cc: linux-btrfs
On Wed, 20 Jun 2018 at 10:34, Duncan <1i5t5.duncan@cox.net>
wrote:
> Parity-raid is certainly nice, but mandatory, especially when there's
> already other parity solutions (both hardware and software) available
> that btrfs can be run on top of, should a parity-raid solution be /that/
> necessary?
You can't be serious. hw raid has many more flaws than any sw raid.
Current CPUs are much more performant than any hw raid chipset, and
there is no longer a performance loss in using sw raid vs. hw raid.
The biggest difference is that you are not locked in to a single vendor.
When you have to move disks between servers, you can do so safely without
having to use the same hw raid controller (with the same firmware). Almost
all raid controllers only support one-way upgrades: if your raid was created
with an older model, you can upgrade to a newer one, but then it's impossible
to move it back. If you have issues with the new controller, you can't use
the previous one.
Almost no server vendor supports old-gen controllers on new-gen servers
(at least Dell doesn't), so you are forced to upgrade the raid controller
when you have to upgrade the whole server or move disks between servers.
I could go on for hours; no, you can't compare any modern software raid
to any hw raid.
* Re: RAID56
2018-06-20 9:15 ` RAID56 Gandalf Corvotempesta
@ 2018-06-20 12:32 ` Duncan
0 siblings, 0 replies; 7+ messages in thread
From: Duncan @ 2018-06-20 12:32 UTC (permalink / raw)
To: linux-btrfs
Gandalf Corvotempesta posted on Wed, 20 Jun 2018 11:15:03 +0200 as
excerpted:
> On Wed, 20 Jun 2018 at 10:34, Duncan <1i5t5.duncan@cox.net>
> wrote:
>> Parity-raid is certainly nice, but mandatory, especially when there's
>> already other parity solutions (both hardware and software) available
>> that btrfs can be run on top of, should a parity-raid solution be
>> /that/ necessary?
>
> You can't be serious. hw raid has many more flaws than any sw raid.
I didn't say /good/ solutions, I said /other/ solutions.
FWIW, I'd go for mdraid at the lower level, were I to choose, here.
But for a 4-12-ish device solution, I'd probably go btrfs raid1 on a pair
of mdraid-0s. That gets you btrfs raid1 data integrity and recovery from
its other mirror, while also being faster than the still not optimized
btrfs raid10. Beyond about a dozen devices, six per "side" of the btrfs
raid1, the risk of multi-device breakdown before recovery starts to get
too high for comfort, but six 8 TB devices in raid0 gives you up to 48 TB
to work with, and more than that arguably should be broken down into
smaller blocks to work with in any case, because otherwise you're simply
dealing with so much data it'll take you unreasonably long to do much of
anything non-incremental with it, from any sort of fscks or btrfs
maintenance, to trying to copy or move the data anywhere (including for
backup/restore purposes), to ... whatever.
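For concreteness, a minimal sketch of that layering, with six placeholder
disks (three per mdraid-0 "side"); the device names are made up for the
example and the commands wipe the devices:

    import subprocess

    # Placeholder disks, three per raid0 "side"; substitute your own.
    side_a = ["/dev/sda", "/dev/sdb", "/dev/sdc"]
    side_b = ["/dev/sdd", "/dev/sde", "/dev/sdf"]

    def run(cmd):
        print(" ".join(cmd))
        subprocess.run(cmd, check=True)

    # One mdraid-0 array per side.
    run(["mdadm", "--create", "/dev/md0", "--level=0",
         f"--raid-devices={len(side_a)}", *side_a])
    run(["mdadm", "--create", "/dev/md1", "--level=0",
         f"--raid-devices={len(side_b)}", *side_b])

    # btrfs raid1 across the two md devices: btrfs keeps checksumming and
    # can repair from the other mirror, mdraid-0 provides the striping.
    run(["mkfs.btrfs", "-d", "raid1", "-m", "raid1", "/dev/md0", "/dev/md1"])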
Actually, I'd argue that point is reached well before 48 TB, but the
point remains, at some point it's just too much data to do much of
anything with, too much to risk losing all at once, too much to backup
and restore all at once as it just takes too much time to do it, just too
much... And that point's well within ordinary raid sizes with a dozen
devices or less, mirrored, these days.
Which is one of the reasons I'm so skeptical about parity-raid being
mandatory "nowadays". Maybe it was in the past, when disks were (say)
half a TB or less and mirroring a few TB of data was resource-
prohibitive, but now?
Of course we've got a guy here who works with CERN and deals with their
annual 50ish petabytes of data (49 in 2016, see wikipedia's CERN
article), but that's simply problems on a different scale.
Even so, I'd say it needs to be broken up into manageable chunks, and 50
PB is "only" a bit over 1000 48-TB filesystems' worth. OK, say 2000, so
you're
not filling them all absolutely full.
Meanwhile, I'm actually an N-way-mirroring proponent, here, as opposed to
a parity-raid proponent. And at that sort of scale, you /really/ don't
want to have to restore from backups, so 3-way or even 4-5 way mirroring
makes a lot of sense. Hmm... 2.5 dozen for 5-way-mirroring, 2000 times,
2.5*12*2000=... 60K devices! That's a lot of hard drives! And a lot of
power to spin them. But I guess it's a rounding error compared to what
CERN uses for the LHC.
FWIW, N-way-mirroring has been on the btrfs roadmap, since at least
kernel 3.6, for "after raid56". I've been waiting awhile too; no sign of
it yet so I guess I'll be waiting awhile longer. So as they say,
"welcome to the club!" I'm 51 now. Maybe I'll see it before I die.
Imagine, I'm in my 80s in the retirement home and get the news btrfs
finally has N-way-mirroring in mainline. I'll be jumping up and down and
cause a ruckus when I break my hip! Well, hoping it won't be /that/
long, but... =;^]
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
end of thread
Thread overview: 7+ messages
2018-06-19 15:26 RAID56 Gandalf Corvotempesta
2018-06-20 0:06 ` RAID56 waxhead
2018-06-20 7:34 ` RAID56 Gandalf Corvotempesta
2018-06-20 8:44 ` RAID56 Nikolay Borisov
2018-06-20 8:31 ` RAID56 Duncan
2018-06-20 9:15 ` RAID56 Gandalf Corvotempesta
2018-06-20 12:32 ` RAID56 Duncan