From: Duncan <1i5t5.duncan@cox.net>
To: linux-btrfs@vger.kernel.org
Subject: Re: Question: raid1 behaviour on failure
Date: Thu, 28 Apr 2016 08:08:58 +0000 (UTC) [thread overview]
Message-ID: <pan$76628$87e41497$7920bbe4$5a54b486@cox.net> (raw)
In-Reply-To: CA+WRLO-L_jy651cr_amdqisQPrZGm7mOmniOF7q_t9ZSfY+EPw@mail.gmail.com
Gareth Pye posted on Thu, 28 Apr 2016 15:24:51 +1000 as excerpted:
> PDF doc info dates it at 23/1/2013, which is the best guess that can
> easily be found.
Well, "easily" is relative, but prompted by your observation I first
confirmed the date, then decided to see what google had to say about the
authors.
I only looked at the two University of Minnesota authors. David Lilja
has been a professor there since the 90s, with google turning up various
lectures, etc., at other universities. Peng Li, listed as a student on
the paper, was presumably a graduate student. His linkedin profile says
he has been at Intel from Aug 2015 to present (software engineer,
non-volatile memory device R&D), but was Sr. Engineer at Seagate Tech,
Minneapolis/St. Paul area, July 2013 to August 2015 (drive architecture
and performance modeling), and was a summer intern at Huawei in the San
Francisco area in the middle of 2012. There are several patents and
papers to his name.
More importantly for us, however, linkedin links to his personal page,
still at the University of Minnesota, where he graduated with a
doctorate; his PhD advisor was, no surprise, Prof. David J Lilja.
http://people.ece.umn.edu/~lipeng/
That page lists as one project:
Reliability of SATA Port Multiplier (2012).
So while the paper probably came out in January of 2013 as the pdf date
suggests, he was working on it in 2012.
BTW, his personal site was last updated in June of 2013, and thus
doesn't mention anything about his move to Intel in 2015. I'd guess he
hasn't touched it since getting the doctorate and the job at Seagate:
the page mentions the Seagate job, but his linkedin profile says that
job didn't start until July of that year, the month after the last
update to his university page.
Took me longer to write that up than to find it, so it wasn't hard, but
as I said, "easy" is relative, so YMMV. =:^)
Meanwhile, that was just a single sampling, as the paper itself points
out, so we don't know where it falls among other port multipliers, or
even if its behavior was characteristic of that brand and model.
What we do have, however, is that semi-official paper, along with other
observations here about the reliability, or more accurately the lack of
reliability, of the various USB2SATA bridge chips, etc. Even without the
port multiplier, real-world experience posted here suggests that while
single-device btrfs on USB via a USB2SATA bridge may be reasonable, such
a device isn't particularly reliable as part of a multi-device btrfs:
too often the bridges and the devices behind them drop out temporarily,
due to power or other reasons, and btrfs at this point simply doesn't
cope well with devices dropping out and reappearing, possibly as other
devices. With a single-device btrfs there isn't much to screw up: the
data either gets there or it doesn't, and the atomic-cow nature of btrfs
does at least normally allow recovery to a known past state, plus replay
of the fsync log between commits, if it doesn't. But multi-device can
quickly get out of hand, particularly if more than one device is playing
the disappear-and-reappear game at once.
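For anyone wondering whether a bridge or the device behind it has been
playing that game under a mounted btrfs, the per-device error counters
and the kernel log are the usual places to look. A quick sketch, with
the mountpoint /mnt purely hypothetical:

```shell
# Per-device error counters kept by btrfs; climbing write_io_errs or
# flush_io_errs usually mean a device that's been dropping out.
btrfs device stats /mnt

# Does btrfs currently see all the devices it expects for this fsid?
btrfs filesystem show /mnt

# Kernel messages from USB resets, link drops, and btrfs complaints.
dmesg | grep -iE 'usb|ata|btrfs'
```

None of that fixes anything, of course; it just tells you whether the
drop-out problem described above is actually happening on your layout.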
A reasonable conclusion, then, is that the given layout isn't
particularly reliable at more than one point, making multi-device
anything over it rather unwise. JBOD /as/ /JBOD/, creating individual
single-device filesystems on each device (or device partition), may be
somewhat more workable, but multi-device, whether at the btrfs level or
at the dm- or md-raid level underneath some other filesystem, isn't
likely to be very reliable at all.
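To make the distinction concrete, here's what the two options look like
at mkfs time (device names /dev/sdb and /dev/sdc are hypothetical):

```shell
# JBOD as JBOD: an independent single-device filesystem per disk, so a
# flaky bridge takes down only its own filesystem, not the whole pool.
mkfs.btrfs /dev/sdb
mkfs.btrfs /dev/sdc

# Versus the multi-device raid1 layout argued against above, where one
# disappearing device degrades (and can confuse) the whole filesystem:
# mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
```

With the per-disk variant you give up the automatic redundancy, but
each filesystem fails, and recovers, on its own.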
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman