From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: linux-btrfs@vger.kernel.org
Subject: Re: Btrfs/SSD
Date: Mon, 15 May 2017 08:03:48 -0400
Message-ID: <beb7f0b0-b84a-dfa1-12c2-256fe967af1d@gmail.com>
In-Reply-To: <20170512203644.26e068e5@jupiter.sol.kaishome.de>
On 2017-05-12 14:36, Kai Krakow wrote:
> Am Fri, 12 May 2017 15:02:20 +0200
> schrieb Imran Geriskovan <imran.geriskovan@gmail.com>:
>
>> On 5/12/17, Duncan <1i5t5.duncan@cox.net> wrote:
>>> FWIW, I'm in the market for SSDs ATM, and remembered this from a
>>> couple weeks ago so went back to find it. Thanks. =:^)
>>>
>>> (I'm currently still on quarter-TB generation ssds, plus spinning
>>> rust for the larger media partition and backups, and want to be rid
>>> of the spinning rust, so am looking at half-TB to TB, which seems
>>> to be the pricing sweet spot these days anyway.)
>>
>> Since you are taking ssds mainstream based on your experience,
>> I guess your perception of data retention/reliability is better than
>> that of spinning rust. Right? Can you elaborate?
>>
>> Or another criterion might be the physical constraints of spinning
>> rust in notebooks, which dictate that you should handle the device
>> with care while it is running.
>>
>> What was your primary motivation other than performance?
>
> Personally, I don't really trust SSDs so much. They are much more
> robust when it comes to physical damage because there are no moving
> parts. That's absolutely not my concern. Regarding this, I trust SSDs
> more than HDDs.
>
> My concern is with the failure scenarios of some SSDs, which die
> unexpectedly and horribly. I found some reports of older Samsung SSDs
> which failed suddenly and unexpectedly, in a way that the drive
> completely died: no more data access, everything gone. HDDs start with
> bad sectors, and there's a good chance I can recover most of the data
> except for a few sectors.
Older is the key word here. Some early SSDs did indeed behave like
that, but most modern ones generally show signs that they are going to
fail in the near future. There's also the fact that traditional hard
drives _do_ fail like that sometimes, even without rough treatment.
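If you want to look for those signs yourself, SMART data is the usual
place. A rough sketch using smartmontools (attribute names and numbers
vary by vendor and model, and /dev/sdX is a placeholder, so treat this
as an example rather than a recipe):

    smartctl -A /dev/sdX    # /dev/sdX is a placeholder for your drive
    # Watch attributes like Reallocated_Sector_Ct, Wear_Leveling_Count,
    # or Media_Wearout_Indicator: a steadily climbing raw reallocation
    # count, or a normalized value dropping toward its threshold, is
    # usually the warning that the drive is on its way out.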
>
> When SSD blocks die, they are probably huge compared to a sector
> (usually 256 KiB to 4 MiB, since that's the typical erase block size).
> If this happens, the firmware may decide to either allow read-only
> access or deny access completely. There's another situation where
> dying storage chips may completely mess up the firmware, and then
> there's no longer any access to the data.
I've yet to see an SSD that blocks user access to an erase block.
Almost every one I've seen will instead rewrite the block (possibly
with the corrupted data intact, that is, without mangling it further)
to one of the reserve blocks, and then just update its internal mapping
so that the old block doesn't get used and the new one points to the
right place. Some of the really good SSDs even use erasure coding in
the FTL for data verification instead of CRCs, so they can actually
reconstruct the missing bits when they do this.
Traditional hard drives usually do this too these days (they've been
over-provisioned with spare sectors since before SSDs existed), which
is part of why older disks tend to be noisier and slower (the reserved
space is usually at the far inside or outside of the platter, so using
sectors from there as replacements leads to long seeks).
>
> That's why I don't trust any of my data to them. But I still want the
> benefit of their speed. So I use SSDs mostly as frontend caches to
> HDDs. This gives me big storage with fast access. Indeed, I'm using
> bcache successfully for this. A warm cache is almost as fast as a
> native SSD (at least it feels almost that fast; it will be slower if
> you throw benchmarks at it).
That's to be expected, though: most benchmarks don't replicate actual
usage patterns for client systems, and using SSDs for caching with
bcache or dm-cache will usually cost you performance for most server
workloads other than a file server.
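For reference, in case anyone wants to try that kind of layout, a
minimal bcache setup looks roughly like the following (device names and
the cache-set UUID are placeholders, and this assumes bcache-tools is
installed; check the bcache documentation before running any of it):

    make-bcache -C /dev/sdX     # SSD: format as a cache set
    make-bcache -B /dev/sdY     # HDD: format as the backing device
    # udev usually registers both devices automatically; if not:
    echo /dev/sdX > /sys/fs/bcache/register
    echo /dev/sdY > /sys/fs/bcache/register
    # attach the backing device to the cache set
    # (the UUID shows up as a directory under /sys/fs/bcache/)
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach
    # then create the filesystem on /dev/bcache0 as usual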
It's also worth noting that, on average, COW filesystems like BTRFS (or
log-structured filesystems) will not benefit as much from SSD caching
as traditional filesystems do, unless the caching is built into the
filesystem itself, since they don't do in-place rewrites (so any new
write by definition has to evict other data from the cache).
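Related to that: if you do put BTRFS on top of bcache, caching writes
buys you comparatively little for exactly that reason. bcache exposes
its cache mode through sysfs, so you can check it and, for example,
switch to writearound to keep new writes from churning the cache
(assuming the bcache device shows up as bcache0):

    cat /sys/block/bcache0/bcache/cache_mode
    # the current mode is shown in brackets, e.g.:
    # [writethrough] writeback writearound none
    echo writearound > /sys/block/bcache0/bcache/cache_mode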