From: Duncan <1i5t5.duncan@cox.net>
To: linux-btrfs@vger.kernel.org
Subject: Re: Btrfs/SSD
Date: Sat, 13 May 2017 09:39:39 +0000 (UTC)
Message-ID: <pan$3384$13ad2b1f$3febf76a$c09fcbd7@cox.net>
In-Reply-To: <20170512202756.16bd785f@jupiter.sol.kaishome.de>
Kai Krakow posted on Fri, 12 May 2017 20:27:56 +0200 as excerpted:
> In the end, the more continuous blocks of free space there are, the
> better the chance for proper wear leveling.
Speaking of which...
When I was doing my ssd research the first time around, the going
recommendation was to keep 20-33% of the total space on the ssd
entirely unallocated, letting the drive's FTL use that space as an
erase-block management pool.
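(FWIW, implementing that is just a matter of never partitioning the
tail of the device. A minimal sketch, assuming a hypothetical /dev/sdX
and a ~33% target on a 256 GB (238.5 GiB) device:

  # Create one ~160 GiB Linux partition and leave the remaining
  # ~78 GiB entirely unpartitioned, so the FTL can draw on it
  # for erase-block management.
  sgdisk --new=1:0:+160G --typecode=1:8300 /dev/sdX

One caveat: the FTL only knows that slack is free if it was never
written or has since been discarded, so on a previously used device a
one-time discard of the whole device, or a full secure-erase, before
partitioning is worth considering.)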
At the time, I added up all my "performance matters" data dirs and,
allowing for reasonable in-filesystem free space, decided I could fit
them in 64 GB if I had to, tho 80 GB would be a more comfortable fit.
Factoring in the entirely-unpartitioned slackspace recommendation
above (80 GB usable at 33% unallocated implies roughly 80/0.67 ≈ 120
GB total), that gave me a target of 120-128 GB, with a reasonable
range of 100-160 GB depending on actual availability.
As it turned out, due to pricing and availability I ended up spending
somewhat more and getting 256 GB (238.5 GiB). Of course that allowed
me much more flexibility than I had expected: I ended up with
basically everything but the media partition on the ssds, PLUS I
still left them only just over 50% partitioned (by the gdisk figures,
51%- partitioned, 49%+ free).
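(Those figures are easy to reproduce, since gdisk/sgdisk totals the
unpartitioned space when printing the table. A sketch, /dev/sdX again
being a placeholder:

  # The listing header includes a
  # "Total free space is ... sectors (...)" line.
  sgdisk --print /dev/sdX
)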
Given that, I've not enabled btrfs trim/discard (which incidentally
saved me from the bugs with it a few kernel cycles ago). I do have a
weekly fstrim systemd timer set up, but with that much slack I've not
had to be too concerned about the btrfs bugs (also now fixed, I
believe) from the period when fstrim on btrfs was known not to be
trimming everything it really should have been.
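(For anyone setting up the same thing: some distros ship a ready-made
fstrim.timer you can simply enable, but a hand-rolled weekly pair
looks roughly like this sketch; the fstrim path and unit names vary
by distro, so adjust to taste:

  # /etc/systemd/system/fstrim.service
  [Unit]
  Description=Discard unused blocks on all mounted filesystems

  [Service]
  Type=oneshot
  ExecStart=/usr/sbin/fstrim --all --verbose

  # /etc/systemd/system/fstrim.timer
  [Unit]
  Description=Weekly fstrim

  [Timer]
  OnCalendar=weekly
  Persistent=true

  [Install]
  WantedBy=timers.target

Then "systemctl enable --now fstrim.timer" activates it.)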
Anyway, that 20-33% left entirely unallocated/unpartitioned
recommendation still holds, right? Am I correct in asserting that if
one is following it, the FTL already has plenty of erase-blocks
available for management, so the discussion about filesystem-level
trim and free-space management becomes much less urgent, tho of
course it's still worth doing where it's convenient?
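(Related sanity check, for anyone who does depend on fstrim doing its
job on btrfs: fstrim reports how much it discarded, so comparing that
against what btrfs says is free gives a rough idea whether it's
trimming everything it should. A sketch, with / standing in for
whatever mountpoint you care about:

  fstrim --verbose /          # prints how much was discarded
  btrfs filesystem usage /    # compare against "Free (estimated)"
)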
And am I also correct in believing that, while it's not really worth
spending more to over-provision to the near-50% level I ended up at,
if things do work out that way (as they did for me, because the price
difference between 30% and 50% overprovisioning turned out to be
trivial), there's really not much need to worry about active
filesystem trim at all, because the FTL has effectively half the
device left to play erase-block musical chairs with as it decides it
needs to?
Of course the higher per-GiB cost of ssd compared to spinning rust
does mean that the above overprovisioning recommendation really hurts
most of the time, driving per-usable-GB costs even higher, and as I
recall that was definitely the case back then between 80 GB and 160
GB. It was basically an accident of timing that I was buying just as
the manufacturers flooded the market with newly cost-effective 256 GB
devices, which meant they were only trivially more expensive than the
128 or 160 GB models, AND, unlike the smaller devices, actually
/available/ in the 500-ish MB/sec performance range that (for
SATA-based SSDs) is capped more by SATA-600 bus speed than by the
chips themselves. (There were lower-cost 128 GB devices, but they
were lower-speed than I wanted, too.)
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman