From: Calvin Walton <calvin.walton@kepstin.ca>
To: Martin <m_btrfs@ml1.co.uk>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: SSD erase state and reducing SSD wear
Date: Wed, 23 May 2012 00:19:37 -0400
Message-ID: <1337746777.2479.9.camel@ayu>
In-Reply-To: <jph1hu$2tq$1@dough.gmane.org>
On Tue, 2012-05-22 at 22:47 +0100, Martin wrote:
> I've got two recent examples of SSDs. Their pristine state from the
> manufacturer shows:
> Device Model: OCZ-VERTEX3
> 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> Device Model: OCZ VERTEX PLUS
> 00000000 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
> What's a good way to test what state they get erased to from a TRIM
> operation?
This pristine state probably matches the result of a TRIM command on
the drive. In particular, a freshly erased flash block is in
a state where the bits are all 1, so the Vertex Plus drive is showing
you the flash contents directly. The Vertex 3 has substantially more
processing, and the 0s are effectively generated on the fly for unmapped
flash blocks (similar to how the missing portions of a sparse file
contain 0s).
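If you want to check this yourself, something along these lines should
show what a trimmed range reads back as. This is only an untested
sketch - the device name and sizes are placeholders, and it destroys
data in the tested range, so use a scratch drive:

  dev=/dev/sdX    # scratch SSD, NOT a mounted filesystem

  # 1. Write 1 MiB of a recognizable pattern (0xAA) at the start.
  tr '\0' '\252' < /dev/zero | \
      dd of=$dev bs=1M count=1 iflag=fullblock oflag=direct

  # 2. TRIM sectors 0-2047 (the same 1 MiB). hdparm insists on the
  #    safety flag before it will issue the command.
  hdparm --please-destroy-my-drive --trim-sector-ranges 0:2048 $dev

  # 3. Read the range back, bypassing the page cache, and see
  #    whether you get 0x00, 0xff, or the old pattern.
  dd if=$dev bs=1M count=1 iflag=direct | hexdump -C | head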
> Can btrfs detect the erase state and pad unused space in filesystem
> writes with the same value so as to reduce SSD wear?
On the Vertex 3, this wouldn't actually do what you'd hope: the
firmware in that drive compresses, deduplicates, and encrypts all the
data prior to writing it to flash - and as a result the data that hits
the flash looks nothing like what the filesystem wrote.
(For best performance, it might make sense to disable btrfs's built-in
compression on the Vertex 3 drive to allow the drive's compression to
kick in. Let us know if you benchmark it either way.)
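If you do benchmark it, a crude test along these lines would make the
drive's compression visible (again just a sketch - the device and
sizes are placeholders, and it overwrites the drive's contents). On a
SandForce-style controller, all-zero writes should come out noticeably
faster than incompressible random writes:

  dev=/dev/sdX    # scratch device - this overwrites its contents

  # Highly compressible data; dd reports throughput when it finishes.
  dd if=/dev/zero of=$dev bs=1M count=1024 oflag=direct

  # Incompressible data: pre-generate the random bytes so /dev/urandom
  # itself doesn't become the bottleneck.
  dd if=/dev/urandom of=/tmp/rand.bin bs=1M count=1024
  dd if=/tmp/rand.bin of=$dev bs=1M count=1024 oflag=direct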
The benefit of doing this on the Vertex Plus is probably fairly small,
since rewriting a block - even one that is only partially written - is
still likely to require a read-modify-write cycle with an erase step.
The granularity of the erase blocks (typically hundreds of KiB, versus
4 KiB filesystem blocks) is just too coarse for the savings to be very
meaningful.
--
Calvin Walton <calvin.walton@kepstin.ca>
Thread overview: 4+ messages
2012-05-22 21:47 SSD erase state and reducing SSD wear Martin
2012-05-23 4:19 ` Calvin Walton [this message]
2012-05-23 15:44 ` Martin
2012-05-23 19:50 ` Calvin Walton