linux-fsdevel.vger.kernel.org archive mirror
From: "J. Bruce Fields" <bfields@fieldses.org>
To: Theodore Ts'o <tytso@mit.edu>, Chinmay V S <cvs268@gmail.com>,
	Stefan Priebe - Profihost AG <s.priebe@profihost.ag>,
	Christoph Hellwig <hch@infradead.org>,
	linux-fsdevel@vger.kernel.org, Al Viro <viro@zeniv.linux.org.uk>,
	LKML <linux-kernel@vger.kernel.org>,
	matthew@wil.cx
Subject: Re: Why is O_DSYNC on linux so slow / what's wrong with my SSD?
Date: Wed, 20 Nov 2013 10:55:07 -0500	[thread overview]
Message-ID: <20131120155507.GA5380@fieldses.org> (raw)
In-Reply-To: <20131120153703.GA23160@thunk.org>

On Wed, Nov 20, 2013 at 10:37:03AM -0500, Theodore Ts'o wrote:
> On Wed, Nov 20, 2013 at 08:52:36PM +0530, Chinmay V S wrote:
> > 
> > If you have confirmed the performance numbers, then it indicates that
> > the Intel 530 controller is more advanced and makes better use of the
> > internal disk-cache to achieve better performance (as compared to the
> > Intel 520). Thus forcing CMD_FLUSH on each IOP (negating the benefits
> > of the disk write-cache and not allowing any advanced disk controller
> > optimisations) has a more pronounced effect of degrading the
> > performance on Intel 530 SSDs. (Someone with some actual info on Intel
> > SSDs kindly confirm this.)
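
The behaviour described above is easy to reproduce with a minimal O_DSYNC
write loop. The sketch below is illustrative only (file name, block size and
iteration count are arbitrary), but with a volatile write cache in the path
each write(2) has to wait for the flush, so the per-write latency shows the
CMD_FLUSH cost directly:

/*
 * Illustrative sketch only: time a loop of O_DSYNC writes.
 * File name, block size and iteration count are arbitrary.
 * With O_DSYNC each write(2) must reach stable storage before it
 * returns, which on a SATA SSD with a volatile write cache means a
 * CMD_FLUSH (or FUA write) for every I/O.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	struct timespec t0, t1;
	double secs;
	int fd, i, iters = 1000;

	memset(buf, 0xab, sizeof(buf));

	fd = open("odsync-test.bin", O_CREAT | O_WRONLY | O_DSYNC, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < iters; i++) {
		/* Rewrite the same block; the flush dominates anyway. */
		if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
			perror("pwrite");
			return 1;
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%d O_DSYNC writes of %zu bytes: %.1f us/write\n",
	       iters, sizeof(buf), secs * 1e6 / iters);
	close(fd);
	return 0;
}
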
> 
> You might also want to do some power fail testing to make sure that
> the SSD is actually flushing all of its internal Flash Translation
> Layer (FTL) metadata to stable storage on every CMD_FLUSH command.

Some SSDs also claim the ability to flush their cache on power loss:

	http://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-series-power-loss-data-protection-brief.html

Which should in theory let them respond immediately to flush requests,
right?  Except they only seem to advertise it as a safety (rather than a
performance) feature, so I probably misunderstand something.

And the 520 doesn't claim this feature (look for "enhanced power loss
protection" at http://ark.intel.com/products/66248), so that wouldn't
explain these results anyway.

--b.

> 
> There are lots of flash media that don't do this, with the result that
> I get lots of users whining at me when their file system stored on an
> SD card has massive corruption after a power fail event.
> 
> Historically, Intel has been really good about avoiding this, but
> since they've moved to using 3rd party flash controllers, I now advise
> everyone who plans to use any flash storage, regardless of the
> manufacturer, to do their own explicit power fail testing (hitting the
> reset button is not good enough, you need to kick the power plug out
> of the wall, or better yet, use a network-controlled power switch
> so you can repeat the power fail test dozens or hundreds of times for
> your qualification run) before using flash storage in a mission
> critical situation where you care about data integrity after a power
> fail event.
> 
> IOW, make sure that the SSD isn't faster because it's playing fast and
> loose with the FTL metadata....
> 
> Cheers,
> 
> 						- Ted
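
Ted's test amounts to a writer that only reports a sequence number once the
drive has acknowledged the corresponding flush, plus a checker that runs
after the power cut and confirms that every reported record is intact. A
rough, purely illustrative sketch of the writer half (file name and record
layout are made up; diskchecker.pl-style tools do the same job, and a real
run would report the acknowledged numbers to a second machine so they
survive the power cut):

/*
 * Rough sketch of a power-fail writer (hypothetical; file name and
 * record layout are invented for illustration).  It appends 512-byte
 * records carrying a sequence number and fdatasync()s each one; a
 * sequence number is printed only after the drive has acknowledged
 * the flush.  After the plug is pulled and the box rebooted, a
 * checker only needs to verify that every record up to the last
 * reported number is present and intact; a missing or torn record
 * means the drive acknowledged a flush it had not completed.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

struct rec {
	uint64_t seq;
	char pad[504];	/* pad each record to 512 bytes */
};

int main(void)
{
	struct rec r;
	uint64_t seq = 0;
	int fd = open("powerfail.dat", O_CREAT | O_WRONLY | O_APPEND, 0600);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(&r, 0, sizeof(r));

	/* Run until the power is cut (or an I/O error turns up). */
	for (;;) {
		r.seq = ++seq;
		if (write(fd, &r, sizeof(r)) != (ssize_t)sizeof(r) ||
		    fdatasync(fd) != 0)
			break;
		printf("acked %llu\n", (unsigned long long)seq);
		fflush(stdout);
	}
	return 0;
}
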


Thread overview: 29+ messages
2013-11-20 12:12 Why is O_DSYNC on linux so slow / what's wrong with my SSD? Stefan Priebe - Profihost AG
2013-11-20 12:54 ` Christoph Hellwig
2013-11-20 13:34   ` Chinmay V S
2013-11-20 13:38     ` Christoph Hellwig
2013-11-20 14:12     ` Stefan Priebe - Profihost AG
2013-11-20 15:22       ` Chinmay V S
2013-11-20 15:37         ` Theodore Ts'o
2013-11-20 15:55           ` J. Bruce Fields [this message]
2013-11-20 17:11             ` Chinmay V S
2013-11-20 17:58               ` J. Bruce Fields
2013-11-20 18:43                 ` Chinmay V S
2013-11-21 10:11                   ` Christoph Hellwig
2013-11-22 20:01                     ` Stefan Priebe
2013-11-22 20:37                       ` Ric Wheeler
2013-11-22 21:05                         ` Stefan Priebe
2013-11-23 18:27                         ` Stefan Priebe
2013-11-23 19:35                           ` Ric Wheeler
2013-11-23 19:48                             ` Stefan Priebe
2013-11-25  7:37                             ` Stefan Priebe
2020-01-08  6:58                             ` slow sync performance on LSI / Broadcom MegaRaid performance with battery cache Stefan Priebe - Profihost AG
2013-11-22 19:57             ` Why is O_DSYNC on linux so slow / what's wrong with my SSD? Stefan Priebe
2013-11-24  0:10               ` One Thousand Gnomes
2013-11-20 16:02           ` Howard Chu
2013-11-23 20:36             ` Pavel Machek
2013-11-23 23:01               ` Ric Wheeler
2013-11-24  0:22                 ` Pavel Machek
2013-11-24  1:03                   ` One Thousand Gnomes
2013-11-24  2:43                   ` Ric Wheeler
2013-11-22 19:55         ` Stefan Priebe
