From: Martin Steigerwald <martin@lichtvoll.de>
To: xfs@oss.sgi.com
Cc: "Georg Schönberger" <g.schoenberger@xortex.com>
Subject: Re: XFS and nobarrier with SSDs
Date: Sat, 12 Dec 2015 13:26:06 +0100 [thread overview]
Message-ID: <3496214.YTSKClH6pV@merkaba> (raw)
In-Reply-To: <E127700EFE58FD45BD6298EAC813FA42020D8173@TIGER2010.xortex.local>
On Saturday, 12 December 2015 at 10:24:25 CET, Georg Schönberger wrote:
> Hi folks!
Hi Georg.
> We are using a lot of SSDs in our Ceph clusters with XFS. Our SSDs have
> Power Loss Protection via capacitors, so is it safe in all cases to run XFS
> with nobarrier on them? Or is there indeed a need for a specific I/O
> scheduler?
I do think using nobarrier is safe with those SSDs, as long as no other
caching happens on the hardware side, for example inside the controller
that talks to the SSDs.
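If you do mount with nobarrier, it is worth double-checking that the option
actually took effect. Here is a small sketch; the device name and mount
point are only examples, and the helper just greps a /proc/mounts-style
line for the option:

```shell
# has_nobarrier: succeed if a /proc/mounts-style entry carries "nobarrier".
has_nobarrier() {
    echo "$1" | grep -qw nobarrier
}

# On a live system (device and mount point are only examples):
#   mount -t xfs -o nobarrier /dev/sdb1 /srv/osd0
#   has_nobarrier "$(grep ' /srv/osd0 ' /proc/mounts)" && echo "nobarrier active"
has_nobarrier "/dev/sdb1 /srv/osd0 xfs rw,nobarrier,noquota 0 0" && echo "nobarrier active"
```

The word match (-w) avoids a false positive on a hypothetical option that
merely contains the string.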
I always thought barrier/nobarrier acts independently of the I/O scheduler
thing, but I can understand the thought from the bug report you linked to
below. As for I/O schedulers, with recent kernels and block multiqueue I see
it being set to "none".
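For reference, sysfs shows the active scheduler in brackets, so you can
check what a device is using. A tiny helper extracts the bracketed entry
(sdb is just an example device name):

```shell
# active_sched: print the bracketed (active) scheduler from a sysfs string.
active_sched() {
    echo "$1" | sed 's/.*\[\(.*\)\].*/\1/'
}

# On a real system (sdb is an example device):
#   active_sched "$(cat /sys/block/sdb/queue/scheduler)"
active_sched "noop deadline [cfq]"    # prints: cfq
active_sched "[none] mq-deadline"     # prints: none  (typical for blk-mq)
```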
> I have found a recent discussion on the Ceph mailing list, anyone from XFS
> that can help us?
>
> *http://www.spinics.net/lists/ceph-users/msg22053.html
Also see:
http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F
> *https://bugzilla.redhat.com/show_bug.cgi?id=1104380
Interesting. Never thought of that one.
So would it be safe to interrupt the flow of data towards the SSD at any
point in time with reordering I/O schedulers in place? And how about
blk-mq, which has multiple software queues?
I like to think that they are still independent of the barrier thing and the
last bug comment by Eric, where he quoted from Jeff, supports this:
> Eric Sandeen 2014-06-24 10:32:06 EDT
>
> As Jeff Moyer says:
> > The file system will manually order dependent I/O.
> > What I mean by that is the file system will send down any I/O for the
> > transaction log, wait for that to complete, issue a barrier (which will
> > be a noop in the case of a battery-backed write cache), and then send
> > down the commit block along with another barrier. As such, you cannot
> > have the I/O scheduler reorder the commit block and the log entry with
> > which it is associated.
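The ordering Jeff describes can be mimicked in userspace as a sketch (an
analogy only, not XFS code): the dependent record is written only after
the first write has been flushed to stable storage.

```shell
# Userspace analogy of filesystem-enforced write ordering (not XFS code):
# the "commit record" is written only after the "log" write was flushed.
log=$(mktemp) && commit=$(mktemp)

printf 'log entry\n' > "$log"
sync                              # flush: plays the role of the first barrier
printf 'commit record\n' > "$commit"
sync                              # second barrier: commit durable after this
```

A plain sync flushes everything, which is heavier than the targeted cache
flush the kernel issues, but the ordering guarantee it illustrates is the
same: the commit can never reach the disk before the log entry it depends
on.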
Ciao,
--
Martin
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 11+ messages (as of 2015-12-12 12:26 UTC)
2015-12-12 10:24 XFS and nobarrier with SSDs Georg Schönberger
2015-12-12 12:26 ` Martin Steigerwald [this message]
2015-12-14 6:43 ` Georg Schönberger
2015-12-14 8:38 ` Martin Steigerwald
2015-12-14 9:58 ` Christoph Hellwig
2015-12-14 10:18 ` Georg Schönberger
2015-12-14 10:27 ` Christoph Hellwig
2015-12-14 10:34 ` Georg Schönberger
2015-12-14 16:39 ` Eric Sandeen
2015-12-26 23:44 ` Linda Walsh
2015-12-14 11:48 ` Emmanuel Florac