public inbox for linux-xfs@vger.kernel.org
From: "Georg Schönberger" <g.schoenberger@xortex.com>
To: Martin Steigerwald <martin@lichtvoll.de>,
	"xfs@oss.sgi.com" <xfs@oss.sgi.com>
Subject: Re: XFS and nobarrier with SSDs
Date: Mon, 14 Dec 2015 06:43:48 +0000	[thread overview]
Message-ID: <566E6524.6070401@xortex.com> (raw)
In-Reply-To: <3496214.YTSKClH6pV@merkaba>


On 2015-12-12 13:26, Martin Steigerwald wrote:
> On Saturday, 12 December 2015, 10:24:25 CET, Georg Schönberger wrote:
>> We are using a lot of SSDs in our Ceph clusters with XFS. Our SSDs have
>> Power Loss Protection via capacitors, so is it safe in all cases to run XFS
>> with nobarrier on them? Or is there indeed a need for a specific I/O
>> scheduler?
> I do think that using nobarrier would be safe with those SSDs as long as there
> is no other caching happening on the hardware side, for example inside the
> controller that talks to the SSDs.
Hi Martin, thanks for your response!

We are using HBAs, not RAID controllers, so there is no additional
cache in the I/O stack.
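For what it's worth, here is a small (purely illustrative) Python helper I could use to double-check which XFS filesystems are actually mounted with nobarrier, by parsing /proc/mounts. The sample line and device names are made up:

```python
def xfs_nobarrier_mounts(mounts_text: str) -> list[tuple[str, str]]:
    """Return (device, mountpoint) pairs for XFS filesystems whose
    mount options include 'nobarrier'."""
    hits = []
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/mounts format: device mountpoint fstype options dump pass
        if len(fields) >= 4 and fields[2] == 'xfs':
            if 'nobarrier' in fields[3].split(','):
                hits.append((fields[0], fields[1]))
    return hits

sample = ("/dev/sdb1 /data xfs rw,noatime,nobarrier,attr2 0 0\n"
          "/dev/sda1 / ext4 rw,relatime 0 0")
print(xfs_nobarrier_mounts(sample))  # [('/dev/sdb1', '/data')]
```

On a live system one would pass `open('/proc/mounts').read()` instead of the sample string.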

>
> I always thought barrier/nobarrier acts independently of the I/O scheduler
> thing, but I can understand the thought from the bug report you linked to
> below. As for I/O schedulers, with recent kernels and block multiqueue I see
> it being set to "none".
What do you mean by "none" here? Do you think I would be safer with 
the noop scheduler?
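As far as I understand it, the bracketed name in /sys/block/&lt;dev&gt;/queue/scheduler is the active one; under blk-mq that file may simply read "none". A quick (illustrative) Python snippet to list what each device is using:

```python
from pathlib import Path

# The bracketed entry in /sys/block/<dev>/queue/scheduler is the
# active scheduler, e.g. "noop deadline [cfq]" -> "cfq".
def active_scheduler(text: str) -> str:
    for token in text.split():
        if token.startswith('[') and token.endswith(']'):
            return token[1:-1]
    return text.strip()  # blk-mq may expose just "none"

for sched_file in Path('/sys/block').glob('*/queue/scheduler'):
    dev = sched_file.parts[-3]
    print(dev, active_scheduler(sched_file.read_text()))
```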

>
>> I have found a recent discussion on the Ceph mailing list, anyone from XFS
>> that can help us?
>>
>> *http://www.spinics.net/lists/ceph-users/msg22053.html
> Also see:
>
> http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F
I've already read that XFS wiki entry before and have also found some 
Intel presentations suggesting the use of nobarrier with their 
enterprise SSDs. But confirmation from a block layer specialist would 
be a good thing!

>
>> *https://bugzilla.redhat.com/show_bug.cgi?id=1104380
> Interesting. Never thought of that one.
>
> So would it be safe to interrupt the flow of data towards the SSD at any point
> in time with reordering I/O schedulers in place? And how about blk-mq, which
> has multiple software queues?
Maybe we should ask the block layer mailing list about that?

>
> I like to think that they are still independent of the barrier thing and the
> last bug comment by Eric, where he quoted from Jeff, supports this:
>
>> Eric Sandeen 2014-06-24 10:32:06 EDT
>>
>> As Jeff Moyer says:
>>> The file system will manually order dependent I/O.
>>> What I mean by that is the file system will send down any I/O for the
>>> transaction log, wait for that to complete, issue a barrier (which will
>>> be a noop in the case of a battery-backed write cache), and then send
>>> down the commit block along with another barrier.  As such, you cannot
>>> have the I/O scheduler reorder the commit block and the log entry with
>>> which it is associated.
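If I read Jeff correctly, the pattern is: write the log I/O, wait and flush, then write the commit record and flush again. A toy Python sketch of that ordering, on a plain file rather than real XFS internals, with fsync standing in for the barrier/flush:

```python
import os
import tempfile

def commit_transaction(fd: int, log_entries: list[bytes],
                       commit_record: bytes) -> None:
    # Write the log entries for the transaction...
    for entry in log_entries:
        os.write(fd, entry)
    # ...wait for them to reach stable storage (the "barrier"; a no-op
    # cache-flush-wise on a persistent write cache, but the wait still
    # orders the commit record after the log)...
    os.fsync(fd)
    # ...and only then write the commit record, followed by another flush.
    os.write(fd, commit_record)
    os.fsync(fd)

with tempfile.NamedTemporaryFile(delete=False) as f:
    commit_transaction(f.fileno(), [b'update-A\n', b'update-B\n'], b'COMMIT\n')
with open(f.name, 'rb') as g:
    print(g.read())  # b'update-A\nupdate-B\nCOMMIT\n'
os.unlink(f.name)
```

The point being that the ordering comes from the filesystem waiting, not from the scheduler behaving nicely.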
If it truly works that way then I do not see any problem using 
nobarrier with SSDs that have power loss protection.
I have also seen some people say that enterprise SSDs with PLP simply 
ignore the flush; if that is the case,
then using nobarrier would bring no performance improvement anyway...
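One way to check that on a given box would be to time small write+fsync cycles; if the drive honours flushes cheaply (PLP) the per-fsync latency should be tiny. A rough (illustrative) sketch, using a temp file here rather than a file on the actual data device:

```python
import os
import tempfile
import time

# Time N small write+fsync cycles.  On a drive with power-loss
# protection the flush should be near-free, so nobarrier would buy
# little; with a volatile cache the per-fsync cost is clearly higher.
N = 100
with tempfile.NamedTemporaryFile() as f:
    start = time.perf_counter()
    for _ in range(N):
        os.write(f.fileno(), b'x' * 4096)
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start

print(f'{N} fsyncs in {elapsed:.3f}s -> {elapsed / N * 1e3:.2f} ms each')
```

To measure the device in question one would of course point the file at a mount on that device instead of $TMPDIR.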

Cheers, Georg

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 11+ messages
2015-12-12 10:24 XFS and nobarrier with SSDs Georg Schönberger
2015-12-12 12:26 ` Martin Steigerwald
2015-12-14  6:43   ` Georg Schönberger [this message]
2015-12-14  8:38     ` Martin Steigerwald
2015-12-14  9:58       ` Christoph Hellwig
2015-12-14 10:18         ` Georg Schönberger
2015-12-14 10:27           ` Christoph Hellwig
2015-12-14 10:34             ` Georg Schönberger
2015-12-14 16:39               ` Eric Sandeen
2015-12-26 23:44             ` Linda Walsh
2015-12-14 11:48 ` Emmanuel Florac
