public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Greg Freemyer <greg.freemyer@gmail.com>
Cc: Stefan Ring <stefanrin@gmail.com>,
	weber@zbfmail.de, Xfs <xfs@oss.sgi.com>
Subject: Re: mount options question
Date: Sat, 30 Aug 2014 09:45:05 +1000	[thread overview]
Message-ID: <20140829234505.GE20518@dastard> (raw)
In-Reply-To: <6306cfa5-457d-4794-8fc0-1768f7f1deec@email.android.com>

On Fri, Aug 29, 2014 at 07:26:59AM -0400, Greg Freemyer wrote:
> 
> 
> On August 29, 2014 4:37:38 AM EDT, Dave Chinner <david@fromorbit.com> wrote:
> >On Fri, Aug 29, 2014 at 08:31:43AM +0200, Stefan Ring wrote:
> >> > On Thu, Aug 28, 2014 at 1:07 AM, Dave Chinner <david@fromorbit.com> wrote:
> >> > On Wed, Aug 27, 2014 at 12:14:21PM +0200, Marko Weber|8000 wrote:
> >> >>
> >> >> Sorry Dave and all others,
> >> >>
> >> >> can you guys recommend the most stable / best mount options for
> >> >> my new server with SSDs and an XFS filesystem?
> >> >>
> >> >> At the moment I would set:
> >> >> defaults,nobarrier,discard,logbsize=256k,noikeep
> >> >> Or is just "defaults" the best solution, with XFS itself
> >> >> detecting what's best?
> >> >>
> >> >> Can you guide me a bit?
> >> >>
> >> >> As elevator I set elevator=noop.
> >> >>
> >> >> I set up the disks with Linux software RAID 1. On top of the
> >> >> RAID is LVM (for some data partitions).
> >> >>
> >> >>
> >> >> It would be nice to hear some tips from you.
> >> >
> >> > Unless you have specific requirements or have the knowledge to
> >> > understand how the different options affect behaviour, then just
> >> > use the defaults.
> >> 
> >> Mostly agreed, but using "discard" would be a no-brainer for me. I
> >> suppose XFS does not automatically switch it on for non-rotational
> >> storage.
> >
> >Yup, you're not using your brain. :P
> >
> >mount -o discard *sucks* on so many levels it is not funny. I don't
> >recommend that anybody *ever* use it, on XFS, ext4 or btrfs.  Just
> >use fstrim if you ever need to clean up a SSD.
> 

> In particular, TRIM is a synchronous command on many SSDs; I don't
> know about the impact on the kernel block stack.

blkdev_issue_discard() is synchronous as well, which is a big
problem for something that needs to iterate (potentially) thousands
of regions for discard when a journal checkpoint completes....
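For reference, the batched alternative is a single fstrim(8) pass over the mounted filesystem, run occasionally, instead of paying that per-extent cost at every journal checkpoint. A sketch (the mount point is a placeholder, and the fstrim.timer unit is an assumption: util-linux ships it on many systemd-based distros):

```shell
# One-off batched discard of a mounted filesystem (run as root);
# -v reports how many bytes were trimmed. /mnt/data is a placeholder.
fstrim -v /mnt/data

# Or schedule it weekly via the fstrim.timer unit, where available
# (assumption: a systemd-based distro shipping this util-linux unit).
systemctl enable fstrim.timer
systemctl start fstrim.timer
```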

> For the SSD itself, that means it basically flushes its write
> cache on every TRIM call.

Oh, it's worse than that, usually. TRIM is one of the slowest
operations you can run on many drives, so it can take hundreds of
milliseconds to execute....

> I often tell people to do performance testing with and without it
> and report back to me if they see no degradation caused by -o
> discard.  To date no one has ever reported back.  I think -o
> discard should have never been introduced and certainly not 5
> years ago.

It was introduced into XFS as a checkbox feature. We resisted as
long as we could, but too many people were shouting at us that we
needed realtime discard because ext4 and btrfs had it. Of course,
all those people shouting for it realised we were right the moment
they tried to use it and found that performance was woeful. Not to
mention that SSD TRIM implementations were so bad that they caused
random data corruption by trimming the wrong regions, that drives
would simply hang at random, and that in a couple of cases too many
TRIMs issued too quickly would brick the drive...

So, yeah, it was implemented because lots of people demanded it, not
because it was a good idea.

> In theory, SSDs that handle TRIM as an asynchronous
> command are now available, but I don't know any specifics.

Queued TRIM requires SATA 3.1, and I'm not sure that there is any
hardware out there that uses it end-to-end yet. And the block layer
can't make use of it yet, either...
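To see what a given drive and kernel actually advertise, the usual places to look are the block queue's sysfs attributes and hdparm's identify output (device names here are placeholders for your own drive):

```shell
# What the kernel sees: discard_max_bytes of 0 means no
# discard/TRIM capability is exposed for this device.
cat /sys/block/sda/queue/discard_granularity
cat /sys/block/sda/queue/discard_max_bytes

# What the drive itself reports (requires root and hdparm);
# look for "Data Set Management TRIM supported" in the output.
hdparm -I /dev/sda | grep -i trim
```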

> In any case, fstrim works for almost all workloads and doesn't
> have the potential continuous negative impact of -o discard.

Precisely my point - you just gave some more detail. :)

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 8+ messages
2014-08-27 10:14 mount options question Marko Weber|8000
2014-08-27 23:07 ` Dave Chinner
2014-08-29  6:31   ` Stefan Ring
2014-08-29  8:37     ` Dave Chinner
2014-08-29 11:26       ` Greg Freemyer
2014-08-29 23:45         ` Dave Chinner [this message]
2014-08-30  3:43           ` Greg Freemyer
2014-09-05 12:15       ` Stefan Ring
