public inbox for linux-xfs@vger.kernel.org
From: Emmanuel Florac <eflorac@intellique.com>
To: 王勇 <wang.yong@datatom.com>
Cc: linux-xfs <linux-xfs@vger.kernel.org>, xfs <xfs@oss.sgi.com>
Subject: Re: xfs seems performance lower when long time seq write into ssd
Date: Mon, 31 Jul 2017 13:17:54 +0200	[thread overview]
Message-ID: <20170731131754.73fac669@harpe.intellique.com> (raw)
In-Reply-To: <CA+jpiF7jK8m+a6LGKfj-CbfMEHqjsL5ZdagnzOAdRRwNxZg0oA@mail.gmail.com>


Le Mon, 31 Jul 2017 16:51:29 +0800
王勇 <wang.yong@datatom.com> écrivait:

> Hi All,
> Recently, I ran into a strange issue. I have been doing sequential
> writes (blocksize=1m) into mounted folders.
> If the total size is small, the average write rate is 370 MB/s
> (few files: 4MB * 256 * 60).
> If the total size is large, the average write rate is 180 MB/s
> (many files: 4MB * 256 * 600).
> On the raw SSD, the sequential write benchmark gives 400 MB/s.
> 
> Can anybody help explain it? Is some argument wrong, or is it
> something else?

There are several problems here:

First, you didn't mention which version of xfsprogs you're using (try
"xfs_repair -V", for instance). You didn't say what your SSD is like
(make, model, size, flash type, interface, etc.), either.

Second, you didn't give the exact command lines you used in each of
your tests (small, big, and raw): "dd if=/dev/zero...", "iozone", etc.?
Nor how you measured throughput: is it the overall mean throughput as
reported by "dd" after the fact, or did you sample the performance at
points during the test?
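For what it's worth, a minimal way to get a trustworthy mean figure out
of dd is to force a flush before it reports. This is only a sketch: the
target path below is a stand-in; point it at a file on the XFS mount
under test.

```shell
# Write 64 MiB in 1 MiB blocks. conv=fdatasync makes dd flush to
# stable storage before printing its rate, so the figure reflects
# the device, not the page cache. mktemp is a placeholder target.
target=$(mktemp)
dd if=/dev/zero of="$target" bs=1M count=64 conv=fdatasync
rm -f "$target"
```

And to see whether the rate degrades over time rather than on average,
sample it live with "iostat -xm 1" (from the sysstat package) while the
test runs.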

Third, SSDs don't work like HDDs. In particular, you can't simply
overwrite data blocks that are marked unused but still hold data;
you must erase them first. Worse, you write in pages (typically 4K) but
erase much larger blocks (typically 128K or more).
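With the illustrative numbers above, the mismatch is easy to quantify:

```shell
# 4K pages vs 128K erase blocks: one erase block holds 32 pages,
# so rewriting a single page can force the controller to relocate
# up to 31 still-live pages before it can erase the block.
page=4096
eblock=131072
echo "pages per erase block: $((eblock / page))"
```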

So you can't benchmark an SSD properly by simply writing again and
again; you MUST use the "trim" command beforehand, to clean up blocks
that have been written but NOT actually erased. Otherwise the SSD
controller will have to perform "garbage collection" while writing,
causing slowdowns or even pauses.
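A sketch of that pre-benchmark cleanup, guarded so it does nothing until
you name the device. SSD_DEV is a placeholder, and blkdiscard wipes ALL
data on the device, so double-check before running it for real:

```shell
# Discard every block so the controller starts from a clean FTL
# state, then re-create the filesystem. DESTRUCTIVE when armed.
dev=${SSD_DEV:-}
if [ -z "$dev" ]; then
    echo "set SSD_DEV to your SSD block device first (dry run)"
else
    blkdiscard "$dev"
    mkfs.xfs -f "$dev"
fi
```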
Also notice that many SSD controllers perform on-the-fly compression.
That may greatly affect performance.

Let's say that your SSD is 1000 GB. You run 10 tests with a 60 GB
data set. That fills 600 GB of flash.

You erased the files, but *that doesn't necessarily clear the flash*. So
when you try writing 600 GB the next time, the SSD will use its
remaining 400 GB of clean flash, then become very slow for the last
200 GB because it must clear up some space before each write...
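That's also why it helps to trim between runs. On a mounted XFS
filesystem you can hand the free-space map down to the SSD with fstrim
(needs root; XFS_MNT below is a placeholder for your mount point):

```shell
# fstrim passes the filesystem's free extents to the SSD as discard
# requests; run it after deleting the previous data set. Guarded so
# it is a no-op until XFS_MNT is set.
mnt=${XFS_MNT:-}
if [ -z "$mnt" ]; then
    echo "set XFS_MNT to your XFS mount point first (dry run)"
else
    fstrim -v "$mnt"
fi
```

Alternatively, XFS can be mounted with "-o discard" for continuous
trimming, at some cost in write latency.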

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

