* XFS performance seems lower during long sequential writes to an SSD
@ 2017-07-31 8:51 王勇
From: 王勇 @ 2017-07-31 8:51 UTC (permalink / raw)
To: linux-xfs, xfs, wang.yong
Hi All,
Recently I met a strange issue. I have been doing sequential writes
(block size = 1 MB) into mounted folders.
If the total size is small, the average write rate is 370 MB/s (few
files: 4 MB * 256 * 60).
If the total size is large, the average write rate is 180 MB/s (many
files: 4 MB * 256 * 600).
A raw benchmark of the same SSD shows a sequential write rate of 400 MB/s.
Can anybody help explain this? Is one of my parameters wrong, or is it
something else?
The parameters are listed below:
Hardware: Intel 3750 SSD, 480 GB (SATA 3)
centos: 7.3 x86_64
kernel: 3.10.0-514.el7.x86_64
software: mkfs.xfs
mount feature: /dev/sdb1 type xfs
(rw,noatime,attr2,inode64,logbsize=256k,noquota)
xfs_info /dev/sdl1 :
meta-data=/dev/sdl1 isize=512 agcount=4, agsize=28975477 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=115901905, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=56592, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
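For reference, the workload above (many 4 MB files written with 1 MB sequential blocks) could be reproduced with a loop like the following. This is a scaled-down, hypothetical sketch: the file count and sizes are tiny placeholders (the real runs used 256*60 and 256*600 files), and the output directory is a temp dir rather than the mounted XFS folder.

```shell
# Scaled-down sketch of the reported workload: several files, each
# written with 1 MiB sequential blocks and flushed to stable storage.
# Counts/paths are placeholders; point outdir at the XFS mount and
# raise the counts to match the real test.
outdir=$(mktemp -d)
for i in $(seq 1 4); do
    # 4 x 1 MiB sequential writes per file; conv=fsync includes the
    # final flush in dd's timing
    dd if=/dev/zero of="$outdir/file$i" bs=1M count=4 conv=fsync status=none
done
nfiles=$(ls "$outdir" | wc -l)
echo "wrote $nfiles files"
rm -rf "$outdir"
```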
Thanks and regards
* Re: XFS performance seems lower during long sequential writes to an SSD
@ 2017-07-31 11:17 ` Emmanuel Florac
From: Emmanuel Florac @ 2017-07-31 11:17 UTC (permalink / raw)
To: 王勇; +Cc: linux-xfs, xfs
On Mon, 31 Jul 2017 16:51:29 +0800,
王勇 <wang.yong@datatom.com> wrote:
> Hi All,
> Recently I met a strange issue. I have been doing sequential writes
> (block size = 1 MB) into mounted folders.
> If the total size is small, the average write rate is 370 MB/s (few
> files: 4 MB * 256 * 60).
> If the total size is large, the average write rate is 180 MB/s (many
> files: 4 MB * 256 * 600).
> A raw benchmark of the same SSD shows a sequential write rate of 400 MB/s.
>
> Can anybody help explain this? Is one of my parameters wrong, or is
> it something else?
There are several problems here:
First, you didn't mention which version of xfsprogs you're using (try
"xfs_repair -V", for instance). Nor did you say what your SSD is like
(make, model, size, flash type, interface, etc.).
Second, you didn't give the exact command lines you used in each of
your tests (small, big, and raw): "dd if=/dev/zero...", "iozone", etc.
And how did you measure throughput: is it the overall mean throughput
as reported by "dd" after the fact, or did you sample the performance
at some points during the test?
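This distinction matters because dd's reported rate with buffered I/O mostly measures the page cache, not the device. A minimal sketch of an unambiguous measurement (file name and sizes are placeholders, assuming GNU dd):

```shell
# conv=fsync makes the final flush to stable storage part of dd's
# timed run, so the mean rate dd prints reflects the device rather
# than the page cache. oflag=direct would bypass the cache entirely,
# but its support depends on the underlying filesystem.
tmpfile=$(mktemp)
rate_line=$(dd if=/dev/zero of="$tmpfile" bs=1M count=8 conv=fsync 2>&1 | tail -n 1)
echo "$rate_line"
rm -f "$tmpfile"
```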
Third, SSDs don't work like HDDs. In particular, you can't simply
overwrite data blocks that are marked unused but still hold data;
you must erase them first. Worse, you write in pages (typically 4K) but
erase much larger blocks (typically 128K or more).
So you can't benchmark an SSD properly by simply writing again and
again; you MUST use the "trim" command beforehand, to clean up blocks
that have been written but NOT actually erased. Otherwise the SSD
controller will need to perform garbage collection while writing,
causing slowdowns or even pauses.
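Whether the device accepts TRIM at all can be checked from sysfs before benchmarking; on a mounted filesystem, running "fstrim -v <mountpoint>" as root then discards the unused blocks. A sketch, with a hypothetical device name:

```shell
# A non-zero discard_max_bytes means the block device accepts discard
# (TRIM) commands; zero or a missing file means it does not.
dev=sdb   # hypothetical device name; adjust to the SSD under test
f="/sys/block/$dev/queue/discard_max_bytes"
if [ -r "$f" ] && [ "$(cat "$f")" -gt 0 ]; then
    msg="$dev supports discard; run 'fstrim -v <mountpoint>' before benchmarking"
else
    msg="$dev does not advertise discard support (or is not present)"
fi
echo "$msg"
```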
Also notice that many SSD controllers perform on-the-fly compression.
That may greatly affect performance.
Let's say your SSD is 1000 GB. You run 10 tests with a 60 GB
data set. That fills 600 GB of flash.
You erased the files, but *that doesn't necessarily clear the flash*. So
when you try writing 600 GB the next time, the SSD will use its
remaining 400 GB, then become very slow for the last 200 GB because it
must clear up some space before each write...
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <eflorac@intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------