public inbox for linux-xfs@vger.kernel.org
* Re: Optimize RAID0 for max IOPS?
       [not found]     ` <AANLkTikx4g99-Cf_09kEGfF2mmf4Dnuh2A5gTrtKweDy@mail.gmail.com>
@ 2011-01-24 15:25       ` Justin Piszcz
  2011-01-24 20:48         ` Wolfgang Denk
  2011-01-24 21:57         ` Wolfgang Denk
  0 siblings, 2 replies; 8+ messages in thread
From: Justin Piszcz @ 2011-01-24 15:25 UTC (permalink / raw)
  To: CoolCold; +Cc: linux-raid, stefan.huebner, Wolfgang Denk, xfs

On Mon, 24 Jan 2011, CoolCold wrote:

>> So can anybody help answering these questions:
>>
>> - are there any special options when creating the RAID0 to make it
>>  perform faster for such a use case?
>> - are there other tunables, any special MD / LVM / file system /
>>  read ahead / buffer cache / ... parameters to look for?
> XFS is known for its slow speed on metadata operations like updating
> file attributes or removing files, but that changed with 2.6.35,
> where delaylog became available. Citing Dave Chinner:
> < dchinner> Indeed, the biggest concurrency limitation has
> traditionally been the transaction commit/journalling code, but that's
> a lot more scalable now with delayed logging....
>
> So, you may need to benchmark fs part.

Some info on XFS benchmark with delaylog here:
http://comments.gmane.org/gmane.comp.file-systems.xfs.general/34379

Justin.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Optimize RAID0 for max IOPS?
  2011-01-24 15:25       ` Optimize RAID0 for max IOPS? Justin Piszcz
@ 2011-01-24 20:48         ` Wolfgang Denk
  2011-01-24 21:57         ` Wolfgang Denk
  1 sibling, 0 replies; 8+ messages in thread
From: Wolfgang Denk @ 2011-01-24 20:48 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-raid, xfs

Dear Justin Piszcz,

In message <alpine.DEB.2.00.1101241024230.14640@p34.internal.lan> you wrote:
>
> > So, you may need to benchmark fs part.
> 
> Some info on XFS benchmark with delaylog here:
> http://comments.gmane.org/gmane.comp.file-systems.xfs.general/34379

Thanks a lot for the pointer. I will try this out.

Best regards,

Wolfgang Denk

-- 
DENX Software Engineering GmbH,     MD: Wolfgang Denk & Detlev Zundel
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: wd@denx.de
Madness takes its toll. Please have exact change.


* Re: Optimize RAID0 for max IOPS?
  2011-01-24 15:25       ` Optimize RAID0 for max IOPS? Justin Piszcz
  2011-01-24 20:48         ` Wolfgang Denk
@ 2011-01-24 21:57         ` Wolfgang Denk
  2011-01-24 23:03           ` Dave Chinner
  1 sibling, 1 reply; 8+ messages in thread
From: Wolfgang Denk @ 2011-01-24 21:57 UTC (permalink / raw)
  To: Justin Piszcz; +Cc: linux-raid, xfs

Dear Justin,

In message <alpine.DEB.2.00.1101241024230.14640@p34.internal.lan> you wrote:
> 
> Some info on XFS benchmark with delaylog here:
> http://comments.gmane.org/gmane.comp.file-systems.xfs.general/34379

For the record: I tested both the "delaylog" and the "logbsize=262144"
mount options on two systems running Fedora 14 x86_64 (kernel version
2.6.35.10-74.fc14.x86_64).


Test No.	Mount options
1		rw,noatime
2		rw,noatime,delaylog
3		rw,noatime,delaylog,logbsize=262144
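For reference, the corresponding mount invocations would look like this (the device and mount point below are only placeholders, not my actual setup):

```shell
# Test 1: baseline
mount -t xfs -o rw,noatime /dev/md0 /mnt/test

# Test 2: delayed logging enabled
mount -t xfs -o rw,noatime,delaylog /dev/md0 /mnt/test

# Test 3: delayed logging plus a 256 KiB log buffer
mount -t xfs -o rw,noatime,delaylog,logbsize=262144 /dev/md0 /mnt/test
```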


System A: Gigabyte EP35C-DS3R Mainboard, Core 2 Quad CPU Q9550 @ 2.83GHz, 4 GB RAM
--------- software RAID 5 using 4 x old Maxtor 7Y250M0 S-ATA I disks
	  (chunk size 16 kB, using S-ATA ports on main board), XFS

Test 1:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
A1               8G   844  96 153107  19 56427  11  2006  98 127174  15 369.4   6
Latency             13686us    1480ms    1128ms   14986us     136ms   74911us
Version  1.96       ------Sequential Create------ --------Random Create--------
A1                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   104   0 +++++ +++   115   0    89   0 +++++ +++   111   0
Latency               326ms     171us     277ms     343ms       9us     360ms
1.96,1.96,A1,1,1295714835,8G,,844,96,153107,19,56427,11,2006,98,127174,15,369.4,6,16,,,,,104,0,+++++,+++,115,0,89,0,+++++,+++,111,0,13686us,1480ms,1128ms,14986us,136ms,74911us,326ms,171us,277ms,343ms,9us,360ms

Test 2:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
A2               8G   417  46 67526   8 28251   5  1338  63 53780   5 236.0   4
Latency             38626us    1859ms     508ms   26689us     258ms     188ms
Version  1.96       ------Sequential Create------ --------Random Create--------
A2                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16    51   0 +++++ +++   128   0   102   0 +++++ +++   125   0
Latency              1526ms     169us     277ms     363ms       8us     324ms
1.96,1.96,A2,1,1295901138,8G,,417,46,67526,8,28251,5,1338,63,53780,5,236.0,4,16,,,,,51,0,+++++,+++,128,0,102,0,+++++,+++,125,0,38626us,1859ms,508ms,26689us,258ms,188ms,1526ms,169us,277ms,363ms,8us,324ms

Test 3:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
A3               8G   417  46 67526   8 28251   5  1338  63 53780   5 236.0   4
Latency             38626us    1859ms     508ms   26689us     258ms     188ms
Version  1.96       ------Sequential Create------ --------Random Create--------
A3                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16    51   0 +++++ +++   128   0   102   0 +++++ +++   125   0
Latency              1526ms     169us     277ms     363ms       8us     324ms
1.96,1.96,A3,1,1295901138,8G,,417,46,67526,8,28251,5,1338,63,53780,5,236.0,4,16,,,,,51,0,+++++,+++,128,0,102,0,+++++,+++,125,0,38626us,1859ms,508ms,26689us,258ms,188ms,1526ms,169us,277ms,363ms,8us,324ms

System B: Supermicro H8DM8-2 Mainboard, Dual-Core AMD Opteron 2216 @ 2.4 GHz, 8 GB RAM
          software RAID 6 using 6 x Seagate ST31000524NS S-ATA II disks
          (chunk size 16 kB, using a Marvell MV88SX6081 8-port SATA II PCI-X Controller)
          XFS

Test 1:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
B1              16G   403  98 198720  66 53287  49  1013  99 228076  91 545.0  31
Latency             43022us     127ms     126ms   29328us     105ms   66395us
Version  1.96       ------Sequential Create------ --------Random Create--------
B1                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16    97   1 +++++ +++    96   1    96   1 +++++ +++    95   1
Latency               326ms     349us     351ms     355ms      49us     363ms
1.96,1.96,B1,1,1295784794,16G,,403,98,198720,66,53287,49,1013,99,228076,91,545.0,31,16,,,,,97,1,+++++,+++,96,1,96,1,+++++,+++,95,1,43022us,127ms,126ms,29328us,105ms,66395us,326ms,349us,351ms,355ms,49us,363ms

Test 2:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
B2              16G   380  98 197319  68 54835  48   983  99 216812  89 527.8  31
Latency             47456us     227ms     280ms   24696us   38233us   80147us
Version  1.96       ------Sequential Create------ --------Random Create--------
B2                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16    91   1 +++++ +++   115   1    73   1 +++++ +++    96   1
Latency               355ms    2274us     833ms     750ms    1079us     400ms
1.96,1.96,B2,1,1295884032,16G,,380,98,197319,68,54835,48,983,99,216812,89,527.8,31,16,,,,,91,1,+++++,+++,115,1,73,1,+++++,+++,96,1,47456us,227ms,280ms,24696us,38233us,80147us,355ms,2274us,833ms,750ms,1079us,400ms

Test 3:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
B3              16G   402  99 175802  64 55639  48  1006  99 232748  87 543.7  32
Latency             43160us     426ms     164ms   13306us   40857us   65114us
Version  1.96       ------Sequential Create------ --------Random Create--------
B3                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16    93   1 +++++ +++   101   1    95   1 +++++ +++    95   1
Latency               479ms    2281us     383ms     366ms      22us     402ms
1.96,1.96,B3,1,1295880202,16G,,402,99,175802,64,55639,48,1006,99,232748,87,543.7,32,16,,,,,93,1,+++++,+++,101,1,95,1,+++++,+++,95,1,43160us,426ms,164ms,13306us,40857us,65114us,479ms,2281us,383ms,366ms,22us,402ms


I do not see any significant improvement in any of the parameters -
especially when compared to the serious performance degradation (down
to 44% for block write, 42% for block read) on system A.
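(The percentages are simply the test 2 figures divided by the test 1 figures for system A; a quick check:)

```python
# Ratio of delaylog (test 2) to baseline (test 1) bonnie++ results, system A.
baseline_write, delaylog_write = 153107, 67526   # block write, K/sec
baseline_read, delaylog_read = 127174, 53780     # block read, K/sec

write_pct = 100 * delaylog_write / baseline_write
read_pct = 100 * delaylog_read / baseline_read
print(f"block write: {write_pct:.0f}%, block read: {read_pct:.0f}%")
```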

Best regards,

Wolfgang Denk

-- 
DENX Software Engineering GmbH,     MD: Wolfgang Denk & Detlev Zundel
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: wd@denx.de
A supercomputer is a machine that runs an endless loop in 2 seconds.


* Re: Optimize RAID0 for max IOPS?
  2011-01-24 21:57         ` Wolfgang Denk
@ 2011-01-24 23:03           ` Dave Chinner
  2011-01-25  7:39             ` Emmanuel Florac
  0 siblings, 1 reply; 8+ messages in thread
From: Dave Chinner @ 2011-01-24 23:03 UTC (permalink / raw)
  To: Wolfgang Denk; +Cc: linux-raid, Justin Piszcz, xfs

On Mon, Jan 24, 2011 at 10:57:13PM +0100, Wolfgang Denk wrote:
> Dear Justin,
> 
> In message <alpine.DEB.2.00.1101241024230.14640@p34.internal.lan> you wrote:
> > 
> > Some info on XFS benchmark with delaylog here:
> > http://comments.gmane.org/gmane.comp.file-systems.xfs.general/34379
> 
> For the record: I tested both the "delaylog" and the "logbsize=262144"
> mount options on two systems running Fedora 14 x86_64 (kernel version
> 2.6.35.10-74.fc14.x86_64).
> 
> 
> Test No.	Mount options
> 1		rw,noatime
> 2		rw,noatime,delaylog
> 3		rw,noatime,delaylog,logbsize=262144
> 
> 
> System A: Gigabyte EP35C-DS3R Mainboard, Core 2 Quad CPU Q9550 @ 2.83GHz, 4 GB RAM
> --------- software RAID 5 using 4 x old Maxtor 7Y250M0 S-ATA I disks
> 	  (chunk size 16 kB, using S-ATA ports on main board), XFS
> 
> Test 1:
> 
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> A1               8G   844  96 153107  19 56427  11  2006  98 127174  15 369.4   6
> Latency             13686us    1480ms    1128ms   14986us     136ms   74911us
> Version  1.96       ------Sequential Create------ --------Random Create--------
> A1                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16   104   0 +++++ +++   115   0    89   0 +++++ +++   111   0

Only 16 files? You need to test something that takes more than 5
milliseconds to run. Given that XFS can run at >20,000 creates/s for
a single threaded sequential create like this, perhaps you should
start at 100,000 files (maybe a million) so you get an idea of
sustained performance.
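Something along these lines would do it (the directory is a placeholder; note that bonnie++'s -n option counts in units of 1024 files, and -s 0 skips the sequential-IO phase so only the file tests run):

```shell
# Create/stat/delete 102,400 files (bonnie++'s -n is in units of 1024)
bonnie++ -d /mnt/test -n 100 -s 0

# Or roughly a million files, for sustained metadata rates
bonnie++ -d /mnt/test -n 1000 -s 0
```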

.....

> I do not see any significant improvement in any of the parameters -
> especially when compared to the serious performance degradation (down
> to 44% for block write, 42% for block read) on system A.

delaylog does not affect the block IO path in any way, so something
else is going on there. You need to sort that out before drawing any
conclusions.

Similarly, you need to test something relevant to your workload, not
use a canned benchmark in the expectation that the results are in any
way meaningful to your real workload. Also, if you do use a stupid
canned benchmark, make sure you configure it to test something
relevant to what you are trying to compare...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Optimize RAID0 for max IOPS?
  2011-01-24 23:03           ` Dave Chinner
@ 2011-01-25  7:39             ` Emmanuel Florac
  2011-01-25  8:36               ` Dave Chinner
  0 siblings, 1 reply; 8+ messages in thread
From: Emmanuel Florac @ 2011-01-25  7:39 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-raid, xfs, Wolfgang Denk, Justin Piszcz

On Tue, 25 Jan 2011 10:03:14 +1100, you wrote:

> Only 16 files?

IIRC this is 16 thousand files. Though even that is not enough; I
generally use 80 to 160 (i.e. 80,000 to 160,000 files) for tests.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: Optimize RAID0 for max IOPS?
  2011-01-25  7:39             ` Emmanuel Florac
@ 2011-01-25  8:36               ` Dave Chinner
  2011-01-25 12:45                 ` Wolfgang Denk
  0 siblings, 1 reply; 8+ messages in thread
From: Dave Chinner @ 2011-01-25  8:36 UTC (permalink / raw)
  To: Emmanuel Florac; +Cc: linux-raid, xfs, Wolfgang Denk, Justin Piszcz

[ As a small note - if you are going to comment on the results table
from a previous message, please don't cut it from your response.
Context is important. I pasted the relevant part back in so I can
refer back to it in my response. ]

On Tue, Jan 25, 2011 at 08:39:00AM +0100, Emmanuel Florac wrote:
> On Tue, 25 Jan 2011 10:03:14 +1100, you wrote:
> > > Version  1.96       ------Sequential Create------ --------Random Create--------
> > > A1                  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> > >               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> > >                  16   104   0 +++++ +++   115   0    89   0 +++++ +++   111   0
> > 
> > Only 16 files?
> 
> IIRC this is 16 thousands of files. Though this is not enough, I
> generally use 80 to 160 for tests.

Yes, you're right, the bonnie++ man page states that it is in units
of 1024 files. It would be nice if there were a "k" to signify that,
so people who aren't intimately familiar with its output format could
see exactly what was tested....

As it is, a create rate of 104 files/s (note the consistency of
units between 2 adjacent numbers!) indicates something else is
screwed, because my local test VM on RAID0 gets numbers like this:

Version  1.96       ------Sequential Create------ --------Random Create--------
test-4              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 25507  90 +++++ +++ 30472  97 25281  93 +++++ +++ 29077  97
Latency             23864us     204us   21092us   18855us      82us     121us

IOWs, create rates of 25k/s and unlink of 30k/s and it is clearly
CPU bound.

Therein lies the difference: the original numbers have 0% CPU usage,
which indicates that the test is blocking.  Something is causing the
reported test system to be blocked almost all the time.

/me looks closer.

Oh, despite $subject being "RAID0" the filesystems being tested are
on RAID5 and RAID6 with very small chunk sizes on slow SATA drives.
This is smelling like a case of barrier IOs on software raid on
cheap storage....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Optimize RAID0 for max IOPS?
  2011-01-25  8:36               ` Dave Chinner
@ 2011-01-25 12:45                 ` Wolfgang Denk
  2011-01-25 12:51                   ` Emmanuel Florac
  0 siblings, 1 reply; 8+ messages in thread
From: Wolfgang Denk @ 2011-01-25 12:45 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-raid, Justin Piszcz, xfs

Dear Dave Chinner,

In message <20110125083643.GE28803@dastard> you wrote:
>
> Oh, despite $subject being "RAID0" the filesystems being tested are
> on RAID5 and RAID6 with very small chunk sizes on slow SATA drives.
> This is smelling like a case of barrier IOs on software raid on
> cheap storage....

Right. [Any way to avoid these, btw?]  I got side-tracked by the
comments about the new (to me) delaylog mount option to XFS; as the
results were not exactly as expected, I thought it might be
interesting to report them.

But as the subject says, my current topic is tuning RAID0 to avoid
exactly this type of bottleneck; or rather, looking for tunable
options on RAID0.
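For what it's worth, the knobs I am aware of are the chunk size at array creation time, the block-device read-ahead, and aligning the filesystem to the stripe geometry; a sketch (device names and sizes are only examples, not a recommendation):

```shell
# Create a RAID0 array with an explicit chunk size (64 KiB here)
mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=64 \
    /dev/sd[bcde]1

# Increase the array's read-ahead (argument is in 512-byte sectors)
blockdev --setra 4096 /dev/md0

# Align XFS to the array geometry (su = chunk size, sw = number of disks)
mkfs.xfs -d su=64k,sw=4 /dev/md0
```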

Best regards,

Wolfgang Denk

--
DENX Software Engineering GmbH,     MD: Wolfgang Denk & Detlev Zundel
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: wd@denx.de
PLEASE NOTE: Some Quantum Physics Theories Suggest That When the Con-
sumer Is Not Directly Observing This Product, It May Cease  to  Exist
or Will Exist Only in a Vague and Undetermined State.


* Re: Optimize RAID0 for max IOPS?
  2011-01-25 12:45                 ` Wolfgang Denk
@ 2011-01-25 12:51                   ` Emmanuel Florac
  0 siblings, 0 replies; 8+ messages in thread
From: Emmanuel Florac @ 2011-01-25 12:51 UTC (permalink / raw)
  To: Wolfgang Denk; +Cc: linux-raid, xfs, Justin, Piszcz

On Tue, 25 Jan 2011 13:45:09 +0100,
Wolfgang Denk <wd@denx.de> wrote:

> > This is smelling like a case of barrier IOs on software raid on
> > cheap storage....  
> 
> Right. [Any way to avoid these, btw?] 

Easy enough, use the "nobarrier" mount option. 
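For example (device and mount point are placeholders; note that running without barriers is only safe when the write caches are battery-backed or disabled, otherwise a power failure can corrupt the filesystem):

```shell
# Disable write barriers on an XFS mount -- only safe with battery-backed
# or disabled drive write caches
mount -t xfs -o rw,noatime,nobarrier /dev/md0 /mnt/test
```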

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


end of thread, other threads:[~2011-01-25 12:49 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20110118210112.D13A236C@gemini.denx.de>
     [not found] ` <4D361F26.3060507@stud.tu-ilmenau.de>
     [not found]   ` <20110119192104.1FA92D30267@gemini.denx.de>
     [not found]     ` <AANLkTikx4g99-Cf_09kEGfF2mmf4Dnuh2A5gTrtKweDy@mail.gmail.com>
2011-01-24 15:25       ` Optimize RAID0 for max IOPS? Justin Piszcz
2011-01-24 20:48         ` Wolfgang Denk
2011-01-24 21:57         ` Wolfgang Denk
2011-01-24 23:03           ` Dave Chinner
2011-01-25  7:39             ` Emmanuel Florac
2011-01-25  8:36               ` Dave Chinner
2011-01-25 12:45                 ` Wolfgang Denk
2011-01-25 12:51                   ` Emmanuel Florac
