From: "Stefan /*St0fF*/ Hübner" <stefan.huebner@stud.tu-ilmenau.de>
To: Roberto Spadim <roberto@spadim.com.br>
Cc: Wolfgang Denk <wd@denx.de>, linux-raid@vger.kernel.org
Subject: Re: Optimize RAID0 for max IOPS?
Date: Wed, 19 Jan 2011 23:36:45 +0100
Message-ID: <4D37677D.9010108@stud.tu-ilmenau.de>
In-Reply-To: <AANLkTimY8XWxnx1MZWXp2undt9J35U8vCyuCHaSF2KkJ@mail.gmail.com>
@Roberto: I guess you're right. BUT: I have not seen 900MB/s coming from
a software raid (i.e. read access), but I've seen it from a 9750 on an
LSI SASx28 backplane, running RAID6 over 16 disks (HDS722020ALA330). So
one might not be wrong assuming that on current raid controllers the
hardware/software matching and timing are far more optimized than
anything mdraid can achieve.
The 9650 and 9690 are considerably slower, but I've seen 550MB/s
throughput from those as well (I don't recall the setup anymore, though).
The maximum read rate I've seen from a software raid was around 350MB/s -
hence my answers. And if people had problems with controllers that are
five or more years old by now, the numbers are not really comparable...
Then again, there are also parameters on the controller that can be
tweaked, and we'd need a simple way to recreate the testing scenario.
We can discuss and throw in further numbers and experience, but without
being able to recreate your specific scenario we are just talking past
each other...
stefan
On 19.01.2011 20:50, Roberto Spadim wrote:
> So can anybody help answering these questions:
>
> - are there any special options when creating the RAID0 to make it
> perform faster for such a use case?
> - are there other tunables, any special MD / LVM / file system / read
> ahead / buffer cache / ... parameters to look for?
>
> let's see:
> what's your disks' (ssd, sas or sata) best block size for reads/writes?
> write this down as ->(A)
> what's your workload? 50% write / 50% read?
>
> the raid0 chunk size should be a multiple of (A)
> *****the filesystem block size should be a multiple of (A) for all disks
> *****the read-ahead should be a multiple of (A)
> for example
> /dev/sda 1kb
> /dev/sdb 4kb
>
> you should not use 6kb - you should use 4kb, 8kb or 16kb (a multiple
> of both 1kb and 4kb); see the sketch below
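>
> as a sketch (untested; the chunk size, read-ahead and device names are
> examples only):
>
>   # 64KiB chunk - a common multiple of both disks' preferred block sizes
>   mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 \
>       /dev/sda /dev/sdb
>   # read-ahead is given in 512-byte sectors: 256 sectors = 128KiB
>   blockdev --setra 256 /dev/md0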
>
> check the i/o scheduler per disk too (ssds should use noop; rotating
> disks should use cfq, deadline or another) - for instance:
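>
> (a sketch - the device names are placeholders)
>
>   cat /sys/block/sda/queue/scheduler              # show available + current
>   echo noop > /sys/block/sda/queue/scheduler      # ssd
>   echo deadline > /sys/block/sdb/queue/scheduler  # rotating disk
>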
> check the async/sync options when mounting in /etc/fstab; noatime
> reduces a lot of i/o too, and you should optimize your application as well
> use hdparm on each disk to enable dma and the fastest i/o options;
> for example, see below
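>
> (a sketch - the mount point, filesystem and device are placeholders)
>
>   # /etc/fstab - noatime avoids an inode update on every read
>   /dev/md0  /data  xfs  noatime  0 0
>
>   # enable dma (mainly relevant for older PATA disks) and sanity-check speed
>   hdparm -d1 /dev/sda
>   hdparm -tT /dev/sda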
>
> are you using only a filesystem? are you using something more? samba?
> mysql? apache? lvm?
> each of these programs has its own tuning; check their benchmarks
>
>
> getting back...
> what's a raid controller?
> cpu + memory + disk controller + disks
> but... it only runs raid software (it could even run linux...)
>
> if your computer is slower than the raid card's cpu + memory + disk
> controller, your software raid will be slower than the hardware raid
> it's a question of where the cpu/memory load of disk i/o goes (use
> dedicated hardware, or use your own?)
> got it?
> a super fast xeon with ddr3 and fibre channel running software raid is
> faster than a hardware raid using an arm (or fpga) with ddrX memory
> and a sas (fibre) connection to the disks
>
> two solutions for the same problem
> which is faster? benchmark it
> i think that if your xeon runs a database and a heavily loaded apache,
> a dedicated hardware raid can be faster; but a lightly loaded xeon can
> be faster than a dedicated hardware raid
>
>
>
> 2011/1/19 Wolfgang Denk <wd@denx.de>:
>> Dear Stefan /*St0fF*/ Hübner,
>>
>> In message <4D361F26.3060507@stud.tu-ilmenau.de> you wrote:
>>>
>>> [in German:] Schätzelein, Dein Problem sind die Platten, nicht der
>>> Controller.
>>>
>>> [in English:] Dude, the disks are your bottleneck.
>> ...
>>
>> Maybe we can stop speculations about what might be the cause of the
>> problems in some setup I do NOT intend to use, and rather discuss the
>> questions I asked.
>>
>>>> I will have 4 x 1 TB disks for this setup.
>>>>
>>>> The plan is to build a RAID0 from the 4 devices, create a physical
>>>> volume and a volume group on the resulting /dev/md?, then create 2 or
>>>> 3 logical volumes that will be used as XFS file systems.
>>
>> Clarification: I'll run /dev/md* on the raw disks, without any
>> partitions on them.
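>>
>> A minimal sketch of that setup (untested; the chunk size, volume names
>> and sizes are placeholders, not taken from this thread):
>>
>>   mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=64 \
>>       /dev/sd[abcd]
>>   pvcreate /dev/md0
>>   vgcreate vg0 /dev/md0
>>   lvcreate -L 300G -n lv0 vg0
>>   mkfs.xfs /dev/vg0/lv0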
>>
>>>> My goal is to optimize for maximum number of I/O operations per
>>>> second. ...
>>>>
>>>> Is this a reasonable approach for such a task?
>>>>
>>>> Should I do anything different to achieve maximum performance?
>>>>
>>>> What are the tunables in this setup? [It seems the usual recipes are
>>>> more oriented toward maximizing data throughput for large, mostly
>>>> sequential accesses - I figure that things like increasing read-ahead
>>>> etc. will not help me much here?]
>>
>> So can anybody help answering these questions:
>>
>> - are there any special options when creating the RAID0 to make it
>> perform faster for such a use case?
>> - are there other tunables, any special MD / LVM / file system /
>> read ahead / buffer cache / ... parameters to look for?
>>
>> Thanks.
>>
>> Wolfgang Denk
>>
>> --
>> DENX Software Engineering GmbH, MD: Wolfgang Denk & Detlev Zundel
>> HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
>> Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: wd@denx.de
>> Boycott Microsoft - buy your windows at OBI!