* standard performance (write speed 20Mb/s)
@ 2011-07-16 19:40 Pol Hallen
  2011-07-17  4:26 ` Stan Hoeppner
  2011-07-17 16:48 ` standard performance (write speed 20Mb/s) Gordon Henderson
  0 siblings, 2 replies; 22+ messages in thread
From: Pol Hallen @ 2011-07-16 19:40 UTC (permalink / raw)
  To: linux-raid

Hi folks :-)

After assembling new hardware (Xeon, ICH10 controller with 5 WD 2TB disks -
raid5) I have slow write performance:

20MB/s :-(((

dd if=/dev/zero of=/share/raid/1Gb bs=1024M count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 48.7203 s, 22.0 MB/s

I repeated the same test creating a 10GB file.. same results.. read speed is
much better:

dd if=/share/raid/1Gb of=/dev/null
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 1.82054 s, 590 MB/s

Is 20MB/s a disk/controller problem, or can the controller not do better?
The hardware is new and I already checked all the disks..

thanks!

Pol

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed 20Mb/s)
  2011-07-16 19:40 standard performance (write speed 20Mb/s) Pol Hallen
@ 2011-07-17  4:26 ` Stan Hoeppner
  2011-07-17  8:12   ` Pol Hallen
  2011-07-17 16:48 ` standard performance (write speed 20Mb/s) Gordon Henderson
  1 sibling, 1 reply; 22+ messages in thread
From: Stan Hoeppner @ 2011-07-17 4:26 UTC (permalink / raw)
  To: Pol Hallen; +Cc: linux-raid

On 7/16/2011 2:40 PM, Pol Hallen wrote:
> Hi folks :-)
>
> After assembling new hardware (Xeon, ICH10 controller with 5 WD 2TB disks -
> raid5) I have slow write performance:
>
> 20MB/s :-(((
>
> dd if=/dev/zero of=/share/raid/1Gb bs=1024M count=1
> 1+0 records in
> 1+0 records out
> 1073741824 bytes (1.1 GB) copied, 48.7203 s, 22.0 MB/s

Using a write block size of 1GB with dd causes the entire file to be
buffered to RAM before being flushed to disk.  To prove the point, I just
ran your test against a single SATA disk with an XFS filesystem.  XFS is
optimized for large files.

$ dd if=/dev/zero of=./test bs=1024M count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 139.895 s, 7.7 MB/s

The same test using a *sane* block size of 4KB:

$ dd if=/dev/zero of=./test bs=4096 count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 15.0732 s, 71.2 MB/s

A nearly 10x decrease in performance due to the insane 1GB block size.
The machine I ran this test on is old, having only 384MB RAM and a Sil3512
PCI SATA-I controller.  The disk is a WD Blue 500GB 7.2K.  Using the 1GB
block size ate over 800MB of swap out of 1GB before the buffer was flushed.
This massive buffering is what murders performance here.

Repeat your test using a 4KB block size and post the results.

Know your tools, Pol.

-- 
Stan

^ permalink raw reply [flat|nested] 22+ messages in thread
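Stan's explanation is also why buffered dd figures vary so much between runs:
with nothing forcing a flush, dd is largely timing the page cache. One way to
take the cache out of the measurement, sketched here with an illustrative path
and size (not something posted in the thread), is to have dd flush before it
reports:

  # conv=fdatasync makes GNU dd call fdatasync() on the output file before
  # printing its rate, so the reported MB/s includes the flush to disk
  dd if=/dev/zero of=/share/raid/test bs=1M count=1024 conv=fdatasync

On an 8GB machine, a 1GB test file written without a flush (or without being
several times larger than RAM) mostly measures memory bandwidth.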
* Re: standard performance (write speed 20Mb/s)
  2011-07-17  4:26 ` Stan Hoeppner
@ 2011-07-17  8:12   ` Pol Hallen
  2011-07-17 12:11     ` John Robinson
  2011-07-17 23:03     ` Stan Hoeppner
  0 siblings, 2 replies; 22+ messages in thread
From: Pol Hallen @ 2011-07-17 8:12 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: linux-raid

Hello, and thanks for the reply :-)

dd if=/dev/zero of=test bs=4096 count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 31.3475 s, 34.3 MB/s

I have 8GB of DDR3 RAM.

Pol

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed 20Mb/s)
  2011-07-17  8:12   ` Pol Hallen
@ 2011-07-17 12:11     ` John Robinson
  2011-07-17 12:22       ` Iustin Pop
  2011-07-27  5:42       ` Simon Matthews
  2011-07-17 23:03     ` Stan Hoeppner
  1 sibling, 2 replies; 22+ messages in thread
From: John Robinson @ 2011-07-17 12:11 UTC (permalink / raw)
  To: Pol Hallen; +Cc: linux-raid

On 17/07/2011 09:12, Pol Hallen wrote:
> Hello, and thanks for the reply :-)
>
> dd if=/dev/zero of=test bs=4096 count=262144
> 262144+0 records in
> 262144+0 records out
> 1073741824 bytes (1.1 GB) copied, 31.3475 s, 34.3 MB/s

Pretty poor. CentOS 5, Intel ICH10, md RAID 6 over 5 7200rpm 1TB drives,
then LVM, then ext3:

# dd if=/dev/zero of=test bs=4096 count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 2.5253 seconds, 425 MB/s

And there's a badblocks running on another drive also on the ICH10.

Having said that, I think mine's wrong too, I don't think my array
can really manage that much throughput. We should both be using more
realistic benchmarking tools like bonnie++:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
beast.private.yu 7G 80890  91 67527  13 41608   4 74028  69 205104   9 378.7  0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
beast.private.yuiop.co.uk,7G,80890,91,67527,13,41608,4,74028,69,205104,9,378.7,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

And I should think of a command line option so I don't get all those +
signs. Never mind, the above shows 40-80MB/s for writes, 70-200MB/s for
reads, which is not too bad even if it's not great.

Hang on. You aren't trying to benchmark your array just after creating it,
while it's still doing its initial sync, are you?

Cheers,

John.

^ permalink raw reply [flat|nested] 22+ messages in thread
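The "+++++" entries mean those phases finished too quickly for bonnie++ to
time them meaningfully. A hedged example of the kind of invocation John has
in mind, with illustrative values rather than anything from the thread: -s
sets the test file size (in MB, ideally at least twice RAM), -n raises the
number of files for the create/delete tests (in multiples of 1024) so those
columns become measurable, and -u is required when running as root:

  bonnie++ -d /share/raid -s 16384 -n 128 -u root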
* Re: standard performance (write speed 20Mb/s)
  2011-07-17 12:11     ` John Robinson
@ 2011-07-17 12:22       ` Iustin Pop
  2011-07-17 12:51         ` John Robinson
  2011-07-17 22:05         ` Stan Hoeppner
  1 sibling, 2 replies; 22+ messages in thread
From: Iustin Pop @ 2011-07-17 12:22 UTC (permalink / raw)
  To: John Robinson; +Cc: Pol Hallen, linux-raid

On Sun, Jul 17, 2011 at 01:11:19PM +0100, John Robinson wrote:
> On 17/07/2011 09:12, Pol Hallen wrote:
>> Hello, and thanks for the reply :-)
>>
>> dd if=/dev/zero of=test bs=4096 count=262144
>> 262144+0 records in
>> 262144+0 records out
>> 1073741824 bytes (1.1 GB) copied, 31.3475 s, 34.3 MB/s
>
> Pretty poor. CentOS 5, Intel ICH10, md RAID 6 over 5 7200rpm 1TB
> drives, then LVM, then ext3:
> # dd if=/dev/zero of=test bs=4096 count=262144
> 262144+0 records in
> 262144+0 records out
> 1073741824 bytes (1.1 GB) copied, 2.5253 seconds, 425 MB/s
>
> And there's a badblocks running on another drive also on the ICH10.
>
> Having said that, I think mine's wrong too, I don't think my array
> can really manage that much throughput. We should both be using more
> realistic benchmarking tools like bonnie++:

Or simply pass the correct flags to dd — like oflag=direct, which will
make it do non-buffered writes.

regards,
iustin

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed 20Mb/s)
  2011-07-17 12:22       ` Iustin Pop
@ 2011-07-17 12:51         ` John Robinson
  2011-07-17 13:28           ` Iustin Pop
  2011-07-17 22:05         ` Stan Hoeppner
  1 sibling, 1 reply; 22+ messages in thread
From: John Robinson @ 2011-07-17 12:51 UTC (permalink / raw)
  To: Iustin Pop; +Cc: Linux RAID

On 17/07/2011 13:22, Iustin Pop wrote:
> On Sun, Jul 17, 2011 at 01:11:19PM +0100, John Robinson wrote:
[...]
>> # dd if=/dev/zero of=test bs=4096 count=262144
>> 262144+0 records in
>> 262144+0 records out
>> 1073741824 bytes (1.1 GB) copied, 2.5253 seconds, 425 MB/s
>>
>> And there's a badblocks running on another drive also on the ICH10.
>>
>> Having said that, I think mine's wrong too, I don't think my array
>> can really manage that much throughput. We should both be using more
>> realistic benchmarking tools like bonnie++:
>
> Or simply pass the correct flags to dd — like oflag=direct, which will
> make it do non-buffered writes.

That's still not realistic:

# dd if=/dev/zero of=test oflag=direct bs=4096 count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 117.434 seconds, 9.1 MB/s

Because this time we're doing a read-modify-write for every 4K block, or at
least a write for every 4K block. I can fix it up again to work in stripe
size amounts:

# dd if=/dev/zero of=test oflag=direct bs=1572864 count=683
683+0 records in
683+0 records out
1074266112 bytes (1.1 GB) copied, 18.3198 seconds, 58.6 MB/s

But it's still not realistic because real I/O does use buffers and doesn't
work in magic sizes, so we should be using a more realistic benchmarking
tool like bonnie++.

Cheers,

John.

^ permalink raw reply [flat|nested] 22+ messages in thread
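The arithmetic behind John's bs= value: a full stripe is chunk size times the
number of data disks, and 1572864 bytes is 3 x 512 KiB, consistent with the
three data disks of his 5-drive RAID 6 and a 512 KiB chunk. For the original
poster's 5-drive RAID 5 the equivalent would be 4 x 512 KiB = 2 MiB (assuming
the 512k chunk shown in the mdadm --detail output later in the thread); a
sketch of the same direct-I/O test at that size, with an illustrative path:

  # full-stripe writes avoid the per-block read-modify-write penalty
  dd if=/dev/zero of=/share/raid/test oflag=direct bs=2M count=512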
* Re: standard performance (write speed 20Mb/s)
  2011-07-17 12:51         ` John Robinson
@ 2011-07-17 13:28           ` Iustin Pop
  2011-07-18  9:04             ` John Robinson
  0 siblings, 1 reply; 22+ messages in thread
From: Iustin Pop @ 2011-07-17 13:28 UTC (permalink / raw)
  To: John Robinson; +Cc: Linux RAID

On Sun, Jul 17, 2011 at 01:51:11PM +0100, John Robinson wrote:
> On 17/07/2011 13:22, Iustin Pop wrote:
>> On Sun, Jul 17, 2011 at 01:11:19PM +0100, John Robinson wrote:
> [...]
>>> # dd if=/dev/zero of=test bs=4096 count=262144
>>> 262144+0 records in
>>> 262144+0 records out
>>> 1073741824 bytes (1.1 GB) copied, 2.5253 seconds, 425 MB/s
>>>
>>> And there's a badblocks running on another drive also on the ICH10.
>>>
>>> Having said that, I think mine's wrong too, I don't think my array
>>> can really manage that much throughput. We should both be using more
>>> realistic benchmarking tools like bonnie++:
>>
>> Or simply pass the correct flags to dd — like oflag=direct, which will
>> make it do non-buffered writes.
>
> That's still not realistic:
>
> # dd if=/dev/zero of=test oflag=direct bs=4096 count=262144
> 262144+0 records in
> 262144+0 records out
> 1073741824 bytes (1.1 GB) copied, 117.434 seconds, 9.1 MB/s
>
> Because this time we're doing a read-modify-write for every 4K
> block, or at least a write for every 4K block.

Of course :) But at a 4K block size, that is actually the speed of your
array.

> I can fix it up again
> to work in stripe size amounts:
>
> # dd if=/dev/zero of=test oflag=direct bs=1572864 count=683
> 683+0 records in
> 683+0 records out
> 1074266112 bytes (1.1 GB) copied, 18.3198 seconds, 58.6 MB/s
>
> But it's still not realistic because real I/O does use buffers and

Buffers are one thing; flushing to disk after a certain amount of dirty data
is another, and that happens quite often, depending on the workload.

> doesn't work in magic sizes, so we should be using a more realistic
> benchmarking tool like bonnie++.

Honestly, I don't find bonnie++ a realistic tool. fio is a much better
one; bonnie is quite old and inflexible.

regards,
iustin

^ permalink raw reply [flat|nested] 22+ messages in thread
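For completeness, a minimal fio job along the lines Iustin suggests; the file
name, size and queue depth are illustrative choices, not something from the
thread:

  # sequential write test, O_DIRECT, 1 MiB blocks
  fio --name=seqwrite --filename=/share/raid/fio.test --rw=write \
      --bs=1M --size=4G --direct=1 --ioengine=libaio --iodepth=4

fio reports aggregate bandwidth and latency for the job, and the same job can
be rerun unchanged against different configurations, which is what makes it
better suited than dd for comparisons.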
* Re: standard performance (write speed 20Mb/s)
  2011-07-17 13:28           ` Iustin Pop
@ 2011-07-18  9:04             ` John Robinson
  0 siblings, 0 replies; 22+ messages in thread
From: John Robinson @ 2011-07-18 9:04 UTC (permalink / raw)
  To: Linux RAID

On 17/07/2011 14:28, Iustin Pop wrote:
> On Sun, Jul 17, 2011 at 01:51:11PM +0100, John Robinson wrote:
[...]
>> doesn't work in magic sizes, so we should be using a more realistic
>> benchmarking tool like bonnie++.
>
> Honestly, I don't find bonnie++ a realistic tool. fio is a much better
> one; bonnie is quite old and inflexible.

Ah OK, yes, I agree with you there, but my general point remains: dd might
be OK for producing one very specific metric in some cases, but that will
not give a realistic impression of real-use performance, so we should be
using a more realistic benchmarking tool.

Cheers,

John.

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed 20Mb/s)
  2011-07-17 12:22       ` Iustin Pop
  2011-07-17 12:51         ` John Robinson
@ 2011-07-17 22:05         ` Stan Hoeppner
  1 sibling, 0 replies; 22+ messages in thread
From: Stan Hoeppner @ 2011-07-17 22:05 UTC (permalink / raw)
  To: John Robinson, Pol Hallen, linux-raid

On 7/17/2011 7:22 AM, Iustin Pop wrote:
> On Sun, Jul 17, 2011 at 01:11:19PM +0100, John Robinson wrote:
>> On 17/07/2011 09:12, Pol Hallen wrote:
>>> Hello, and thanks for the reply :-)
>>>
>>> dd if=/dev/zero of=test bs=4096 count=262144
>>> 262144+0 records in
>>> 262144+0 records out
>>> 1073741824 bytes (1.1 GB) copied, 31.3475 s, 34.3 MB/s
>>
>> Pretty poor. CentOS 5, Intel ICH10, md RAID 6 over 5 7200rpm 1TB
>> drives, then LVM, then ext3:
>> # dd if=/dev/zero of=test bs=4096 count=262144
>> 262144+0 records in
>> 262144+0 records out
>> 1073741824 bytes (1.1 GB) copied, 2.5253 seconds, 425 MB/s
>>
>> And there's a badblocks running on another drive also on the ICH10.
>>
>> Having said that, I think mine's wrong too, I don't think my array
>> can really manage that much throughput. We should both be using more
>> realistic benchmarking tools like bonnie++:
>
> Or simply pass the correct flags to dd — like oflag=direct, which will
> make it do non-buffered writes.

I'm not sure of the reasons, but O_DIRECT doesn't work with dd quite the
way one would think, at least not from a performance perspective.  On my
test rig it yields an almost 10x decrease, much like using the insane block
size.  It may have something to do with write barriers being enabled in XFS
on my test rig, or something similar.  This system is running vanilla
2.6.38.6 with Debian Squeeze atop.  Using O_DIRECT with dd on 2.6.26 and
2.6.34 yielded the same behavior in the past.

$ dd if=/dev/zero of=./test bs=4096 count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 15.0542 s, 71.3 MB/s

$ dd oflag=direct if=/dev/zero of=./test bs=4096 count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 133.888 s, 8.0 MB/s

-- 
Stan

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed 20Mb/s)
  2011-07-17 12:11     ` John Robinson
  2011-07-17 12:22       ` Iustin Pop
@ 2011-07-27  5:42       ` Simon Matthews
  2011-07-27  5:46         ` Roman Mamedov
  2011-07-27 10:22         ` Stan Hoeppner
  1 sibling, 2 replies; 22+ messages in thread
From: Simon Matthews @ 2011-07-27 5:42 UTC (permalink / raw)
  To: John Robinson; +Cc: Pol Hallen, linux-raid

On Sun, Jul 17, 2011 at 5:11 AM, John Robinson
<john.robinson@anonymous.org.uk> wrote:

> Pretty poor. CentOS 5, Intel ICH10, md RAID 6 over 5 7200rpm 1TB drives,
> then LVM, then ext3:
> # dd if=/dev/zero of=test bs=4096 count=262144
> 262144+0 records in
> 262144+0 records out
> 1073741824 bytes (1.1 GB) copied, 2.5253 seconds, 425 MB/s

What hard drive offers a sustained data rate of 425 MB/s, or even half that?

Simon

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed 20Mb/s)
  2011-07-27  5:42       ` Simon Matthews
@ 2011-07-27  5:46         ` Roman Mamedov
  0 siblings, 0 replies; 22+ messages in thread
From: Roman Mamedov @ 2011-07-27 5:46 UTC (permalink / raw)
  To: Simon Matthews; +Cc: John Robinson, Pol Hallen, linux-raid

[-- Attachment #1: Type: text/plain, Size: 789 bytes --]

On Tue, 26 Jul 2011 22:42:14 -0700
Simon Matthews <simon.d.matthews@gmail.com> wrote:

> On Sun, Jul 17, 2011 at 5:11 AM, John Robinson
> <john.robinson@anonymous.org.uk> wrote:
>
>> Pretty poor. CentOS 5, Intel ICH10, md RAID 6 over 5 7200rpm 1TB drives,
>> then LVM, then ext3:
>> # dd if=/dev/zero of=test bs=4096 count=262144
>> 262144+0 records in
>> 262144+0 records out
>> 1073741824 bytes (1.1 GB) copied, 2.5253 seconds, 425 MB/s
>
> What hard drive offers a sustained data rate of 425 MB/s or even half that?

"md RAID 6 over 5 7200rpm 1TB drives" does? :)  At least half that - easily.

But still, the result posted is very much inflated: the "conv=fdatasync"
option was left out, so this is mostly benchmarking the FS cache in RAM.

-- 
With respect,
Roman

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed 20Mb/s)
  2011-07-27  5:42       ` Simon Matthews
  2011-07-27  5:46         ` Roman Mamedov
@ 2011-07-27 10:22         ` Stan Hoeppner
  2011-07-27 10:26           ` John Robinson
  1 sibling, 1 reply; 22+ messages in thread
From: Stan Hoeppner @ 2011-07-27 10:22 UTC (permalink / raw)
  To: Simon Matthews; +Cc: John Robinson, Pol Hallen, linux-raid

On 7/27/2011 12:42 AM, Simon Matthews wrote:
> On Sun, Jul 17, 2011 at 5:11 AM, John Robinson
> <john.robinson@anonymous.org.uk> wrote:
>
>> Pretty poor. CentOS 5, Intel ICH10, md RAID 6 over 5 7200rpm 1TB drives,
>> then LVM, then ext3:
>> # dd if=/dev/zero of=test bs=4096 count=262144
>> 262144+0 records in
>> 262144+0 records out
>> 1073741824 bytes (1.1 GB) copied, 2.5253 seconds, 425 MB/s
>
> What hard drive offers a sustained data rate of 425 MB/s or even half that?

425 MB/s / 3 spindles = 142 MB/s per spindle

That's not poor, it's excellent.  Which drives are these?  WD Black,
Seagate, Hitachi?

-- 
Stan

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed 20Mb/s)
  2011-07-27 10:22         ` Stan Hoeppner
@ 2011-07-27 10:26           ` John Robinson
  2011-07-27 12:35             ` Joe Landman
  2011-07-27 13:54             ` Stan Hoeppner
  0 siblings, 2 replies; 22+ messages in thread
From: John Robinson @ 2011-07-27 10:26 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: Simon Matthews, Pol Hallen, linux-raid

On 27/07/2011 11:22, Stan Hoeppner wrote:
> On 7/27/2011 12:42 AM, Simon Matthews wrote:
>> On Sun, Jul 17, 2011 at 5:11 AM, John Robinson
>> <john.robinson@anonymous.org.uk> wrote:
>>
>>> Pretty poor. CentOS 5, Intel ICH10, md RAID 6 over 5 7200rpm 1TB
>>> drives, then LVM, then ext3:
>>> # dd if=/dev/zero of=test bs=4096 count=262144
>>> 262144+0 records in
>>> 262144+0 records out
>>> 1073741824 bytes (1.1 GB) copied, 2.5253 seconds, 425 MB/s
>>
>> What hard drive offers a sustained data rate of 425 MB/s or even half
>> that?
>
> 425 MB/s / 3 spindles = 142 MB/s per spindle
>
> That's not poor, it's excellent. Which drives are these? WD Black,
> Seagate, Hitachi?

Gentlemen, we've been round this loop before about 10 days ago. Pol's 20
MB/s was poor because he was testing on an array with unaligned partitions
and a resync was running, my 425 MB/s was a bad test because it didn't use
fdatasync or direct and I said dd was a bad test anyway, etc etc.

Cheers,

John.

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed 20Mb/s)
  2011-07-27 10:26           ` John Robinson
@ 2011-07-27 12:35             ` Joe Landman
  0 siblings, 0 replies; 22+ messages in thread
From: Joe Landman @ 2011-07-27 12:35 UTC (permalink / raw)
  To: linux-raid

On 07/27/2011 06:26 AM, John Robinson wrote:
> On 27/07/2011 11:22, Stan Hoeppner wrote:
>> On 7/27/2011 12:42 AM, Simon Matthews wrote:
>>> On Sun, Jul 17, 2011 at 5:11 AM, John Robinson
>>> <john.robinson@anonymous.org.uk> wrote:
>>>
>>>> Pretty poor. CentOS 5, Intel ICH10, md RAID 6 over 5 7200rpm 1TB
>>>> drives, then LVM, then ext3:
>>>> # dd if=/dev/zero of=test bs=4096 count=262144

No oflag=direct or sync, i.e. no

  date
  dd if=/dev/zero of=test bs=4096 count=262144
  ...
  sync
  date

and then a difference between the time stamps ...

>>>> 262144+0 records in
>>>> 262144+0 records out
>>>> 1073741824 bytes (1.1 GB) copied, 2.5253 seconds, 425 MB/s

This is purely file cache performance you have measured, nothing else.

[...]

> Gentlemen, we've been round this loop before about 10 days ago. Pol's 20
> MB/s was poor because he was testing on an array with unaligned

Using a huge block size (anything greater than 1/10th of RAM) isn't
terribly realistic from an actual application point of view in *most*
cases.  A few corner cases, maybe, but not in most cases.

Testing on a rebuilding array gives you a small fraction of the available
bandwidth ... typically you will see (cached) writes perform better than
reads in these cases, but it's not a measurement that tells you much more
than performance during a rebuild.

Unaligned partitions are altogether too common, though for streaming access
misalignment isn't normally terribly significant, as the cost of the first
non-aligned access is amortized against many sequential accesses.  It's a
bad thing for more random workloads.

> partitions and a resync was running, my 425 MB/s was a bad test because
> it didn't use fdatasync or direct and I said dd was a bad test anyway,
> etc etc.

dd's not a terrible test.  It's a very quick and dirty indicator of a
problem, if used correctly.  Make sure you are testing I/O sizes of 2 or
more times RAM size, with syncs at the end, and use date stamps to verify
the timing.

bonnie++, the favorite of many people, isn't a great I/O generator.  Nor is
iozone, etc.  The best tests are ones that match your use cases.  Finding
these is hard.  We like fio, as we can construct models of use cases and
run them again and again, cached, uncached, etc.  It makes for very easy
and repeatable testing.

-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: landman@scalableinformatics.com
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615

^ permalink raw reply [flat|nested] 22+ messages in thread
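A complete form of the timing approach Joe sketches, for anyone reproducing
it; the file location and size are illustrative, the point being that the
test file is comfortably larger than RAM and the final sync is included in
the measured time:

  # write ~20 GB (more than twice the OP's 8 GB of RAM), flush, and let the
  # shell report the total elapsed time
  time sh -c 'dd if=/dev/zero of=/share/raid/test bs=1M count=20480 && sync'
  rm /share/raid/test

Dividing the bytes written by the elapsed seconds then gives a throughput
figure that includes the flush to disk.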
* Re: standard performance (write speed 20Mb/s)
  2011-07-27 10:26           ` John Robinson
  2011-07-27 12:35             ` Joe Landman
@ 2011-07-27 13:54             ` Stan Hoeppner
  1 sibling, 0 replies; 22+ messages in thread
From: Stan Hoeppner @ 2011-07-27 13:54 UTC (permalink / raw)
  To: John Robinson; +Cc: Simon Matthews, Pol Hallen, linux-raid

On 7/27/2011 5:26 AM, John Robinson wrote:
> On 27/07/2011 11:22, Stan Hoeppner wrote:
>> On 7/27/2011 12:42 AM, Simon Matthews wrote:
>>> On Sun, Jul 17, 2011 at 5:11 AM, John Robinson
>>> <john.robinson@anonymous.org.uk> wrote:
>>>
>>>> Pretty poor. CentOS 5, Intel ICH10, md RAID 6 over 5 7200rpm 1TB
>>>> drives, then LVM, then ext3:
>>>> # dd if=/dev/zero of=test bs=4096 count=262144
>>>> 262144+0 records in
>>>> 262144+0 records out
>>>> 1073741824 bytes (1.1 GB) copied, 2.5253 seconds, 425 MB/s
>>>
>>> What hard drive offers a sustained data rate of 425 MB/s or even half
>>> that?
>>
>> 425 MB/s / 3 spindles = 142 MB/s per spindle
>>
>> That's not poor, it's excellent. Which drives are these? WD Black,
>> Seagate, Hitachi?
>
> Gentlemen, we've been round this loop before about 10 days ago. Pol's 20
> MB/s was poor because he was testing on an array with unaligned
> partitions and a resync was running, my 425 MB/s was a bad test because
> it didn't use fdatasync or direct and I said dd was a bad test anyway,
> etc etc.

Of course the 425 number is higher than real world.  But this is the dd
"test" method for which many people post results, so the number is still
somewhat valid for very coarse comparison among systems.  A 10GB run would
obviously have been better than 1GB.

The Linpack benchmark is very similar to dd in this regard.  It's a
horrible measure of supercomputer performance, but it's what everyone uses.
It's what allows US Government labs to go to Congress and say "The Chinese
have passed us in the supercomputer race.  We need more money to take back
the lead."  (And, stupidly, Congress gives them the money.)

-- 
Stan

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed 20Mb/s)
  2011-07-17  8:12   ` Pol Hallen
  2011-07-17 12:11     ` John Robinson
@ 2011-07-17 23:03     ` Stan Hoeppner
  2011-07-18 11:52       ` Pol Hallen
  2011-07-21 17:07       ` standard performance (write speed ??Mb/s) - new raid5 array Pol Hallen
  1 sibling, 2 replies; 22+ messages in thread
From: Stan Hoeppner @ 2011-07-17 23:03 UTC (permalink / raw)
  To: Pol Hallen; +Cc: linux-raid

On 7/17/2011 3:12 AM, Pol Hallen wrote:
> Hello, and thanks for the reply :-)
>
> dd if=/dev/zero of=test bs=4096 count=262144
> 262144+0 records in
> 262144+0 records out
> 1073741824 bytes (1.1 GB) copied, 31.3475 s, 34.3 MB/s
>
> I have 8GB of DDR3 RAM.

Then you have multiple problems causing poor performance.  Now is the time
for you to post your hardware configuration, drive model number(s), mdadm
config, and filesystem.  You should have done all of this in your initial
post.

You've been on this list long enough to have known of the WD Green sector
alignment issue and avoided it, and to know better than to run performance
tests while an array is still in its initial sync.  Thus I initially
assumed your problem was merely the insane dd block size.

-- 
Stan

^ permalink raw reply [flat|nested] 22+ messages in thread
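The information Stan asks for can be gathered with a handful of commands;
the device names here are examples rather than Pol's actual ones:

  cat /proc/mdstat            # is the initial sync/resync still running?
  mdadm --detail /dev/md0     # RAID level, chunk size, member devices, state
  smartctl -i /dev/sdb        # drive model and firmware (repeat per disk)
  fdisk -lu /dev/sdb          # partition start sectors, to judge alignment

(Pol's follow-up below attaches essentially this: mdadm.conf, mdadm --detail
output, per-drive smartctl output, and lspci.)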
* Re: standard performance (write speed 20Mb/s)
  2011-07-17 23:03     ` Stan Hoeppner
@ 2011-07-18 11:52       ` Pol Hallen
  0 siblings, 0 replies; 22+ messages in thread
From: Pol Hallen @ 2011-07-18 11:52 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: linux-raid

[-- Attachment #1: Type: Text/Plain, Size: 803 bytes --]

> Then you have multiple problems causing poor performance.  Now is the time
> for you to post your hardware configuration, drive model number(s), mdadm
> config, and filesystem.  You should have done all of this in your initial
> post.

Hello, and sorry for the late reply :-(

There is a resync in progress now and I can't stop it. I tried
/usr/share/mdadm/checkarray -x md0, but nothing happened.

The fs is ext3 (created less than 10 days ago).

> You've been on this list long enough to have known of the WD Green sector
> alignment issue and avoided it, and to know better than to run performance
> tests while an array is still in its initial sync.  Thus I initially
> assumed your problem was merely the insane dd block size.

When I wrote the first post there was no resync in progress.. tell me if
you need to know anything else, thanks

Pol

[-- Attachment #2: mdadm.conf --]
[-- Type: text/plain, Size: 646 bytes --]

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR pol@yahoo.it

# definitions of existing MD arrays

# This file was auto-generated on Fri, 08 Jul 2011 14:37:51 -0700
# by mkconf 3.1.4-1+8efb9d1

[-- Attachment #3: sdb --]
[-- Type: text/plain, Size: 611 bytes --]

smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.11 family
Device Model:     ST31500341AS
Serial Number:    9VS4PSG9
Firmware Version: CC1H
User Capacity:    1,500,301,910,016 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:    Mon Jul 18 13:44:52 2011 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled [-- Attachment #4: mdadm-d --] [-- Type: text/plain, Size: 1034 bytes --] /dev/md0: Version : 1.2 Creation Time : Fri Jul 8 23:52:13 2011 Raid Level : raid5 Array Size : 5860538368 (5589.05 GiB 6001.19 GB) Used Dev Size : 1465134592 (1397.26 GiB 1500.30 GB) Raid Devices : 5 Total Devices : 5 Persistence : Superblock is persistent Update Time : Mon Jul 18 13:14:04 2011 State : active, resyncing Active Devices : 5 Working Devices : 5 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 512K Rebuild Status : 33% complete Name : smooth:0 (local to host smooth) UUID : bfca3ad6:a7df5fa5:20ba4933:55fd4181 Events : 6212 Number Major Minor RaidDevice State 0 8 64 0 active sync /dev/sde 1 8 97 1 active sync /dev/sdg1 2 8 113 2 active sync /dev/sdh1 3 8 129 3 active sync /dev/sdi1 5 8 81 4 active sync /dev/sdf1 [-- Attachment #5: sdc --] [-- Type: text/plain, Size: 611 bytes --] smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.11 family Device Model: ST31500341AS Serial Number: 9VS3KL49 Firmware Version: CC1H User Capacity: 1,500,301,910,016 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 4 Local Time is: Mon Jul 18 13:45:00 2011 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled [-- Attachment #6: lspci --] [-- Type: text/plain, Size: 18256 bytes --] 00:00.0 Host bridge: Intel Corporation 5520 I/O Hub to ESI Port (rev 22) Subsystem: Super Micro Computer Inc Device a880 Flags: fast devsel Capabilities: [60] MSI: Enable- Count=1/2 Maskable+ 64bit- Capabilities: [90] Express Root Port (Slot-), MSI 00 Capabilities: [e0] Power Management version 3 Capabilities: [100] Advanced Error Reporting Capabilities: [150] Access Control Services Capabilities: [160] Vendor Specific Information: ID=0002 Rev=0 Len=00c <?> 00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 22) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=07, subordinate=07, sec-latency=0 Capabilities: [40] Subsystem: Super Micro Computer Inc Device a880 Capabilities: [60] MSI: Enable+ Count=1/2 Maskable+ 64bit- Capabilities: [90] Express Root Port (Slot+), MSI 00 Capabilities: [e0] Power Management version 3 Capabilities: [100] Advanced Error Reporting Capabilities: [150] Access Control Services Capabilities: [160] Vendor Specific Information: ID=0002 Rev=0 Len=00c <?> Kernel driver in use: pcieport 00:03.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 3 (rev 22) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=06, subordinate=06, sec-latency=0 I/O behind bridge: 0000e000-0000efff Memory behind bridge: faf00000-faffffff Prefetchable memory behind bridge: 00000000d8000000-00000000dfffffff Capabilities: [40] Subsystem: Super Micro Computer Inc Device a880 Capabilities: [60] MSI: Enable+ Count=1/2 Maskable+ 64bit- Capabilities: [90] Express Root Port (Slot+), MSI 00 Capabilities: [e0] Power Management version 3 Capabilities: [100] Advanced Error Reporting Capabilities: [150] Access Control Services Capabilities: [160] Vendor Specific Information: ID=0002 Rev=0 Len=00c <?> Kernel driver in use: pcieport 00:07.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express 
Root Port 7 (rev 22) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=05, subordinate=05, sec-latency=0 Capabilities: [40] Subsystem: Super Micro Computer Inc Device a880 Capabilities: [60] MSI: Enable+ Count=1/2 Maskable+ 64bit- Capabilities: [90] Express Root Port (Slot+), MSI 00 Capabilities: [e0] Power Management version 3 Capabilities: [100] Advanced Error Reporting Capabilities: [150] Access Control Services Capabilities: [160] Vendor Specific Information: ID=0002 Rev=0 Len=00c <?> Kernel driver in use: pcieport 00:0e.0 Host bridge: Intel Corporation Device 341c (rev 22) Flags: fast devsel Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [60] #00 [0000] Capabilities: [100] Vendor Specific Information: ID=0001 Rev=0 Len=0b8 <?> 00:0e.1 Host bridge: Intel Corporation Device 341d (rev 22) Flags: fast devsel Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [60] #00 [0000] 00:0e.2 Host bridge: Intel Corporation Device 341e (rev 22) Flags: fast devsel Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [60] #00 [0000] 00:13.0 PIC: Intel Corporation 5520/5500/X58 I/O Hub I/OxAPIC Interrupt Controller (rev 22) (prog-if 20 [IO(X)-APIC]) Flags: bus master, fast devsel, latency 0 Memory at fec8a000 (32-bit, non-prefetchable) [size=4K] Capabilities: [6c] Power Management version 3 00:14.0 PIC: Intel Corporation 5520/5500/X58 I/O Hub System Management Registers (rev 22) (prog-if 00 [8259]) Flags: fast devsel Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00 00:14.1 PIC: Intel Corporation 5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers (rev 22) (prog-if 00 [8259]) Flags: fast devsel Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00 00:14.2 PIC: Intel Corporation 5520/5500/X58 I/O Hub Control Status and RAS Registers (rev 22) (prog-if 00 [8259]) Flags: fast devsel Capabilities: [40] Express Root Complex Integrated Endpoint, MSI 00 00:14.3 PIC: Intel Corporation 5520/5500/X58 I/O Hub Throttle Registers (rev 22) (prog-if 00 [8259]) Flags: fast devsel 00:16.0 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, fast devsel, latency 0, IRQ 43 Memory at faaec000 (64-bit, non-prefetchable) [size=16K] Capabilities: [80] MSI-X: Enable+ Count=1 Masked- Capabilities: [90] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [e0] Power Management version 3 Kernel driver in use: ioatdma 00:16.1 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, fast devsel, latency 0, IRQ 44 Memory at faae8000 (64-bit, non-prefetchable) [size=16K] Capabilities: [80] MSI-X: Enable+ Count=1 Masked- Capabilities: [90] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [e0] Power Management version 3 Kernel driver in use: ioatdma 00:16.2 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, fast devsel, latency 0, IRQ 45 Memory at faae4000 (64-bit, non-prefetchable) [size=16K] Capabilities: [80] MSI-X: Enable+ Count=1 Masked- Capabilities: [90] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [e0] Power Management version 3 Kernel driver in use: ioatdma 00:16.3 System peripheral: Intel 
Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, fast devsel, latency 0, IRQ 46 Memory at faae0000 (64-bit, non-prefetchable) [size=16K] Capabilities: [80] MSI-X: Enable+ Count=1 Masked- Capabilities: [90] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [e0] Power Management version 3 Kernel driver in use: ioatdma 00:16.4 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, fast devsel, latency 0, IRQ 43 Memory at faadc000 (64-bit, non-prefetchable) [size=16K] Capabilities: [80] MSI-X: Enable+ Count=1 Masked- Capabilities: [90] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [e0] Power Management version 3 Kernel driver in use: ioatdma 00:16.5 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, fast devsel, latency 0, IRQ 44 Memory at faad8000 (64-bit, non-prefetchable) [size=16K] Capabilities: [80] MSI-X: Enable+ Count=1 Masked- Capabilities: [90] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [e0] Power Management version 3 Kernel driver in use: ioatdma 00:16.6 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, fast devsel, latency 0, IRQ 45 Memory at faad4000 (64-bit, non-prefetchable) [size=16K] Capabilities: [80] MSI-X: Enable+ Count=1 Masked- Capabilities: [90] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [e0] Power Management version 3 Kernel driver in use: ioatdma 00:16.7 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, fast devsel, latency 0, IRQ 46 Memory at faad0000 (64-bit, non-prefetchable) [size=16K] Capabilities: [80] MSI-X: Enable+ Count=1 Masked- Capabilities: [90] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [e0] Power Management version 3 Kernel driver in use: ioatdma 00:1a.0 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #4 (prog-if 00 [UHCI]) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, medium devsel, latency 0, IRQ 16 I/O ports at aea0 [size=32] Capabilities: [50] PCI Advanced Features Kernel driver in use: uhci_hcd 00:1a.1 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #5 (prog-if 00 [UHCI]) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, medium devsel, latency 0, IRQ 21 I/O ports at ae80 [size=32] Capabilities: [50] PCI Advanced Features Kernel driver in use: uhci_hcd 00:1a.2 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #6 (prog-if 00 [UHCI]) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, medium devsel, latency 0, IRQ 19 I/O ports at ae20 [size=32] Capabilities: [50] PCI Advanced Features Kernel driver in use: uhci_hcd 00:1a.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #2 (prog-if 20 [EHCI]) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, medium devsel, latency 0, IRQ 18 Memory at faaf4000 (32-bit, non-prefetchable) [size=1K] Capabilities: [50] Power Management version 2 Capabilities: [58] Debug port: BAR=1 offset=00a0 Capabilities: [98] PCI Advanced 
Features Kernel driver in use: ehci_hcd 00:1b.0 Audio device: Intel Corporation 82801JI (ICH10 Family) HD Audio Controller Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, fast devsel, latency 0, IRQ 81 Memory at faaf0000 (64-bit, non-prefetchable) [size=16K] Capabilities: [50] Power Management version 2 Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00 Capabilities: [100] Virtual Channel Capabilities: [130] Root Complex Link Kernel driver in use: HDA Intel 00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 1 (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 I/O behind bridge: 00001000-00001fff Memory behind bridge: c0400000-c05fffff Prefetchable memory behind bridge: 00000000c0600000-00000000c07fffff Capabilities: [40] Express Root Port (Slot+), MSI 00 Capabilities: [80] MSI: Enable+ Count=1/1 Maskable- 64bit- Capabilities: [90] Subsystem: Super Micro Computer Inc Device a880 Capabilities: [a0] Power Management version 2 Capabilities: [100] Virtual Channel Capabilities: [180] Root Complex Link Kernel driver in use: pcieport 00:1c.4 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 5 (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=03, subordinate=03, sec-latency=0 I/O behind bridge: 0000c000-0000cfff Memory behind bridge: fad00000-fadfffff Prefetchable memory behind bridge: 00000000c0200000-00000000c03fffff Capabilities: [40] Express Root Port (Slot+), MSI 00 Capabilities: [80] MSI: Enable+ Count=1/1 Maskable- 64bit- Capabilities: [90] Subsystem: Super Micro Computer Inc Device a880 Capabilities: [a0] Power Management version 2 Capabilities: [100] Virtual Channel Capabilities: [180] Root Complex Link Kernel driver in use: pcieport 00:1c.5 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 6 (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=04, subordinate=04, sec-latency=0 I/O behind bridge: 0000d000-0000dfff Memory behind bridge: fae00000-faefffff Prefetchable memory behind bridge: 00000000c0000000-00000000c01fffff Capabilities: [40] Express Root Port (Slot+), MSI 00 Capabilities: [80] MSI: Enable+ Count=1/1 Maskable- 64bit- Capabilities: [90] Subsystem: Super Micro Computer Inc Device a880 Capabilities: [a0] Power Management version 2 Capabilities: [100] Virtual Channel Capabilities: [180] Root Complex Link Kernel driver in use: pcieport 00:1d.0 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1 (prog-if 00 [UHCI]) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, medium devsel, latency 0, IRQ 23 I/O ports at af20 [size=32] Capabilities: [50] PCI Advanced Features Kernel driver in use: uhci_hcd 00:1d.1 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2 (prog-if 00 [UHCI]) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, medium devsel, latency 0, IRQ 19 I/O ports at af00 [size=32] Capabilities: [50] PCI Advanced Features Kernel driver in use: uhci_hcd 00:1d.2 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3 (prog-if 00 [UHCI]) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, medium devsel, latency 0, IRQ 18 I/O ports at aec0 [size=32] Capabilities: [50] PCI Advanced Features Kernel driver in use: 
uhci_hcd 00:1d.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1 (prog-if 20 [EHCI]) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, medium devsel, latency 0, IRQ 23 Memory at faaf6000 (32-bit, non-prefetchable) [size=1K] Capabilities: [50] Power Management version 2 Capabilities: [58] Debug port: BAR=1 offset=00a0 Capabilities: [98] PCI Advanced Features Kernel driver in use: ehci_hcd 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90) (prog-if 01 [Subtractive decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=32 I/O behind bridge: 0000b000-0000bfff Memory behind bridge: fab00000-facfffff Capabilities: [50] Subsystem: Super Micro Computer Inc Device a880 00:1f.0 ISA bridge: Intel Corporation 82801JIR (ICH10R) LPC Interface Controller Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, medium devsel, latency 0 Capabilities: [e0] Vendor Specific Information: Len=0c <?> 00:1f.2 SATA controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller (prog-if 01 [AHCI 1.0]) Subsystem: Super Micro Computer Inc Device a880 Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 71 I/O ports at aff0 [size=8] I/O ports at afac [size=4] I/O ports at afe0 [size=8] I/O ports at afa8 [size=4] I/O ports at af80 [size=32] Memory at faafe000 (32-bit, non-prefetchable) [size=2K] Capabilities: [80] MSI: Enable+ Count=1/16 Maskable- 64bit- Capabilities: [70] Power Management version 3 Capabilities: [a8] SATA HBA v1.0 Capabilities: [b0] PCI Advanced Features Kernel driver in use: ahci 00:1f.3 SMBus: Intel Corporation 82801JI (ICH10 Family) SMBus Controller Subsystem: Super Micro Computer Inc Device a880 Flags: medium devsel, IRQ 18 Memory at faafc000 (64-bit, non-prefetchable) [size=256] I/O ports at 0400 [size=32] Kernel driver in use: i801_smbus 01:02.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link) (prog-if 10 [OHCI]) Subsystem: Super Micro Computer Inc Device 1411 Flags: bus master, medium devsel, latency 64, IRQ 18 Memory at fabf7800 (32-bit, non-prefetchable) [size=2K] Memory at fabf8000 (32-bit, non-prefetchable) [size=16K] Capabilities: [44] Power Management version 2 Kernel driver in use: firewire_ohci 01:05.0 RAID bus controller: Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller (rev 02) Subsystem: Silicon Image, Inc. 
Device 7114 Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 19 I/O ports at bb90 [size=8] I/O ports at bb88 [size=4] I/O ports at bb98 [size=8] I/O ports at bb8c [size=4] I/O ports at bba0 [size=16] Memory at fabffc00 (32-bit, non-prefetchable) [size=1K] Expansion ROM at fac00000 [disabled] [size=512K] Capabilities: [60] Power Management version 2 Kernel driver in use: sata_sil 01:06.0 Ethernet controller: 3Com Corporation 3c905C-TX/TX-M [Tornado] (rev 78) Subsystem: 3Com Corporation 3C905CX-TX/TX-M Fast Etherlink for PC Management NIC Flags: bus master, medium devsel, latency 64, IRQ 17 I/O ports at bc00 [size=128] Memory at facdfc00 (32-bit, non-prefetchable) [size=128] Expansion ROM at face0000 [disabled] [size=128K] Capabilities: [dc] Power Management version 2 Kernel driver in use: 3c59x 03:00.0 Ethernet controller: Intel Corporation 82573E Gigabit Ethernet Controller (Copper) (rev 03) Subsystem: Super Micro Computer Inc Device 108c Flags: bus master, fast devsel, latency 0, IRQ 70 Memory at fade0000 (32-bit, non-prefetchable) [size=128K] I/O ports at cf80 [size=32] Capabilities: [c8] Power Management version 2 Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [e0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting Capabilities: [140] Device Serial Number 00-25-90-ff-ff-24-d1-e4 Kernel driver in use: e1000e 04:00.0 Ethernet controller: Intel Corporation 82573L Gigabit Ethernet Controller Subsystem: Super Micro Computer Inc Device 109a Flags: bus master, fast devsel, latency 0, IRQ 72 Memory at faee0000 (32-bit, non-prefetchable) [size=128K] I/O ports at df80 [size=32] Capabilities: [c8] Power Management version 2 Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [e0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting Capabilities: [140] Device Serial Number 00-25-90-ff-ff-24-d1-e5 Kernel driver in use: e1000e 06:00.0 VGA compatible controller: ATI Technologies Inc RV370 5B60 [Radeon X300 (PCIE)] (prog-if 00 [VGA controller]) Subsystem: PC Partner Limited Radeon X300 (PCIE) Flags: bus master, fast devsel, latency 0, IRQ 11 Memory at d8000000 (32-bit, prefetchable) [size=128M] I/O ports at e000 [size=256] Memory at fafd0000 (32-bit, non-prefetchable) [size=64K] Expansion ROM at fafe0000 [disabled] [size=128K] Capabilities: [50] Power Management version 2 Capabilities: [58] Express Endpoint, MSI 00 Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit+ Capabilities: [100] Advanced Error Reporting 06:00.1 Display controller: ATI Technologies Inc RV370 [Radeon X300SE] Subsystem: PC Partner Limited Radeon X300SE Flags: bus master, fast devsel, latency 0 Memory at fafc0000 (32-bit, non-prefetchable) [size=64K] Capabilities: [50] Power Management version 2 Capabilities: [58] Express Endpoint, MSI 00 [-- Attachment #7: sde --] [-- Type: text/plain, Size: 678 bytes --] smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Western Digital Caviar Green (Adv. Format) family Device Model: WDC WD20EARS-00MVWB0 Serial Number: WD-WCAZA6281995 Firmware Version: 51.0AB51 User Capacity: 2,000,398,934,016 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon Jul 18 13:45:07 2011 CEST SMART support is: Available - device has SMART capability. 
SMART support is: Enabled [-- Attachment #8: sdd --] [-- Type: text/plain, Size: 608 bytes --] smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.12 family Device Model: ST3500418AS Serial Number: 9VM1WK4N Firmware Version: CC46 User Capacity: 500,107,862,016 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 4 Local Time is: Mon Jul 18 13:45:03 2011 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled [-- Attachment #9: sdf --] [-- Type: text/plain, Size: 678 bytes --] smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Western Digital Caviar Green (Adv. Format) family Device Model: WDC WD20EARS-00MVWB0 Serial Number: WD-WCAZA5510079 Firmware Version: 51.0AB51 User Capacity: 2,000,398,934,016 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon Jul 18 13:45:12 2011 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled ^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed ??Mb/s) - new raid5 array
  2011-07-17 23:03     ` Stan Hoeppner
  2011-07-18 11:52       ` Pol Hallen
@ 2011-07-21 17:07       ` Pol Hallen
  2011-07-22  0:05         ` Stan Hoeppner
  1 sibling, 1 reply; 22+ messages in thread
From: Pol Hallen @ 2011-07-21 17:07 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: linux-raid

Hello again :-)

I removed all the partitions from my disks with:

dd if=/dev/zero of=/dev/sdb bs=512 count=1   (repeated for all disks)

Next, I created a new partition (non-fs data) starting at sector 64, like
below, on every device:

fdisk -lu /dev/sdb

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd2017c41

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              64  3907029167  1953514552   da  Non-FS data

Then I created a new raid5 array:

mdadm --create --verbose /dev/md0 --level=raid5 --raid-devices=5 /dev/sdb
/dev/sdc /dev/sdd /dev/sde /dev/sdf

cat /proc/mdstat

md0 : active raid5 sdf[5] sde[3] sdd[2] sdc[1] sdb[0]
      7814051840 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4]
      [UUUU_]
      [>....................]  recovery =  0.3% (6625324/1953512960)
      finish=971.1min speed=33411K/sec

Now I have 5 identical disks (2TB WD).

I have to wait for the rebuild to finish before running new performance
tests.

Is the procedure I've done correct?

thanks! :-)

Pol

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed ??Mb/s) - new raid5 array
  2011-07-21 17:07       ` standard performance (write speed ??Mb/s) - new raid5 array Pol Hallen
@ 2011-07-22  0:05         ` Stan Hoeppner
  2011-07-22  7:08           ` Pol Hallen
  0 siblings, 1 reply; 22+ messages in thread
From: Stan Hoeppner @ 2011-07-22 0:05 UTC (permalink / raw)
  To: Pol Hallen; +Cc: linux-raid

On 7/21/2011 12:07 PM, Pol Hallen wrote:
> Hello again :-)
>
> I removed all the partitions from my disks with:
>
> dd if=/dev/zero of=/dev/sdb bs=512 count=1   (repeated for all disks)
>
> Next, I created a new partition (non-fs data) starting at sector 64, like
> below, on every device:

Why did you create partitions?  The whole point of zeroing the first 512
bytes was to *eliminate* all the partitions...

>    Device Boot      Start         End      Blocks   Id  System
> /dev/sdb1              64  3907029167  1953514552   da  Non-FS data
>
> Then I created a new raid5 array:
>
> mdadm --create --verbose /dev/md0 --level=raid5 --raid-devices=5 /dev/sdb
> /dev/sdc /dev/sdd /dev/sde /dev/sdf

And now you create your array without using partitions...

> cat /proc/mdstat
>
> md0 : active raid5 sdf[5] sde[3] sdd[2] sdc[1] sdb[0]
>       7814051840 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4]
>       [UUUU_]
>       [>....................]  recovery =  0.3% (6625324/1953512960)
>       finish=971.1min speed=33411K/sec

You created aligned partitions on all disks, and then did not use those
partitions...

> Now I have 5 identical disks (2TB WD).
>
> I have to wait for the rebuild to finish before running new performance
> tests.
>
> Is the procedure I've done correct?

No, it is not.  It seems you merged bits and pieces from each array creation
method.  The array may still function properly; I've never tried what you've
done here.  Maybe others have the correct answer.

-- 
Stan

^ permalink raw reply [flat|nested] 22+ messages in thread
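For reference, the two internally consistent ways to build this array, as a
sketch rather than a prescription from the posters (device names are the ones
used in the thread):

  # (a) whole-disk members: no partition table at all on sdb..sdf
  mdadm --create --verbose /dev/md0 --level=raid5 --raid-devices=5 /dev/sd[bcdef]

  # (b) partition members: build the array from the aligned partitions instead
  mdadm --create --verbose /dev/md0 --level=raid5 --raid-devices=5 /dev/sd[bcdef]1

What Pol did mixes the two: the aligned partitions were created, but the
array was then built on the whole disks, so those partitions serve no purpose.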
* Re: standard performance (write speed ??Mb/s) - new raid5 array
  2011-07-22  0:05         ` Stan Hoeppner
@ 2011-07-22  7:08           ` Pol Hallen
  2011-07-22  8:13             ` Erwan Leroux
  0 siblings, 1 reply; 22+ messages in thread
From: Pol Hallen @ 2011-07-22 7:08 UTC (permalink / raw)
  To: linux-raid

> Why did you create partitions?  The whole point of zeroing the first 512
> bytes was to *eliminate* all the partitions...

I was following the post by Erwan Leroux, who advised creating a new
partition with non-fs data :-/

> No, it is not.  It seems you merged bits and pieces from each array creation
> method.  The array may still function properly; I've never tried what you've
> done here.  Maybe others have the correct answer.

I hope everything is OK :-)

Pol

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed ??Mb/s) - new raid5 array
  2011-07-22  7:08           ` Pol Hallen
@ 2011-07-22  8:13             ` Erwan Leroux
  0 siblings, 0 replies; 22+ messages in thread
From: Erwan Leroux @ 2011-07-22 8:13 UTC (permalink / raw)
  To: polhallen; +Cc: linux-raid

I have never used whole disks in my raid, so maybe my procedure is not
useful in this case, but I think that if you create partitions, you should
create the array using them, with the following command:

mdadm --create --verbose /dev/md0 --level=raid5 --raid-devices=5
/dev/sd[bcdef]1 --metadata 1.2

I can't recall where I saw why to specify this metadata level, but in my
case the default was not 1.2, so I had to.

When the rebuild is done and a filesystem is set up, run this command:

dd if=/dev/zero of=/some/dir/test.dd bs=4096 count=2621440

to actually test the performance of the raid.

2011/7/22 Pol Hallen <raid1@fuckaround.org>:
>> Why did you create partitions?  The whole point of zeroing the first 512
>> bytes was to *eliminate* all the partitions...
>
> I was following the post by Erwan Leroux, who advised creating a new
> partition with non-fs data :-/
>
>> No, it is not.  It seems you merged bits and pieces from each array creation
>> method.  The array may still function properly; I've never tried what you've
>> done here.  Maybe others have the correct answer.
>
> I hope everything is OK :-)
>
> Pol

^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: standard performance (write speed 20Mb/s)
  2011-07-16 19:40 standard performance (write speed 20Mb/s) Pol Hallen
  2011-07-17  4:26 ` Stan Hoeppner
@ 2011-07-17 16:48 ` Gordon Henderson
  1 sibling, 0 replies; 22+ messages in thread
From: Gordon Henderson @ 2011-07-17 16:48 UTC (permalink / raw)
  To: linux-raid

On Sat, 16 Jul 2011, Pol Hallen wrote:

> Hi folks :-)
>
> After assembling new hardware (Xeon, ICH10 controller with 5 WD 2TB disks -
> raid5) I have slow write performance:
>
> 20MB/s :-(((

In addition to the other comments so far: are these drives WDC EARS drives?
What's the output of

  hdparm -i /dev/sda

If it's something like

  Model=WDC WD20EARS-00MVWB0, FwRev=50.0AB50, SerialNo=WD-WMAZ20098633
                ^^^^

then it has 4KB sectors. If so, then you'll have to make absolutely sure
the partitions are aligned on a 4KB boundary.

What does the output of

  sfdisk -d /dev/sda

look like? (where /dev/sda is one of the disks in your array)

The start of each partition ought to be evenly divisible by 64 for optimal
results.

Gordon

^ permalink raw reply [flat|nested] 22+ messages in thread
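A quick way to run the check Gordon describes across all five array members;
the device list is the one used in this thread, so adjust it as needed:

  # Gordon's rule of thumb: each partition's start sector (the start= value)
  # should divide evenly by 64
  for d in /dev/sd[bcdef]; do
      echo "== $d =="
      sfdisk -d "$d" | grep 'start='
  done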