* two raid5 performance
From: lilofile @ 2013-12-16 1:07 UTC
Cc: linux-raid
When I create one RAID5 array and write to it with dd using a 1 MB block size, throughput reaches 1 GB/s.
When I create a second RAID5 array and run the same dd test on it alone, it also reaches 1 GB/s.
But when I dd to the two arrays simultaneously, the combined throughput only reaches 1.4 GB/s. Why can it not reach 2 GB/s?
The machine has 32 GB of memory, the CPU is an Intel Xeon X5660, and the stripe size is 4096.
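A write test of this shape would reproduce the numbers above (a sketch only: the md device names are assumptions, and oflag=direct bypasses the page cache so dd measures the array rather than RAM):

# write each array with 1 MB blocks; run both lines at once for the combined test
dd if=/dev/zero of=/dev/md0 bs=1M count=10000 oflag=direct
dd if=/dev/zero of=/dev/md1 bs=1M count=10000 oflag=direct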
* Re: two raid5 performance
From: Pieter De Wit @ 2013-12-16 7:52 UTC
To: lilofile; +Cc: linux-raid
On 16/12/2013 14:07, lilofile wrote:
> When I dd to the two arrays simultaneously, the combined throughput only reaches 1.4 GB/s. Why can it not reach 2 GB/s? [...]
What disks are involved in the arrays, and how are they connected?
* Re: two raid5 performance
From: lilofile @ 2013-12-16 7:57 UTC
To: Pieter De Wit; +Cc: linux-raid
The disks in the arrays are STEC SSDs; they are connected through an mpt2sas 6 Gb/s HBA.
* Re: Re: two raid5 performance
From: lilofile @ 2013-12-16 12:29 UTC
Cc: Tommy Apel, linux-raid Raid, Pieter De Wit
During the dd test I watched with the Linux top tool: the two arrays use different cores, and the CPU occupancy for each raid5 reaches about 80%. Is there any lock contention between the two arrays?
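One way to watch this per kernel thread rather than per core (a sketch; each md array is serviced by a single kthread named like md0_raid5, so a thread pinned near 100% of one core is itself the cap):

# per-thread CPU view of the raid5 kthreads during the combined run
top -H -p "$(pgrep -d, raid5)"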
------------------------------------------------------------------
From: Tommy Apel <tommyapeldk@gmail.com>
Sent: Monday, December 16, 2013, 16:13
To: lilofile <lilofile@aliyun.com>
Cc: Pieter De Wit <pieter@insync.za.net>; linux-raid Raid <linux-raid@vger.kernel.org>; Tommy Apel <tommyapeldk@gmail.com>
Subject: Re: Re: two raid5 performance
Try using the taskset command to make sure the dd commands are handled by different cores; otherwise you'll starve one core.
On Dec 16, 2013 8:58 AM, "lilofile" <lilofile@aliyun.com> wrote:
The disks in the arrays are STEC SSDs; they are connected through an mpt2sas 6 Gb/s HBA.
* Re: Re: Re: two raid5 performance
From: Tommy Apel @ 2013-12-16 12:45 UTC
To: lilofile; +Cc: linux-raid Raid
Try this; it puts each dd on a different core so that you don't starve one core with both dd processes (the argument to taskset is a CPU affinity mask: 1 = CPU 0, 2 = CPU 1):
taskset 1 dd if=/dev/zero of=raidset1 bs=1M
taskset 2 dd if=/dev/zero of=raidset2 bs=1M
2013/12/16 lilofile <lilofile@aliyun.com>:
> During the dd test I watched with the Linux top tool: the two arrays use different cores, and the CPU occupancy for each raid5 reaches about 80%. Is there any lock contention between the two arrays? [...]
--
/Tommy
* Re: Re: Re: two raid5 performance
From: lilofile @ 2013-12-16 12:57 UTC
To: Tommy Apel; +Cc: linux-raid Raid
Yes, I tested it the way you describe, but the result is the same.
* Re: Re: two raid5 performance
From: Dag Nygren @ 2013-12-16 13:06 UTC
To: lilofile; +Cc: Pieter De Wit, linux-raid
On Monday 16 December 2013 15:57:42 lilofile wrote:
> The disks in the arrays are STEC SSDs; they are connected through an mpt2sas 6 Gb/s HBA.
Which I/O bus connects the card to the host?
Best
Dag
* Re: Re: Re: Re: two raid5 performance
From: Tommy Apel @ 2013-12-16 13:08 UTC
To: lilofile; +Cc: linux-raid Raid, Tommy Apel
What sort of controller do you have, and how is it connected?
Are you testing on a filesystem or directly against the md device?
2013/12/16 lilofile <lilofile@aliyun.com>:
> Yes, I tested it the way you describe, but the result is the same. [...]
--
/Tommy
* Re: Re: two raid5 performance
From: lilofile @ 2013-12-16 13:13 UTC
To: dag; +Cc: linux-raid, Pieter De Wit
The mpt2sas HBA is 6 Gb/s on PCIe 2.0, and the card has four ports, so its theoretical throughput is about 2.4 GB/s (4 lanes x 6 Gb/s, after encoding overhead); it is not the bottleneck. I tested reading the two RAID5 arrays with dd:
dd if=/dev/md0 of=/dev/null bs=1M
dd if=/dev/md1 of=/dev/null bs=1M
The total read bandwidth reaches 2.3 GB/s, so the I/O bus is not the problem.
* Re: Re: two raid5 performance
From: Tommy Apel @ 2013-12-16 13:32 UTC
To: lilofile; +Cc: linux-raid
Please give us the output of mdadm -D for each array as well and, just in case, the model of the SSDs in use.
2013/12/16 Dag Nygren <dag@newtech.fi>:
> Which I/O bus connects the card to the host? [...]
--
/Tommy
* Re: Re: two raid5 performance
From: lilofile @ 2013-12-16 13:55 UTC
To: Tommy Apel; +Cc: linux-raid
The output of mdadm -D is as follows:
root@host0:~# mdadm -D /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Sat Dec  7 16:26:04 2013
     Raid Level : raid5
     Array Size : 1171499840 (1117.23 GiB 1199.62 GB)
  Used Dev Size : 234299968 (223.45 GiB 239.92 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Dec 13 22:51:04 2013
          State : active
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : host0:md126
           UUID : aaa3075d:f25e3bd0:9dd347c5:fb58192f
         Events : 158

    Number   Major   Minor   RaidDevice State
       0      65      112        0      active sync   /dev/sdx
       1      65       96        1      active sync   /dev/sdw
       2      65       80        2      active sync   /dev/sdv
       3      65       64        3      active sync   /dev/sdu
       4      65       48        4      active sync   /dev/sdt
       6      65       32        5      active sync   /dev/sds
root@host0:~#

root@host0:~# mdadm -D /dev/md129
/dev/md129:
        Version : 1.2
  Creation Time : Sun Dec  8 15:59:11 2013
     Raid Level : raid5
     Array Size : 1171498880 (1117.23 GiB 1199.61 GB)
  Used Dev Size : 234299776 (223.45 GiB 239.92 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Mon Dec 16 21:50:45 2013
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 128K

           Name : host0:md129 (local to host sc0)
           UUID : b80616a9:49540979:90b20aa6:1b22f67b
         Events : 31

    Number   Major   Minor   RaidDevice State
       0      65       16        0      active sync   /dev/sdr
       1      65        0        1      active sync   /dev/sdq
       2       8      224        2      active sync   /dev/sdo
       3       8      240        3      active sync   /dev/sdp
       4       8      208        4      active sync   /dev/sdn
       5       8      192        5      active sync   /dev/sdm
root@host0:~#
* Re: Re: Re: two raid5 performance
From: Tommy Apel @ 2013-12-16 14:02 UTC
To: lilofile; +Cc: linux-raid
First off, your chunk sizes are different on the two volumes (64K vs 128K), and secondly, if you're looking for sequential throughput, move them up to something like 512K or more.
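The chunk size is fixed at creation time, so this means rebuilding the array; a sketch, reusing the device list from the md126 output above (recreating destroys the array's data):

# build a 6-drive RAID5 with a 512 KiB chunk
mdadm --create /dev/md126 --level=5 --raid-devices=6 --chunk=512 \
      /dev/sdx /dev/sdw /dev/sdv /dev/sdu /dev/sdt /dev/sds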
--
/Tommy
* Re: Re: Re: two raid5 performance
From: lilofile @ 2013-12-16 14:21 UTC
To: Tommy Apel; +Cc: linux-raid
It is true the chunk sizes of the two arrays differed in this test, but I have already tried the same chunk size on both, and the result is the same.
The question remains: why can each single RAID5 reach 1 GB/s in a dd write, while dd writing to the two RAID5 arrays at once reaches only 1.4 GB/s?
Theoretically the total should reach 2 GB/s, so we lose about 30%.
Is there any contention inside the md RAID5 code?
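One way to see where the cycles actually go during the combined run (a sketch; perf must be installed, and the symbols named in the comment are only what one would look for, not a known result):

# sample kernel-wide while both dd runs are active; heavy time in
# _raw_spin_lock under the mdX_raid5 threads would suggest lock
# contention, while memcpy/xor time would point at the parity path
perf top -g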
* Re: Re: Re: two raid5 performance
From: Jiang, Dave @ 2013-12-16 16:32 UTC
To: lilofile; +Cc: dag, linux-raid, Pieter De Wit
On Mon, 2013-12-16 at 13:13 +0000, lilofile wrote:
> The mpt2sas HBA is 6 Gb/s on PCIe 2.0, and the card has four ports, so its theoretical throughput is about 2.4 GB/s; it is not the bottleneck. I tested reading the two RAID5 arrays with dd:
> dd if=/dev/md0 of=/dev/null bs=1M
> dd if=/dev/md1 of=/dev/null bs=1M
> The total read bandwidth reaches 2.3 GB/s, so the I/O bus is not the problem.
Did you check whether the controller is using MSI-X? I've encountered some mpt2sas setups where the driver defaulted to INTx. Also see if changing rq affinity helps performance:
echo 2 > /sys/block/sdX/queue/rq_affinity
This should route the HBA's completion handling to the core that sent the I/O request.
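A quick way to check the interrupt mode (a sketch; the exact entry names vary by kernel and slot):

# MSI-X vectors show up as PCI-MSI entries in /proc/interrupts; INTx shows as IO-APIC
grep -i mpt /proc/interrupts
# or look for "MSI-X: Enable+" in the HBA's PCI capability list
lspci -vv | grep -i 'msi-x'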
* Re: Re: Re: two raid5 performance
From: Stan Hoeppner @ 2013-12-17 4:37 UTC
To: lilofile; +Cc: linux-raid
On 12/16/2013 7:13 AM, lilofile wrote:
> The mpt2sas HBA is 6 Gb/s on PCIe 2.0, and the card has four ports, so its theoretical throughput is about 2.4 GB/s; it is not the bottleneck. I tested reading the two RAID5 arrays with dd:
> dd if=/dev/md0 of=/dev/null bs=1M
> dd if=/dev/md1 of=/dev/null bs=1M
> The total read bandwidth reaches 2.3 GB/s, so the I/O bus is not the problem.
Why are you using dd again? I explained in your previous thread why dd will never saturate your SSDs with write IO. Use fio. If you don't know how to make fio do what you want, then ask.
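A fio invocation of roughly the shape being suggested might look like this (a sketch only: device name, queue depth, and runtime are assumptions, and writing to the raw md device destroys its contents):

fio --name=md-seq-write --filename=/dev/md0 --rw=write --bs=1M \
    --ioengine=libaio --iodepth=32 --direct=1 --numjobs=1 \
    --runtime=60 --time_based --group_reporting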
BTW, don't start a new thread for the same issue. Your last thread and
this thread deal with the same RAID5 on STEC SSDs issue. By starting a
new thread everyone loses context and history, which are critically
important when keeping track of performance tests and configurations.
I can't help but point out some irony here. You're concerned with
throughput, yet you're connecting 12 SSDs, ~500 MB/s each, to a SAS
backplane which connects via 4-lane 6G SAS to the HBA. With RAID5
that's 5 GB/s of SAS hardware throughput funneled through a 2.4 GB/s
SFF-8088 cable. So once you test properly and see the write throughput
you already have, you'll realize your cabling/backplane limits you to
half the hardware throughput. For reads you are already seeing this
limit.
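Spelling out the arithmetic behind those figures (the ~500 MB/s per SSD is an estimate from above; 600 MB/s per lane is the usable rate of a 6 Gb/s link after 8b/10b encoding):

12 SSDs x ~500 MB/s                  = ~6.0 GB/s raw
RAID5, 2 arrays x (6 - 1) data disks = ~5.0 GB/s usable write bandwidth
4-lane SAS uplink, 4 x 600 MB/s      = ~2.4 GB/s cable limit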
--
Stan
Thread overview: 15+ messages (newest: 2013-12-17 4:37 UTC)
2013-12-16  1:07 two raid5 performance lilofile
2013-12-16  7:52 ` Pieter De Wit
2013-12-16  7:57 ` Re: two " lilofile
2013-12-16 12:29   ` Re: Re: two " lilofile
2013-12-16 12:45     ` Tommy Apel
2013-12-16 12:57     ` Re: Re: Re: two " lilofile
2013-12-16 13:08       ` Tommy Apel
2013-12-16 13:06   ` Re: two " Dag Nygren
2013-12-16 13:32     ` Tommy Apel
2013-12-16 13:55     ` Re: Re: two " lilofile
2013-12-16 14:02       ` Tommy Apel
2013-12-16 14:21       ` Re: Re: Re: two " lilofile
2013-12-16 13:13   ` Re: Re: two " lilofile
2013-12-16 16:32     ` Jiang, Dave
2013-12-17  4:37     ` Stan Hoeppner