* raid10 layout for 2xSSDs
From: Kasper Sandberg @ 2009-11-16 14:29 UTC
To: linux-raid

Hello.

I've been wanting to create a raid10 array of two SSDs, and I am
currently considering the layout.

As I understand it, the near layout is similar to raid1 and will only
provide a speedup if there are two reads at the same time, not for a
single sequential read.

So the choice is really between far and offset. As I see it, the
difference is that offset tries to reduce the seeking for writing
compared to far, but if you don't consider the seeking penalty, the
average sequential write speed across the entire array should be
roughly the same with offset and far, with offset perhaps being a tad
more "stable". Is this a correct assumption? If it is, that would mean
offset provides a higher "guaranteed" speed than far, but with a lower
maximum speed.

Best regards,
Kasper Sandberg
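For reference, creating a two-device raid10 array in each of the three
layouts looks roughly like this (device names, chunk size and array
name are placeholders, not taken from this thread):

  # near layout, 2 copies (n2) - behaves like a raid1 mirror
  mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=64 \
        --raid-devices=2 /dev/sda1 /dev/sdb1

  # far layout, 2 copies (f2) - second copy in the far half of each device
  mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=64 \
        --raid-devices=2 /dev/sda1 /dev/sdb1

  # offset layout, 2 copies (o2) - second copy in the next stripe, rotated
  mdadm --create /dev/md0 --level=10 --layout=o2 --chunk=64 \
        --raid-devices=2 /dev/sda1 /dev/sdb1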
* Re: raid10 layout for 2xSSDs
From: Goswin von Brederlow @ 2009-11-16 15:26 UTC
To: Kasper Sandberg; Cc: linux-raid

Kasper Sandberg <postmaster@metanurb.dk> writes:

> So the choice is really between far and offset. As I see it, the
> difference is that offset tries to reduce the seeking for writing
> compared to far, [...]

Doesn't offset have the copies of each stripe right next to each other
(just rotated)? So writing one stripe would actually write a 2-block
contiguous chunk per device.

With far copies the stripes are far from each other and you get 2
separate contiguous chunks per device.

What I'm aiming at is that offset might fit better into erase blocks,
cause less internal fragmentation on the disk and give better wear
leveling. That might improve speed and lifetime. But that is just a
thought. Maybe test it, and do ask Intel (or other vendors) about it.

Regards,
Goswin
* Re: raid10 layout for 2xSSDs
From: Keld Jørn Simonsen @ 2009-11-16 16:13 UTC
To: Goswin von Brederlow; Cc: Kasper Sandberg, linux-raid

On Mon, Nov 16, 2009 at 04:26:32PM +0100, Goswin von Brederlow wrote:
> Kasper Sandberg <postmaster@metanurb.dk> writes:
> > [...]
>
> What I'm aiming at is that offset might fit better into erase blocks,
> cause less internal fragmentation on the disk and give better wear
> leveling. That might improve speed and lifetime. But that is just a
> thought. Maybe test it, and do ask Intel (or other vendors) about it.

I think the caching of the file system levels all of this out, if we
are talking SSDs. The presumption here is that with an SSD there is no
rotational latency and no head movement.

The caching means that for writing, more buffers are chained together
and can be written at once. For near, logical blocks 1-8 can be
written to sector 0 of disk 1 in one go, and logical blocks 1-8 can be
written to sector 0 of disk 2 in one go.

For far it will be, for disk 1: blocks 1, 3, 5 and 7 to sector 0, and
blocks 2, 4, 6 and 8 to sector n/2 (n being the number of sectors on
the disk partition). For far and disk 2, it will be blocks 2, 4, 6 and
8 to sector 0, and blocks 1, 3, 5 and 7 to sector n/2. Caching thus
reduces seeking significantly, from once per block to once per flush
of the cache (sync). Similarly the cache would also almost eliminate
seeking for the offset layout.

But I would like to see some numbers on this, for SSDs.
Why don't you try it out and tell us what you find?

Best regards
keld
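A quick way to get sequential numbers for the two layouts would be
something along these lines (device, mount point and sizes are
placeholders; O_DIRECT is used so the page cache does not hide the
difference between layouts):

  # sequential read straight from the array, bypassing the page cache
  dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct

  # sequential write onto a filesystem mounted from the array
  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096 oflag=direct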
* Re: raid10 layout for 2xSSDs
From: Goswin von Brederlow @ 2009-11-17 4:34 UTC
To: Keld Jørn Simonsen; Cc: Kasper Sandberg, linux-raid

Keld Jørn Simonsen <keld@keldix.com> writes:

> On Mon, Nov 16, 2009 at 04:26:32PM +0100, Goswin von Brederlow wrote:
>> [...]
>> What I'm aiming at is that offset might fit better into erase blocks,
>> cause less internal fragmentation on the disk and give better wear
>> leveling. That might improve speed and lifetime. But that is just a
>> thought. Maybe test it, and do ask Intel (or other vendors) about it.
>
> I think the caching of the file system levels all of this out, if we
> are talking SSDs. The presumption here is that with an SSD there is no
> rotational latency and no head movement.

The filesystem has nothing to do with this. It caches the same in both
situations. The only change happens in the block layer.

> The caching means that for writing, more buffers are chained together
> and can be written at once. For near, logical blocks 1-8 can be
> written to sector 0 of disk 1 in one go, and logical blocks 1-8 can be
> written to sector 0 of disk 2 in one go.

Which is what I was saying.

> For far it will be, for disk 1: blocks 1, 3, 5 and 7 to sector 0, and
> blocks 2, 4, 6 and 8 to sector n/2 (n being the number of sectors on
> the disk partition). For far and disk 2, it will be blocks 2, 4, 6 and
> 8 to sector 0, and blocks 1, 3, 5 and 7 to sector n/2. Caching thus
> reduces seeking significantly, from once per block to once per flush
> of the cache (sync). Similarly the cache would also almost eliminate
> seeking for the offset layout.

There is no seeking (head movement) and no rotational latency
involved. That part is completely irrelevant.

The important part is that you now have 4 IO operations of half the
size compared to the 2 IO operations of the offset case. The speed and
wear will depend on the quality of the SSD and how well it copes with
small IO.

> But I would like to see some numbers on this, for SSDs.
> Why don't you try it out and tell us what you find?

I would be interested in this myself. I don't have an SSD yet but I'm
tempted to buy one. When you test, please also test random access. I
would guess that in any sequential test the amount of caching going on
will make all IO operations so big that no difference shows.

Regards,
Goswin
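For the random-access side, an fio run against each layout could look
roughly like this (job parameters and device name are only an example,
not anything tested in this thread; the write job destroys the data on
the array):

  # random 4k reads against the md device
  fio --name=randread --filename=/dev/md0 --direct=1 --ioengine=libaio \
      --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based

  # random 4k writes (destructive!)
  fio --name=randwrite --filename=/dev/md0 --direct=1 --ioengine=libaio \
      --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based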
* Re: raid10 layout for 2xSSDs
From: Kasper Sandberg @ 2009-11-17 15:05 UTC
To: Goswin von Brederlow; Cc: Keld Jørn Simonsen, linux-raid

On Tue, 2009-11-17 at 05:34 +0100, Goswin von Brederlow wrote:
> Keld Jørn Simonsen <keld@keldix.com> writes:
> > [...]
>
> There is no seeking (head movement) and no rotational latency
> involved. That part is completely irrelevant.
>
> The important part is that you now have 4 IO operations of half the
> size compared to the 2 IO operations of the offset case. The speed and
> wear will depend on the quality of the SSD and how well it copes with
> small IO.

Very interesting. I have some older SSDs that are slower when doing a
SMALLER write, so in this case offset should be a lot better.
* Re: raid10 layout for 2xSSDs
From: Robin Hill @ 2009-11-16 16:31 UTC
To: linux-raid

On Mon Nov 16, 2009 at 04:26:32PM +0100, Goswin von Brederlow wrote:

> What I'm aiming at is that offset might fit better into erase blocks,
> cause less internal fragmentation on the disk and give better wear
> leveling. That might improve speed and lifetime. But that is just a
> thought. Maybe test it, and do ask Intel (or other vendors) about it.

I very much doubt this will make any difference. With SSDs you have to
throw out any preconceptions of internal layout you may have. You have
absolutely no idea of (or control over) where two consecutive blocks
will actually get written. Fragmentation and seek time are thus
irrelevant (or uncontrollable, anyway).

I don't see how any RAID-10 layout would perform better than another
with SSDs, unless there are internal optimisations/constraints which
affect sequential reading from multiple devices. I'm not aware of any
though - RAID-10 n2 may be the same layout as RAID-1 but it's an
entirely separate piece of code.

Cheers,
    Robin
* Re: raid10 layout for 2xSSDs
From: Christopher Chen @ 2009-11-16 16:38 UTC
To: linux-raid

On Mon, Nov 16, 2009 at 8:31 AM, Robin Hill <robin@robinhill.me.uk> wrote:
> On Mon Nov 16, 2009 at 04:26:32PM +0100, Goswin von Brederlow wrote:
> [...]
>
> I don't see how any RAID-10 layout would perform better than another
> with SSDs, unless there are internal optimisations/constraints which
> affect sequential reading from multiple devices. I'm not aware of any
> though - RAID-10 n2 may be the same layout as RAID-1 but it's an
> entirely separate piece of code.

Don't forget that RAID-1 also does balanced reads.

cc
* Re: raid10 layout for 2xSSDs
From: Robin Hill @ 2009-11-16 16:52 UTC
To: linux-raid

On Mon Nov 16, 2009 at 08:38:34AM -0800, Christopher Chen wrote:
> On Mon, Nov 16, 2009 at 8:31 AM, Robin Hill <robin@robinhill.me.uk> wrote:
> > [...]
> > I don't see how any RAID-10 layout would perform better than another
> > with SSDs, unless there are internal optimisations/constraints which
> > affect sequential reading from multiple devices. I'm not aware of any
> > though - RAID-10 n2 may be the same layout as RAID-1 but it's an
> > entirely separate piece of code.
>
> Don't forget that RAID-1 also does balanced reads.

Only for parallel reads. A single sequential read will only access a
single disk, whereas I believe for RAID-10 it will access both disks.

Cheers,
    Robin
* Re: raid10 layout for 2xSSDs
From: Goswin von Brederlow @ 2009-11-17 4:36 UTC
To: linux-raid

Robin Hill <robin@robinhill.me.uk> writes:

> On Mon Nov 16, 2009 at 04:26:32PM +0100, Goswin von Brederlow wrote:
> [...]
>
> I don't see how any RAID-10 layout would perform better than another
> with SSDs, unless there are internal optimisations/constraints which
> affect sequential reading from multiple devices. I'm not aware of any
> though - RAID-10 n2 may be the same layout as RAID-1 but it's an
> entirely separate piece of code.

Depending on the SSD in question, the limiting factor will be the
number of IO operations per second. Some SSDs have shown that they can
write the same number of 1-byte blocks per second as 64k blocks per
second. If offset writes 2x 64k but far writes 4x 32k, then far will
be half the speed on such a cheap SSD.

Regards,
Goswin
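How badly a given SSD is hurt by the smaller IOs of the far layout
could be checked with a block-size sweep, roughly like this (device
name and job parameters are placeholders; the writes are destructive):

  # sequential writes at several block sizes; compare 32k against 64k
  for bs in 4k 32k 64k 128k; do
      fio --name=write-$bs --filename=/dev/md0 --direct=1 \
          --ioengine=libaio --rw=write --bs=$bs --iodepth=4 \
          --runtime=30 --time_based
  done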
* Re: raid10 layout for 2xSSDs
From: Christopher Chen @ 2009-11-16 16:08 UTC
To: Kasper Sandberg; Cc: linux-raid

On Mon, Nov 16, 2009 at 6:29 AM, Kasper Sandberg <postmaster@metanurb.dk> wrote:
> Hello.
>
> I've been wanting to create a raid10 array of two SSDs, and I am
> currently considering the layout.
>
> [...] If it is, that would mean offset provides a higher "guaranteed"
> speed than far, but with a lower maximum speed.

Do you plan to have more than two devices in the array? Raid 10 isn't
magic. If you don't have more than two devices, I suppose your seek
time might be halved for reads (and higher for writes), but you won't
be able to do any striping.

I'm a bit confused as to the number of people popping in recently
wanting to run raid 10 on two-disk "arrays".

cc
* Re: raid10 layout for 2xSSDs
From: Kasper Sandberg @ 2009-11-16 21:02 UTC
To: Christopher Chen; Cc: linux-raid

On Mon, 2009-11-16 at 08:08 -0800, Christopher Chen wrote:
> On Mon, Nov 16, 2009 at 6:29 AM, Kasper Sandberg <postmaster@metanurb.dk> wrote:
> > [...]
>
> Do you plan to have more than two devices in the array? Raid 10 isn't

no

> magic. If you don't have more than two devices, I suppose your seek
> time might be halved for reads (and higher for writes), but you won't
> be able to do any striping.
>
> I'm a bit confused as to the number of people popping in recently
> wanting to run raid 10 on two-disk "arrays".

To get the doubled single-stream sequential read performance.
* Re: raid10 layout for 2xSSDs
From: Majed B. @ 2009-11-16 21:19 UTC
To: LinuxRaid

Wouldn't you be getting the same data twice? Or would the kernel
request half the data from one disk and the other half from the other
disk? (file1 -> chunk1@disk1 & chunk2@disk2, chunk3@disk1 &
chunk4@disk2)

On Tue, Nov 17, 2009 at 12:02 AM, Kasper Sandberg <postmaster@metanurb.dk> wrote:
> On Mon, 2009-11-16 at 08:08 -0800, Christopher Chen wrote:
> > [...]
> > I'm a bit confused as to the number of people popping in recently
> > wanting to run raid 10 on two-disk "arrays".
>
> To get the doubled single-stream sequential read performance.

--
Majed B.
* Re: raid10 layout for 2xSSDs
From: Kasper Sandberg @ 2009-11-16 21:33 UTC
To: Majed B.; Cc: LinuxRaid

On Tue, 2009-11-17 at 00:19 +0300, Majed B. wrote:
> Wouldn't you be getting the same data twice? Or would the kernel
> request half the data from one disk and the other half from the other
> disk? (file1 -> chunk1@disk1 & chunk2@disk2, chunk3@disk1 &
> chunk4@disk2)

It will request different chunks from different disks, to speed things
up.
* Re: raid10 layout for 2xSSDs
From: Goswin von Brederlow @ 2009-11-17 4:46 UTC
To: Christopher Chen; Cc: Kasper Sandberg, linux-raid

Christopher Chen <muffaleta@gmail.com> writes:

> On Mon, Nov 16, 2009 at 6:29 AM, Kasper Sandberg <postmaster@metanurb.dk> wrote:
> > [...]
>
> Do you plan to have more than two devices in the array? Raid 10 isn't
> magic. If you don't have more than two devices, I suppose your seek
> time might be halved for reads (and higher for writes), but you won't
> be able to do any striping.
>
> I'm a bit confused as to the number of people popping in recently
> wanting to run raid 10 on two-disk "arrays".

I think you are missing the fact that Linux has a special raid10
module. This is not just a raid0 over raid1 (or raid1 over raid0). The
raid10 module is much more flexible and allows you to have X copies of
the data on Y disks (for any X <= Y) in different layouts.

The layouts are like this (for 2 copies on 2 disks):

near (same as raid1):
Disk A: 0 1 2 3 4 5 6 7 8 9
Disk B: 0 1 2 3 4 5 6 7 8 9

offset:
Disk A: 0 1 2 3 4 5 6 7 8 9
Disk B: 1 0 3 2 5 4 7 6 9 8

far:
Disk A: 0 2 4 6 8  1 3 5 7 9
Disk B: 1 3 5 7 9  0 2 4 6 8

In the case of offset and far copies you can see that the data is
striped like in raid0, and the raid10 module will read data from
multiple drives in parallel even with a single stream.

In raid10 far mode the reads will also (afaik, not 100% sure) always
come from the first half of the disk. With rotational disks that is
usually the faster part. So you not only get double the read speed
from striping but also more speed from the disks themselves. For
example my disk does 80MB/s at the start, 60MB/s in the middle and
40MB/s at the end. Reads will then only use the 80-60MB/s range.

Regards,
Goswin
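Whichever layout is chosen, it can be checked after creation;
something like the following should show it (array name is a
placeholder):

  mdadm --detail /dev/md0
  # the output should contain a line such as "Layout : near=2",
  # "Layout : offset=2" or "Layout : far=2"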