* SSD & mechanical disc in RAID 1
@ 2010-01-09 17:53 Wil Reichert

From: Wil Reichert @ 2010-01-09 17:53 UTC (permalink / raw)
To: linux raid

Has anyone ever tried putting an SSD and a mechanical disc in RAID 1
using write-mostly? The goal would be to exploit the speed & latency
virtues of an SSD while retaining the integrity of the traditional
storage. Would this setup even work as I expect it to?

Wil

^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: SSD & mechanical disc in RAID 1

From: Keld Jørn Simonsen @ 2010-01-09 19:23 UTC (permalink / raw)
To: Wil Reichert; +Cc: linux raid

On Sat, Jan 09, 2010 at 09:53:11AM -0800, Wil Reichert wrote:
> Has anyone ever tried putting an SSD and a mechanical disc in RAID 1
> using write-mostly? The goal would be to exploit the speed & latency
> virtues of an SSD while retaining the integrity of the traditional
> storage. Would this setup even work as I expect it to?

I think you are better served by trying out RAID10 in the far or offset
layouts. RAID1 would not per se gain from the low latency of an SSD:
RAID1 should function the same with a hard disk as with an SSD, and with
a hard disk it would not make sense to multiplex reads of a file, as
skipping a block and reading a block take essentially the same time.

Best regards
Keld
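[Editor's note: a minimal sketch of Keld's RAID10 suggestion with mdadm. The device names are placeholders; `f2` requests two copies in the "far" layout, `o2` two copies in the "offset" layout.]

```shell
# Two-device RAID10, "far" layout (2 far copies).
# /dev/sda2 and /dev/sdb2 are placeholder partitions.
mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 \
      /dev/sda2 /dev/sdb2

# The offset layout would instead be requested with:
#   mdadm --create /dev/md0 --level=10 --layout=o2 --raid-devices=2 ...

# Confirm the layout that was actually used:
mdadm --detail /dev/md0 | grep -i layout
```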
* Re: SSD & mechanical disc in RAID 1

From: Goswin von Brederlow @ 2010-01-22 16:37 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: Wil Reichert, linux raid

Keld Jørn Simonsen <keld@keldix.com> writes:

> On Sat, Jan 09, 2010 at 09:53:11AM -0800, Wil Reichert wrote:
>> Has anyone ever tried putting an SSD and a mechanical disc in RAID 1
>> using write-mostly? The goal would be to exploit the speed & latency
>> virtues of an SSD while retaining the integrity of the traditional
>> storage. Would this setup even work as I expect it to?
>
> I think you are better served by trying out RAID10 in the far or offset
> layouts. RAID1 would per se not gain from the low latency times of
> SSD. RAID1 should function the same with hard disk as with SSD,
> and with hard disk it would not make sense to multiplex reading
> of a file, as skipping and reading a block essentially takes the same
> time.

With raid10 in the far or offset layout every (large) read and write
would always access both the SSD and the rotating disk. That would make
every operation wait for the rotating disk to seek.

I think his initial idea of raid1 is very good. With write-mostly all
reads should come from the fast SSD. I would also add write-behind.
That way writes will not wait for the rotating disk and get the full
SSD speed as well.

MfG
        Goswin
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
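[Editor's note: the write-mostly plus write-behind combination Goswin describes can be expressed directly on the mdadm command line. A sketch with placeholder devices (SSD on /dev/sda2, rotating disk on /dev/sdb2); note that write-behind only works together with a write-intent bitmap.]

```shell
# RAID1 where the rotating disk is flagged write-mostly: md serves reads
# from the SSD and falls back to the disk only if the SSD fails.
# --write-behind allows writes to the write-mostly member to lag behind,
# so writes complete at SSD speed; it requires a write-intent bitmap.
mdadm --create /dev/md1 --level=1 --raid-devices=2 \
      --bitmap=internal --write-behind=16384 \
      /dev/sda2 --write-mostly /dev/sdb2

# The (W) flag in /proc/mdstat confirms which member is write-mostly:
cat /proc/mdstat
```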
* Re: SSD & mechanical disc in RAID 1

From: Aryeh Gregor @ 2010-01-31 20:21 UTC (permalink / raw)
To: Goswin von Brederlow; +Cc: Keld Jørn Simonsen, Wil Reichert, linux raid

On Fri, Jan 22, 2010 at 11:37 AM, Goswin von Brederlow
<goswin-v-b@web.de> wrote:
> I think his initial idea of raid1 is very good. With write-mostly all
> reads should come from the fast SSD. I would also add write-behind.
> That way writes will not wait for the rotating disk and get the full
> SSD speed as well.

Recently I got an Intel X25-M 80G, and have been trying this out. I
figured some people here might be interested in the results. I've
mirrored the SSD to a slow 80G partition on an old spinning-rust 160G
disk, which is set to write-mostly, with write-behind of 16384 (the
max). The bitmap is on a separate unmirrored partition on the SSD.
My root filesystem is on it, ext4 on top of LVM, using 60G out of
73.77G allocated to LVM, with a separate 500M-ish mirrored boot
partition at the start, and a gap in between for storing unmirrored
things (like all my RAID bitmaps).

I did six runs of bonnie++ overnight:

1) 128 files, both disks working.
2) 1024 files, both disks working.
3) 128 files, conventional disk failed and removed.
4) 1024 files, conventional disk failed and removed.
5) Same as (2).
6) Same as (4). (Note that in this case, the disk ended up being
failed but not removed, but this should make no difference.)

At the end of this e-mail, I've attached the full script I ran and its
full output.
These are the bonnie++ numbers for the six tests, best viewed in a
fixed-width font on a pretty wide screen (126 chars):

          |   Sequential out   |  Sequential in  | Random |       | Sequential create | Random create |
          |per char|block|rewrite|per char|block | Seeks  | files |Create|Read|Delete |Create|Read|Delete|
monoid,6G,47571,90,55343,30,41390,24,55803,96,233594,54,4923.0,36,128,25973,94,+++++,+++,15910,41,27179,85,+++++,+++,11922,38
monoid,6G,48358,90,55378,25,41338,25,56883,95,233689,52,4233.0,29,1024,19819,70,64728,75,848,4,14928,60,55484,79,463,3
monoid,6G,48751,90,77485,27,45263,25,55660,94,238757,49,5107.7,33,128,28027,86,+++++,+++,20161,49,29618,88,+++++,+++,14780,43
monoid,6G,44503,83,73894,29,42975,26,56463,95,237173,53,5052.9,36,1024,18073,73,66151,81,5348,28,18102,65,60213,77,3441,23
monoid,6G,48580,91,55567,24,40748,22,56803,95,237006,49,4828.2,29,1024,20332,67,73175,80,783,4,13615,45,58066,74,489,3
monoid,6G,48966,92,77517,29,43999,24,56863,94,198557,38,3553.8,16,1024,19273,68,72414,86,6326,31,16499,60,56438,75,2784,15

(Where two numbers are given, the first is KB per second and the second
is CPU usage, as far as I can figure.) Note that there was lots of
memory free here, about 2.5G (-/+ buffers/cache as reported by free).
My conclusions:

1) The file creation tests with only 128 files should just be ignored;
they look like they're in-memory. The 1024-file tests look more
plausible.

2) All reads perform comparably whether the extra disk is present or
not, as expected. There's some variation, but then, I wasn't being very
scientific.

3) Tests 3, 4, and 6 (only SSD) are about 35% faster on the sequential
block output test. The per-char sequential output test was CPU-bound,
and everything performed the same. Everything also performed the same
on the rewrite test; I'm not sure why.

4) Sequential file creation is about the same between tests 4 and 6
(only SSD) and tests 2 and 5 (both disks).
Random file creation gives the SSD-only runs about a 20% lead. But both
sequential and random deletion have an order-of-magnitude difference
between the two. I don't know why this might be -- lots of buffering for
some operations but not others?

Overall, this seems like a very feasible setup, and I'll certainly be
sticking with it, even though it will obviously slow down some writes.
Hope this data will be useful (or at least interesting) to someone.

The script that I ran from the at job follows, with a couple of comments
added. I ran it using ionice -c1 to reduce the effects of any concurrent
operations.

#!/bin/bash
cd /tmp
echo Test 1; echo
free -m
bonnie -u 0 -n 128
echo; echo Test 2; echo
free -m
echo
bonnie -u 0 -n 1024
echo; echo Test 3; echo
free -m
echo
mdadm --fail /dev/md1 /dev/sda2
sleep 5
mdadm --remove /dev/md1 /dev/sda2
echo
bonnie -u 0 -n 128
echo; echo Test 4; echo
free -m
echo
bonnie -u 0 -n 1024
echo
free -m
echo; echo 'Re-adding sda2 and waiting for sync'; echo
sudo mdadm --add /dev/md1 --write-mostly /dev/sda2
sleep 1800
cat /proc/mdstat
echo; echo Test 5; echo
free -m
echo
bonnie -u 0 -n 1024
echo
mdadm --fail /dev/md1 /dev/sda2
# The second command was run too fast and failed, I forgot the sleep 5 from above.
mdadm --remove /dev/md1 /dev/sda2
echo; echo Test 6; echo
free -m
echo
bonnie -u 0 -n 1024
echo
free -m
echo
# Also failed, since the device hadn't been removed.
sudo mdadm --add /dev/md1 --write-mostly /dev/sda2

Output of script:

Test 1

             total       used       free     shared    buffers     cached
Mem:          3024       2085        938          0        589        973
-/+ buffers/cache:        522       2501
Swap:         1023        526        497

Using uid:0, gid:0.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
monoid           6G 47571  90 55343  30 41390  24 55803  96 233594  54  4923  36
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                128 25973  94 +++++ +++ 15910  41 27179  85 +++++ +++ 11922  38
monoid,6G,47571,90,55343,30,41390,24,55803,96,233594,54,4923.0,36,128,25973,94,+++++,+++,15910,41,27179,85,+++++,+++,11922,38

Test 2

             total       used       free     shared    buffers     cached
Mem:          3024        544       2480          0         51         30
-/+ buffers/cache:        462       2562
Swap:         1023        526        497

Using uid:0, gid:0.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
monoid           6G 48358  90 55378  25 41338  25 56883  95 233689  52  4233  29
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
               1024 19819  70 64728  75   848   4 14928  60 55484  79   463   3
monoid,6G,48358,90,55378,25,41338,25,56883,95,233689,52,4233.0,29,1024,19819,70,64728,75,848,4,14928,60,55484,79,463,3

Test 3

             total       used       free     shared    buffers     cached
Mem:          3024        788       2235          0        332         50
-/+ buffers/cache:        405       2619
Swap:         1023        565        458

mdadm: set /dev/sda2 faulty in /dev/md1
mdadm: hot removed /dev/sda2

Using uid:0, gid:0.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
monoid           6G 48751  90 77485  27 45263  25 55660  94 238757  49  5108  33
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                128 28027  86 +++++ +++ 20161  49 29618  88 +++++ +++ 14780  43
monoid,6G,48751,90,77485,27,45263,25,55660,94,238757,49,5107.7,33,128,28027,86,+++++,+++,20161,49,29618,88,+++++,+++,14780,43

Test 4

             total       used       free     shared    buffers     cached
Mem:          3024        441       2582          0         48         25
-/+ buffers/cache:        367       2656
Swap:         1023        594        429

Using uid:0, gid:0.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
monoid           6G 44503  83 73894  29 42975  26 56463  95 237173  53  5053  36
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
               1024 18073  73 66151  81  5348  28 18102  65 60213  77  3441  23
monoid,6G,44503,83,73894,29,42975,26,56463,95,237173,53,5052.9,36,1024,18073,73,66151,81,5348,28,18102,65,60213,77,3441,23

             total       used       free     shared    buffers     cached
Mem:          3024        734       2289          0        329         42
-/+ buffers/cache:        362       2661
Swap:         1023        609        414

Re-adding sda2 and waiting for sync

mdadm: re-added /dev/sda2
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb1[0] sda1[1](W)
      521984 blocks [2/2] [UU]
      bitmap: 0/64 pages [0KB], 4KB chunk, file: /mnt/ssd-junk/boot-bitmap

md3 : active raid10 sdc2[2] sdd2[1]
      488102656 blocks super 1.2 64K chunks 2 far-copies [2/2] [UU]
      bitmap: 1/466 pages [4KB], 512KB chunk, file: /mnt/ssd-junk/extra-bitmap

md1 : active raid1 sda2[1](W) sdb2[0]
      77352896 blocks [2/2] [UU]
      bitmap: 29/296 pages [116KB], 128KB chunk, file: /mnt/ssd-junk/root-bitmap

unused devices: <none>

Test 5

             total       used       free     shared    buffers     cached
Mem:          3024       1374       1649          0        357        621
-/+ buffers/cache:        396       2627
Swap:         1023        602        421

Using uid:0, gid:0.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
monoid           6G 48580  91 55567  24 40748  22 56803  95 237006  49  4828  29
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
               1024 20332  67 73175  80   783   4 13615  45 58066  74   489   3
monoid,6G,48580,91,55567,24,40748,22,56803,95,237006,49,4828.2,29,1024,20332,67,73175,80,783,4,13615,45,58066,74,489,3

mdadm: set /dev/sda2 faulty in /dev/md1
mdadm: hot remove failed for /dev/sda2: Device or resource busy

Test 6

             total       used       free     shared    buffers     cached
Mem:          3024        720       2303          0        335         40
-/+ buffers/cache:        345       2678
Swap:         1023        653        370

Using uid:0, gid:0.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
monoid           6G 48966  92 77517  29 43999  24 56863  94 198557  38  3554  16
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
               1024 19273  68 72414  86  6326  31 16499  60 56438  75  2784  15
monoid,6G,48966,92,77517,29,43999,24,56863,94,198557,38,3553.8,16,1024,19273,68,72414,86,6326,31,16499,60,56438,75,2784,15

             total       used       free     shared    buffers     cached
Mem:          3024       2174        849          0        545       1261
-/+ buffers/cache:        366       2657
Swap:         1023        674        349

mdadm: Cannot open /dev/sda2: Device or resource busy
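[Editor's note: the "about 35% faster" figure for sequential block output can be recomputed from the CSV lines above; field 5 of a bonnie++ 1.03 CSV line is the block-output rate in K/sec. Averaging the three both-disks runs (tests 1, 2, 5) against the three SSD-only runs (tests 3, 4, 6) gives roughly 38%.]

```shell
# Mean sequential block output (CSV field 5) for the both-disks runs.
both=$(awk -F, '{s += $5} END {printf "%.0f", s/NR}' <<'EOF'
monoid,6G,47571,90,55343,30,41390,24,55803,96,233594,54,4923.0,36
monoid,6G,48358,90,55378,25,41338,25,56883,95,233689,52,4233.0,29
monoid,6G,48580,91,55567,24,40748,22,56803,95,237006,49,4828.2,29
EOF
)
# Same for the SSD-only runs.
ssd=$(awk -F, '{s += $5} END {printf "%.0f", s/NR}' <<'EOF'
monoid,6G,48751,90,77485,27,45263,25,55660,94,238757,49,5107.7,33
monoid,6G,44503,83,73894,29,42975,26,56463,95,237173,53,5052.9,36
monoid,6G,48966,92,77517,29,43999,24,56863,94,198557,38,3553.8,16
EOF
)
echo "both disks: $both K/s, SSD only: $ssd K/s"
awk -v a="$both" -v b="$ssd" 'BEGIN {printf "speedup: %.0f%%\n", (b/a - 1) * 100}'
# prints:
#   both disks: 55429 K/s, SSD only: 76299 K/s
#   speedup: 38%
```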
* Re: SSD & mechanical disc in RAID 1

From: Wil Reichert @ 2010-02-01 20:56 UTC (permalink / raw)
To: Aryeh Gregor; +Cc: Goswin von Brederlow, Keld Jørn Simonsen, linux raid

On Sun, Jan 31, 2010 at 12:21 PM, Aryeh Gregor
<Simetrical+list@gmail.com> wrote:
> On Fri, Jan 22, 2010 at 11:37 AM, Goswin von Brederlow
> <goswin-v-b@web.de> wrote:
>> I think his initial idea of raid1 is very good. With write-mostly all
>> reads should come from the fast SSD. I would also add write-behind.
>> That way writes will not wait for the rotating disk and get the full
>> SSD speed as well.
>
> Recently I got an Intel X25-M 80G, and have been trying this out. I
> figured some people here might be interested in the results. I've
> mirrored the SSD to a slow 80G partition on an old spinning-rust 160G
> disk, which is set to write-mostly, with write-behind of 16384 (the
> max). The bitmap is on a separate unmirrored partition on the SSD.
> My root filesystem is on it, ext4 on top of LVM, using 60G out of
> 73.77G allocated to LVM, with a separate 500M-ish mirrored boot
> partition at the start, and a gap in between for storing unmirrored
> things (like all my RAID bitmaps).
>
> I did six runs of bonnie++ overnight:
>
> 1) 128 files, both disks working.
> 2) 1024 files, both disks working.
> 3) 128 files, conventional disk failed and removed.
> 4) 1024 files, conventional disk failed and removed.
> 5) Same as (2).
> 6) Same as (4). (Note that in this case, the disk ended up being
> failed but not removed, but this should make no difference.)
>
> At the end of this e-mail, I've attached the full script I ran and its
> full output.
> [full bonnie++ numbers, conclusions, test script, and raw output quoted
> verbatim from the previous message snipped]
Thanks for providing this information, it's quite useful. I suspect I'll
be trying something like this myself in the near future.

Wil
* Re: SSD & mechanical disc in RAID 1

From: Bill Davidsen @ 2010-02-01 20:06 UTC (permalink / raw)
To: Goswin von Brederlow; +Cc: Keld Jørn Simonsen, Wil Reichert, linux raid

Goswin von Brederlow wrote:
> Keld Jørn Simonsen <keld@keldix.com> writes:
>> On Sat, Jan 09, 2010 at 09:53:11AM -0800, Wil Reichert wrote:
>>> Has anyone ever tried putting an SSD and a mechanical disc in RAID 1
>>> using write-mostly? The goal would be to exploit the speed & latency
>>> virtues of an SSD while retaining the integrity of the traditional
>>> storage. Would this setup even work as I expect it to?
>>
>> I think you are better served by trying out RAID10 in the far or offset
>> layouts. RAID1 would per se not gain from the low latency times of
>> SSD. [...]
>
> With raid10 in the far or offset layout every (large) read and write
> would always access both the SSD and rotating disk. That would make
> every operation wait for the rotating disk to seek.
>
> I think his initial idea of raid1 is very good. With write-mostly all
> reads should come from the fast SSD. I would also add write-behind.
> That way writes will not wait for the rotating disk and get the full
> SSD speed as well.

I had an SSD waiting to be deployed, and I tried putting a journal on it
and using a mount with data=journal. Supposedly when the data is written
to the journal the write will be "complete" and effective write speed
for many small random writes will be higher. I don't have the results
handy, but they were not so great that I went out and got another SSD.
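[Editor's note: a sketch of the external-journal setup Bill describes, using e2fsprogs. Device names are placeholders (/dev/sdb1 on the SSD for the journal, /dev/sda1 on the rotating disk for the data filesystem); the journal device's block size must match the filesystem that uses it.]

```shell
# Create a journal device on the SSD partition.
mke2fs -O journal_dev -b 4096 /dev/sdb1

# Create the data filesystem on the rotating disk, pointing it at the
# external journal on the SSD.
mkfs.ext4 -b 4096 -J device=/dev/sdb1 /dev/sda1

# Mount with full data journalling, so small random writes are reported
# complete as soon as they reach the SSD-backed journal.
mount -o data=journal /dev/sda1 /mnt
```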
--
Bill Davidsen <davidsen@tmr.com>
"We can't solve today's problems by using the same thinking we
used in creating them." - Einstein
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 8+ messages in thread
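[Editor's note: Bill's experiment above — an SSD-hosted journal plus a
data=journal mount — can be sketched roughly as follows. The device
names and mount point are placeholder assumptions; the options shown
rely on e2fsprogs' external-journal support for ext3.]

```shell
# Assumed layout: /dev/sdc1 = small SSD partition for the journal,
# /dev/sdb1 = data filesystem on the rotating disk.

# Format the SSD partition as a standalone journal device.
mke2fs -O journal_dev /dev/sdc1

# Create the filesystem with its journal on the external device.
mkfs.ext3 -J device=/dev/sdc1 /dev/sdb1

# data=journal commits file data (not just metadata) through the
# journal, so small random writes complete once the SSD has them.
mount -o data=journal /dev/sdb1 /mnt
```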
* Re: SSD & mechanical disc in RAID 1
  2010-01-09 17:53 SSD & mechanical disc in RAID 1 Wil Reichert
  2010-01-09 19:23 ` Keld Jørn Simonsen
@ 2010-02-01 21:10 ` David Rees
  2010-02-02 14:07   ` Aryeh Gregor
  1 sibling, 1 reply; 8+ messages in thread
From: David Rees @ 2010-02-01 21:10 UTC (permalink / raw)
  To: Wil Reichert; +Cc: linux raid

On Sat, Jan 9, 2010 at 9:53 AM, Wil Reichert <wil.reichert@gmail.com> wrote:
> Has anyone ever tried putting an SSD and a mechanical disc in RAID 1
> using write-mostly? The goal would be to extol the speed & latency
> virtues of an SSD while retaining the integrity of the traditional
> storage. Would this setup even work as I expect it to?

I have such a setup using a 30GB OCZ Vertex and an old 120GB Seagate
7200.7 (IDE!).

Read performance is exactly as you'd expect: very fast, since all reads
come from the SSD (though I haven't benchmarked it to verify). At least
my seat-of-the-pants (SOTP) impression in comparison to other SSDs is
very similar.

Write performance is also what you'd expect: the same as an old IDE
drive. For a budget setup, it does what I expected it to.

I suspect that if you used write-behind with a write-intent bitmap
stored only on the SSD, you'd get some of that performance back, but
potentially lose a bit of reliability.

-Dave
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 8+ messages in thread
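[Editor's note: David's suggestion — write-behind with the write-intent
bitmap kept off the rotating disk — might be sketched as below. The
bitmap path and the write-behind queue depth of 256 are illustrative
assumptions; write-behind only applies to members flagged write-mostly.]

```shell
# Assumed setup: /dev/md0 is an existing RAID1 whose rotating-disk
# member was added with --write-mostly, and /ssd is a filesystem that
# lives on the SSD (an external bitmap file must live on a device
# outside the array itself).
mdadm --grow /dev/md0 --bitmap=/ssd/md0-bitmap --write-behind=256
```

With this in place, writes are acknowledged once the SSD has them, and
the bitmap lets the lagging rotating disk resync only the dirty regions
after a crash, rather than the whole array.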
* Re: SSD & mechanical disc in RAID 1
  2010-02-01 21:10 ` David Rees
@ 2010-02-02 14:07   ` Aryeh Gregor
  0 siblings, 0 replies; 8+ messages in thread
From: Aryeh Gregor @ 2010-02-02 14:07 UTC (permalink / raw)
  To: David Rees; +Cc: Wil Reichert, linux raid

On Mon, Feb 1, 2010 at 4:10 PM, David Rees <drees76@gmail.com> wrote:
> I suspect that if you used write-behind with a write-intent bitmap
> stored only on the SSD, you'd get some of that performance back, but
> potentially lose a bit of reliability.

You'd only lose data here if the SSD died and the machine crashed *at
the same time*, right? Doesn't seem like a big deal to me -- if the
events are even a minute or two apart, you'd be fine (right?). By
contrast, if I understand correctly, you can get data corruption in
RAID5 if you lose a disk and then the machine crashes *any* time
before a new disk is resynced.

^ permalink raw reply	[flat|nested] 8+ messages in thread
end of thread, other threads:[~2010-02-02 14:07 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-01-09 17:53 SSD & mechanical disc in RAID 1 Wil Reichert
2010-01-09 19:23 ` Keld Jørn Simonsen
2010-01-22 16:37   ` Goswin von Brederlow
2010-01-31 20:21     ` Aryeh Gregor
2010-02-01 20:56       ` Wil Reichert
2010-02-01 20:06     ` Bill Davidsen
2010-02-01 21:10 ` David Rees
2010-02-02 14:07   ` Aryeh Gregor