linux-raid.vger.kernel.org archive mirror
* Multiple SSDs - RAID-1, -10, or stacked? TRIM?
@ 2013-10-09 12:31 Andy Smith
  2013-10-09 13:00 ` Roberto Spadim
                   ` (4 more replies)
  0 siblings, 5 replies; 20+ messages in thread
From: Andy Smith @ 2013-10-09 12:31 UTC (permalink / raw)
  To: linux-raid

Hello,

Due to increasing load of random read IOPS I am considering using 8
SSDs and md in my next server, instead of 8 SATA HDDs with
battery-backed hardware RAID. I am thinking of using Crucial m500s.

Are there any gotchas to be aware of? I haven't much experience with
SSDs.

If these were normal HDDs then (aside from small partitions for
/boot) I'd just RAID-10 for the main bulk of the storage. Is there
any reason not to do that with SSDs currently?

I think I read somewhere that offline TRIM is only supported by md
for RAID-1, is that correct? If so, should I be finding a way to use
four pairs of RAID-1s, or does it not matter?

Any insights appreciated.

Cheers,
Andy


* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 12:31 Multiple SSDs - RAID-1, -10, or stacked? TRIM? Andy Smith
@ 2013-10-09 13:00 ` Roberto Spadim
  2013-10-09 13:27 ` David Brown
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 20+ messages in thread
From: Roberto Spadim @ 2013-10-09 13:00 UTC (permalink / raw)
  To: Linux-RAID

Well, some time ago I asked about bcache with SSD and RAID devices.
Maybe, but only maybe, bcache with a two-SSD raid1(10) plus an HDD
raid1(10) could help you.

In my workload (50% read/write) raid1 is faster than raid10, because I
have more disks available to read different tables / threads. Maybe
just changing the raid layout could help too.
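
If you want to try the bcache idea, a rough sketch (treat the device
names as placeholders; this assumes bcache-tools is installed, and my
memory of the tool may be off in details):

  # make the SSD array a cache device and the HDD array a backing device;
  # doing both in one command attaches them to each other automatically
  make-bcache -C /dev/md0 -B /dev/md1
  # once udev has registered them, the cached block device appears as
  # /dev/bcache0 - put your filesystem on that
  mkfs.xfs /dev/bcache0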

2013/10/9 Andy Smith <andy@strugglers.net>:
> Hello,
>
> Due to increasing load of random read IOPS I am considering using 8
> SSDs and md in my next server, instead of 8 SATA HDDs with
> battery-backed hardware RAID. I am thinking of using Crucial m500s.
>
> Are there any gotchas to be aware of? I haven't much experience with
> SSDs.
>
> If these were normal HDDs then (aside from small partitions for
> /boot) I'd just RAID-10 for the main bulk of the storage. Is there
> any reason not to do that with SSDs currently?
>
> I think I read somewhere that offline TRIM is only supported by md
> for RAID-1, is that correct? If so, should I be finding a way to use
> four pairs of RAID-1s, or does it not matter?
>
> Any insights appreciated.
>
> Cheers,
> Andy



-- 
Roberto Spadim


* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 12:31 Multiple SSDs - RAID-1, -10, or stacked? TRIM? Andy Smith
  2013-10-09 13:00 ` Roberto Spadim
@ 2013-10-09 13:27 ` David Brown
  2013-10-09 13:52   ` Roberto Spadim
  2013-10-09 14:46 ` Ian Pilcher
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 20+ messages in thread
From: David Brown @ 2013-10-09 13:27 UTC (permalink / raw)
  To: linux-raid

On 09/10/13 14:31, Andy Smith wrote:
> Hello,
> 
> Due to increasing load of random read IOPS I am considering using 8
> SSDs and md in my next server, instead of 8 SATA HDDs with
> battery-backed hardware RAID. I am thinking of using Crucial m500s.
> 
> Are there any gotchas to be aware of? I haven't much experience with
> SSDs.
> 
> If these were normal HDDs then (aside from small partitions for
> /boot) I'd just RAID-10 for the main bulk of the storage. Is there
> any reason not to do that with SSDs currently?
> 
> I think I read somewhere that offline TRIM is only supported by md
> for RAID-1, is that correct? If so, should I be finding a way to use
> four pairs of RAID-1s, or does it not matter?
> 
> Any insights appreciated.
> 
> Cheers,
> Andy

For two hard disks, raid10 (with either f2 or o2 layout - n2 is almost
identical to normal raid1) can be a lot faster than raid1, because you
get the striping for big data (especially large reads), you get the
faster read throughput because your data is on the fast outer edge of
the disk, and your read latency is better because your head movement is
smaller.  Your writes are a bit slower because they are scattered about
the disk.

But for two SSDs, raid10 (f2, o2) has far fewer benefits because you
have no head movement - it is only large reads that can be faster.  If
you need IOPS - and presumably multiple parallel accesses - this is no
help.  raid10 has extra complexity and thus extra latency (which will
not be noticeable with HDs, but might be with SSDs), and limitations on
resizing and reshaping.
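
(For reference - an untested sketch with placeholder device names - the
layouts above are just mdadm options:

  # two-disk raid10, "far 2" layout
  mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1
  # "offset 2" layout
  mdadm --create /dev/md0 --level=10 --layout=o2 --raid-devices=2 /dev/sda1 /dev/sdb1
  # plain raid1 for comparison
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
)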

Extrapolating to 8 disks, I think therefore 4 sets of raid1 pairs are
likely to be faster.  As to what you should do with these sets, that
depends on the application.  XFS over a linear join might be your best
bet - raid0 will work but you probably want a large chunk size because
you want to avoid striped reads and writes in order to get high IOPs.

Don't worry too much about TRIM if your SSDs are decent and you have
plenty of overprovisioning, but offline TRIM is worth doing when
supported.  (Never use online TRIM.)
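
(Offline/batch TRIM is just a periodic fstrim run against the mounted
filesystem - roughly like this, with a placeholder mount point:

  # trim free space once, verbosely
  fstrim -v /srv/data
  # or as a weekly cron entry, e.g. Sundays at 03:00
  0 3 * * 0  /sbin/fstrim /srv/data
)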




* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 13:27 ` David Brown
@ 2013-10-09 13:52   ` Roberto Spadim
  0 siblings, 0 replies; 20+ messages in thread
From: Roberto Spadim @ 2013-10-09 13:52 UTC (permalink / raw)
  To: David Brown; +Cc: Linux-RAID

Just one remark: in the other thread, "decent drive" meant enterprise
level.  A decent home-level drive is the Samsung 840 Pro (if I'm not
wrong).  I used an OCZ Vertex 2 without problems; I don't know if it's
really a good drive, but it worked in my workload...

2013/10/9 David Brown <david.brown@hesbynett.no>:
> On 09/10/13 14:31, Andy Smith wrote:
>> Hello,
>>
>> Due to increasing load of random read IOPS I am considering using 8
>> SSDs and md in my next server, instead of 8 SATA HDDs with
>> battery-backed hardware RAID. I am thinking of using Crucial m500s.
>>
>> Are there any gotchas to be aware of? I haven't much experience with
>> SSDs.
>>
>> If these were normal HDDs then (aside from small partitions for
>> /boot) I'd just RAID-10 for the main bulk of the storage. Is there
>> any reason not to do that with SSDs currently?
>>
>> I think I read somewhere that offline TRIM is only supported by md
>> for RAID-1, is that correct? If so, should I be finding a way to use
>> four pairs of RAID-1s, or does it not matter?
>>
>> Any insights appreciated.
>>
>> Cheers,
>> Andy
>
> For two hard disks, raid10 (with either f2 or o2 layout - n2 is almost
> identical to normal raid1) can be a lot faster than raid1, because you
> get the striping for big data (especially large reads), you get the
> faster read throughput because your data is on the fast outer edge of
> the disk, and your read latency is better because your head movement is
> smaller.  Your writes are a bit slower because they are scattered about
> the disk.
>
> But for two SSDs, raid10 (f2, o2) has far fewer benefits because you
> have no head movement - it is only large reads that can be faster.  If
> you need IOPS - and presumably multiple parallel accesses - this is no
> help.  raid10 has extra complexity and thus extra latency (which will
> not be noticeable with HDs, but might be with SSDs), and limitations on
> resizing and reshaping.
>
> Extrapolating to 8 disks, I think therefore 4 sets of raid1 pairs are
> likely to be faster.  As to what you should do with these sets, that
> depends on the application.  XFS over a linear join might be your best
> bet - raid0 will work but you probably want a large chunk size because
> you want to avoid striped reads and writes in order to get high IOPs.
>
> Don't worry too much about TRIM if your SSDs are decent and you have
> plenty of overprovisioning, but offline TRIM is worth doing when
> supported.  (Never use online TRIM.)
>
>



-- 
Roberto Spadim


* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 12:31 Multiple SSDs - RAID-1, -10, or stacked? TRIM? Andy Smith
  2013-10-09 13:00 ` Roberto Spadim
  2013-10-09 13:27 ` David Brown
@ 2013-10-09 14:46 ` Ian Pilcher
  2013-10-09 16:21   ` David Brown
  2013-10-10  9:15 ` Stan Hoeppner
  2013-10-11  8:42 ` David Brown
  4 siblings, 1 reply; 20+ messages in thread
From: Ian Pilcher @ 2013-10-09 14:46 UTC (permalink / raw)
  To: linux-raid

On 10/09/2013 07:31 AM, Andy Smith wrote:
> Any insights appreciated.

Consider the probable failure mode of an SSD.  An SSD is more likely
than an HDD to (1) die with absolutely no warning and (2) die due to
the pattern of data written to it over its lifetime (and the way those
writes interact with the SSD's controller/firmware).

#2 in particular means that there is potentially a much higher
correlation between the failures of SSDs in an array than there is of
HDDs.  (And #1 means that the consequences will be more catastrophic.)

I would recommend using SSDs with two different types of controllers.

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
Sometimes there's nothing left to do but crash and burn...or die trying.
========================================================================



* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 14:46 ` Ian Pilcher
@ 2013-10-09 16:21   ` David Brown
  2013-10-09 17:33     ` Ian Pilcher
                       ` (2 more replies)
  0 siblings, 3 replies; 20+ messages in thread
From: David Brown @ 2013-10-09 16:21 UTC (permalink / raw)
  To: Ian Pilcher; +Cc: linux-raid

On 09/10/13 16:46, Ian Pilcher wrote:
> On 10/09/2013 07:31 AM, Andy Smith wrote:
>> Any insights appreciated.
> 
> Consider the probable failure mode of an SSD.  An SSD is more likely
> than an HDD to (1) die with absolutely no warning and (2) die due to
> the pattern of data written to it over its lifetime (and the way those
> writes interact with the SSD's controller/firmware).
> 
> #2 in particular means that there is potentially a much higher
> correlation between the failures of SSDs in an array than there is of
> HDDs.  (And #1 means that the consequences will be more catastrophic.)
> 
> I would recommend using SSDs with two different types of controllers.
> 

Do you have any references for these claims?

I would believe that /if/ an SSD was going to die, it is likely to do so
without warning - it is likely to be the controller that has died.  But
I can think of no reason why the controller on an SSD is more likely to
die than the controller on an HD - and HD's have so many more ways to
die (often slowly and noisily).

Modern SSDs are not going to suffer from wear in any realistic
environment.  You have to be intentionally abusive - a decent SSD will
be fine with /years/ of continuous high-speed writes.  Even then, you
will get write failures long before you have read failures.

That leaves firmware bugs as a possible explanation for such worries -
and that also applies to HD's.


But with that aside, having different manufacturers and models for the
two halves of a raid1 pair is not a bad idea regardless of whether you
have SSD's or HD's - it avoids the risk of a double failure due to a bad
production batch.

David



* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 16:21   ` David Brown
@ 2013-10-09 17:33     ` Ian Pilcher
  2013-10-09 18:04       ` Roberto Spadim
  2013-10-09 19:08       ` David Brown
  2013-10-10  6:14     ` Mikael Abrahamsson
  2013-10-10 16:18     ` Art -kwaak- van Breemen
  2 siblings, 2 replies; 20+ messages in thread
From: Ian Pilcher @ 2013-10-09 17:33 UTC (permalink / raw)
  To: linux-raid

On 10/09/2013 11:21 AM, David Brown wrote:
> Do you have any references for these claims?

If you mean real data, then no.  I am simply reasoning (hopefully
rationally) from the nature of the different device types, along with
anecdotes from people that have experienced SSD failures.

> I would believe that /if/ an SSD was going to die, it is likely to do so
> without warning - it is likely to be the controller that has died.  But
> I can think of no reason why the controller on an SSD is more likely to
> die than the controller on an HD - and HD's have so many more ways to
> die (often slowly and noisily).

Agreed.  The interesting thing is that a bunch of less reliable devices
that die slowly and noisily may be collectively more reliable over the
long term than a bunch of more reliable devices that die completely at
the same time.  I.e. does the additional variability in failures created
by the mechanical components of HDDs reduce the correlation of the
failures enough to make the entire array more reliable over some period
of time?

(I would expect this effect to be most pronounced when populating an
array with similar SSDs/HDDs.)

> That leaves firmware bugs as a possible explanation for such worries -
> and that also applies to HD's.

Agreed, but see my reasoning above.

> But with that aside, having different manufacturers and models for the
> two halves of a raid1 pair is not a bad idea regardless of whether you
> have SSD's or HD's - it avoids the risk of a double failure due to a bad
> production batch.

Is it potentially even more important when using SSDs, though?  I
believe that the answer is yes.

I guess we just have to wait for Google to migrate their entire
infrastructure to SSDs and publish a new study ...

-- 
========================================================================
Ian Pilcher                                         arequipeno@gmail.com
Sometimes there's nothing left to do but crash and burn...or die trying.
========================================================================



* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 17:33     ` Ian Pilcher
@ 2013-10-09 18:04       ` Roberto Spadim
  2013-10-09 19:08       ` David Brown
  1 sibling, 0 replies; 20+ messages in thread
From: Roberto Spadim @ 2013-10-09 18:04 UTC (permalink / raw)
  Cc: Linux-RAID

> Is it potentially even more important when using SSDs, though?  I
> believe that the answer is yes.
>
> I guess we just have to wait for Google to migrate their entire
> infrastructure to SSDs and publish a new study ...


Maybe not... Facebook uses SSDs as a cache for HDDs; check flashcache -
https://github.com/facebook/flashcache/. It's a high write/read
workload; maybe SSD+HDD is better.

-- 
Roberto Spadim


* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 17:33     ` Ian Pilcher
  2013-10-09 18:04       ` Roberto Spadim
@ 2013-10-09 19:08       ` David Brown
  2013-10-09 20:35         ` SSD reliability; was: " Matt Garman
  1 sibling, 1 reply; 20+ messages in thread
From: David Brown @ 2013-10-09 19:08 UTC (permalink / raw)
  To: Ian Pilcher; +Cc: linux-raid

On 09/10/13 19:33, Ian Pilcher wrote:
> On 10/09/2013 11:21 AM, David Brown wrote:
>> Do you have any references for these claims?
> 
> If you mean real data, then no.  I am simply reasoning (hopefully
> rationally) from the nature of the different device types, along with
> anecdotes from people that have experienced SSD failures.

OK.  Anecdotal evidence is not to be ignored - you wouldn't have posted
if you didn't think it was realistic.  But it gets harder to credit for
each step away from personal experience.

> 
>> I would believe that /if/ an SSD was going to die, it is likely to do so
>> without warning - it is likely to be the controller that has died.  But
>> I can think of no reason why the controller on an SSD is more likely to
>> die than the controller on an HD - and HD's have so many more ways to
>> die (often slowly and noisily).
> 
> Agreed.  The interesting thing is that a bunch of less reliable devices
> that die slowly and noisily may be collectively more reliable over the
> long term than a bunch of more reliable devices that die completely at
> the same time.  I.e. does the additional variability in failures created
> by the mechanical components of HDDs reduce the correlation of the
> failures enough to make the entire array more reliable over some period
> of time?
> 
> (I would expect this effect to be most pronounced when populating an
> array with similar SSDs/HDDs.)

I agree with that, but it does not take into account the generally greater
reliability and expected lifetime of SSD's compared to HD's.  In some
cases, you might consider "slow and noisy death" as being almost as good
as "working fine" - if you have enough redundancy (raid6), hot spares,
and replacement drives on hand then a "slow and noisy dying" disk does
not noticeably reduce the reliability of the whole array.  Then it comes
down to the likelihood of sudden and complete failure of the disks.  I
have not seen evidence that SSD's are more prone to that than HD's.

<http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html>

This is two years old - we can expect that for serious SSD's, the
failure rate will have gone down since then as the technology matures
(we are not discussing the low-end, where price has higher priority than
reliability).  Hard disks are already a mature product - there has been
little change in the reliability figures in past years.

The only conclusion the study shows is that reliability figures from SSD
vendors are no more realistic than those from HD vendors, and that there
is not enough data to give concrete results.  My reading of the figures
is that SSD's look more reliable, but the trend can't be proven without
longer-term studies.

One thing that is clear from other studies is that SSDs can suffer
catastrophic failures with unexpected power faults, in a way that HD's
do not.  A good UPS, and perhaps redundant power supplies, is a good idea.



> 
>> That leaves firmware bugs as a possible explanation for such worries -
>> and that also applies to HD's.
> 
> Agreed, but see my reasoning above.
> 
>> But with that aside, having different manufacturers and models for the
>> two halves of a raid1 pair is not a bad idea regardless of whether you
>> have SSD's or HD's - it avoids the risk of a double failure due to a bad
>> production batch.
> 
> Is it potentially even more important when using SSDs, though?  I
> believe that the answer is yes.
> 

I think I agree there.

> I guess we just have to wait for Google to migrate their entire
> infrastructure to SSDs and publish a new study ...
> 

Even then it will be difficult to know for sure, as the SSD devices are
still changing too rapidly.  Wait another 5 years or so, then get Google
to migrate to SSDs and collect 5 years of data.  Then we will have
something we can rely on!



* SSD reliability; was: Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 19:08       ` David Brown
@ 2013-10-09 20:35         ` Matt Garman
  2013-10-09 21:17           ` David Brown
  2013-10-09 21:46           ` Brian Candler
  0 siblings, 2 replies; 20+ messages in thread
From: Matt Garman @ 2013-10-09 20:35 UTC (permalink / raw)
  To: linux-raid


Regarding the SSD reliability discussion: this probably means
nothing because sample size is so small, but anyway, FWIW: I've had
two SSDs suddenly just die.  That is, they become completely
invisible to the OS/BIOS.  This is for my personal/home
infrastructure, meaning the total number of SSDs I've had in my
hands is less than maybe a dozen or so.

The two drives that died were cheapie Kingston drives, and very
low-capacity at that.  (One was a 16 GB drive; Kingston sent me a 64
GB drive for my warranty replacement.  I think the other was maybe
32 GB, but I don't remember.)  I don't recall their exact vintage,
but they were old enough that their tiny capacity wasn't embarrassing
when purchased, but young enough to still be under warranty.

At any rate, I have different but related question: does anyone have
any thoughts with regards to using an SSD as a WORM (write-once,
read-many) drive?  For example, a big media collection in a home
server.

Ignoring the cost aspect, the nice thing about SSDs is their small
size and negligible power consumption (and therefore low heat
production).  As mentioned previously in this thread, SSD at least
removes the "mechanical" risks from a storage system.  So what
happens if you completely fill up an SSD, then never modify it after
that, i.e. mount it read-only?

I understand that the set bits in NAND memory slowly degrade over
time, so it's clearly not true WORM media.  But what kind of
timescale might one expect before bit rot becomes a problem?  And
what if one were to use some kind of parity scheme (raid5/6, zfs,
snapraid) to occasionally "refresh" the NAND memory?

FWIW, I also asked about this on the ServeTheHome forums[1].

In general, seems there's a market for true WORM storage (but at
SOHO prices of course!).  Something like mdisc[2], but in modern
mechanical-HDD capacities and prices.  :) 

[1] http://forums.servethehome.com/hard-drives-solid-state-drives/2453-ssd-worm.html

[2] http://www.mdisc.com/what-is-mdisc/




* Re: SSD reliability; was: Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 20:35         ` SSD reliability; was: " Matt Garman
@ 2013-10-09 21:17           ` David Brown
  2013-10-09 21:46           ` Brian Candler
  1 sibling, 0 replies; 20+ messages in thread
From: David Brown @ 2013-10-09 21:17 UTC (permalink / raw)
  To: Matt Garman; +Cc: linux-raid

On 09/10/13 22:35, Matt Garman wrote:
> 
> Regarding the SSD reliability discussion: this probably means
> nothing because sample size is so small, but anyway, FWIW: I've had
> two SSDs suddenly just die.  That is, they become completely
> invisible to the OS/BIOS.  This is for my personal/home
> infrastructure, meaning the total number of SSDs I've had in my
> hands is less than maybe a dozen or so.

That's the trouble with statistics on SSD's - /all/ the current sample
sizes are too small, or have run for too short a time.  Still, a single
failure is enough to remind us that SSD's are not infallible.

> 
> The two drives that died were cheapie Kingston drives, and very
> low-capacity at that.  (One was a 16 GB drive; Kingston sent me a 64
> GB drive for my warranty replacement.  I think the other was maybe
> 32 GB, but I don't remember.)  I don't recall their exact vintage,
> but they were old enough that their tiny capacity wasn't embarrassing
> when purchased, but young enough to still be under warranty.

Warranties are perhaps the best judge we have of SSD reliability.  Some
devices are now sold with 5 year warranties.  When you consider the low
margins in a competitive market, and the high costs of returns and
warranty replacements, the manufacturer is expecting a very low number
of failed drives within that 5 year period.  Until we have the 5 year
large-sample history to learn from, manufacturer's expectations are a
reasonable guide.

> 
> At any rate, I have different but related question: does anyone have
> any thoughts with regards to using an SSD as a WORM (write-once,
> read-many) drive?  For example, a big media collection in a home
> server.

Ignoring cost, an SSD will do a fine job whether you write to it many
times or just once.  (But since it may die at any time, don't forget a
backup copy!)

> 
> Ignoring the cost aspect, the nice thing about SSDs is their small
> size and negligible power consumption (and therefore low heat
> production).  As mentioned previously in this thread, SSD at least
> removes the "mechanical" risks from a storage system.  So what
> happens if you completely fill up an SSD, then never modify it after
> that, i.e. mount it read-only?

What are you expecting to happen?  You will be able to read from it at
high speed.

I can't imagine this will have a significant effect on its lifetime,
since SSD failures are not write-related (unless you bump into a
firmware bug, I suppose).  It has been a long time since SSD's could
wear out - even with a fairly small and cheap drive, you can write 100
GB a day for a decade without suffering wear effects.
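
(A rough back-of-the-envelope check, using assumed rather than
datasheet endurance figures:

  written:    100 GB/day * 365 * 10 years      ~= 365 TB
  endurance:  240 GB * ~3000 P/E cycles (MLC)  ~= 720 TB of raw NAND writes

so even with a write amplification factor of around 2, a decade of that
workload only just reaches the assumed endurance.)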

> 
> I understand that the set bits in NAND memory slowly degrade over
> time, so it's clearly not true WORM media.  But what kind of
> timescale might one expect before bit rot becomes a problem?  And
> what if one were to use some kind of parity scheme (raid5/6, zfs,
> snapraid) to occasionally "refresh" the NAND memory?

Most media degrades a little over time.  But I have never heard of flash
(NAND or NOR) actually /losing/ bits over time.  The cells get lower
margins as you repeatedly erase and re-write them, but as noted above
you will not see that on an SSD with any realistic usage.  It is normal
for the datasheets for the flash chips to have information about minimum
ages to retain data - but these figures are based on simulated ageing
(very high temperatures, very high or very low voltages, etc.),
specified for worst-case conditions (extremes of voltage ranges and
temperatures of typically 85 or 105 C), and given with wide margins.  I
think you can be fairly sure that anything you write to a NAND chip
today will be around for at least as long as you will.

The other electronics on the SSD is a different matter, of course.
Electrolytic capacitors dry out, oscillator crystals change frequency
from ageing, piezoelectric effects crack ceramic capacitors,
heating/cooling cycles stress chip pins, electromigration causes voids
and increased resistance on solder joints, etc.  (Anyone who thinks
SSD's have no moving parts has not studied semiconductors and material
sciences - lots of bits move and wear out, if you have a good enough
microscope.)

So your SSD will suddenly die one day, regardless of how much you write
to it.  "Refreshing" by forcing a raid re-build on the disk will not help.

> 
> FWIW, I also asked about this on the ServeTheHome forums[1].
> 
> In general, seems there's a market for true WORM storage (but at
> SOHO prices of course!).  Something like mdisc[2], but in modern
> mechanical-HDD capacities and prices.  :) 
> 
> [1] http://forums.servethehome.com/hard-drives-solid-state-drives/2453-ssd-worm.html
> 
> [2] http://www.mdisc.com/what-is-mdisc/
> 

I'm sure the m-disc will last longer than a normal DVD, but I'd take its
1000 year claims with a pinch of salt.  I'll be happy to be proved wrong
in 3013...

I agree that there is such a market for long-term digital storage.  At
the moment, it looks like cloud storage is the choice - then it is
somebody else's problem to rotate the data onto new hard disks as old
ones falter.  I've seen some articles about holographic storage in
quartz crystals, which should last a long time - but there is a way to
go before they reach HDD capacities and prices!



* Re: SSD reliability; was: Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 20:35         ` SSD reliability; was: " Matt Garman
  2013-10-09 21:17           ` David Brown
@ 2013-10-09 21:46           ` Brian Candler
  1 sibling, 0 replies; 20+ messages in thread
From: Brian Candler @ 2013-10-09 21:46 UTC (permalink / raw)
  To: Matt Garman; +Cc: linux-raid

On 09/10/2013 21:35, Matt Garman wrote:
> In general, seems there's a market for true WORM storage (but at
> SOHO prices of course!).
Certainly is.
http://www.theregister.co.uk/2013/08/13/facebook_calls_for_worst_flas_possible/



* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 16:21   ` David Brown
  2013-10-09 17:33     ` Ian Pilcher
@ 2013-10-10  6:14     ` Mikael Abrahamsson
  2013-10-10 16:18     ` Art -kwaak- van Breemen
  2 siblings, 0 replies; 20+ messages in thread
From: Mikael Abrahamsson @ 2013-10-10  6:14 UTC (permalink / raw)
  To: linux-raid

On Wed, 9 Oct 2013, David Brown wrote:

> I would believe that /if/ an SSD was going to die, it is likely to do so 
> without warning - it is likely to be the controller that has died.  But 
> I can think of no reason why the controller on an SSD is more likely to 
> die than the controller on an HD - and HD's have so many more ways to 
> die (often slowly and noisily).

I have had a lot higher failure rates on SSDs than HDDs, all of them 
sudden complete deaths. Those were all first-generation drives from Intel 
and OCZ, but still...

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 12:31 Multiple SSDs - RAID-1, -10, or stacked? TRIM? Andy Smith
                   ` (2 preceding siblings ...)
  2013-10-09 14:46 ` Ian Pilcher
@ 2013-10-10  9:15 ` Stan Hoeppner
  2013-10-10 20:37   ` Andy Smith
  2013-10-11  8:42 ` David Brown
  4 siblings, 1 reply; 20+ messages in thread
From: Stan Hoeppner @ 2013-10-10  9:15 UTC (permalink / raw)
  To: linux-raid

On 10/9/2013 7:31 AM, Andy Smith wrote:
> Hello,

Hello Andy.

> Due to increasing load of random read IOPS I am considering using 8
                            ^^^^^^^^^^^^^^^^

The data has to be written before it can be read.  Are you at all
concerned with write throughput, either random or sequential?  Please
read on.

> SSDs and md in my next server, instead of 8 SATA HDDs with
> battery-backed hardware RAID. I am thinking of using Crucial m500s.
> 
> Are there any gotchas to be aware of? I haven't much experience with
> SSDs.

Yes, there is one major gotcha WRT md/RAID and SSDs, which to this point
nobody has mentioned in this thread, possibly because it pertains to
writes, not reads.  Note my question posed to you up above.  Since I've
answered this question in detail at least a dozen times on this mailing
list, I'll simply refer you to one of my recent archived posts for the
details:

http://permalink.gmane.org/gmane.linux.raid/43984

> If these were normal HDDs then (aside from small partitions for
> /boot) I'd just RAID-10 for the main bulk of the storage. Is there
> any reason not to do that with SSDs currently?

The answer to this question lies behind the link above.

> I think I read somewhere that offline TRIM is only supported by md
> for RAID-1, is that correct? If so, should I be finding a way to use
> four pairs of RAID-1s, or does it not matter?

Yes, but not because of TRIM.  But of course, you already read that in
the gmane post above.  That thread is void of another option I've
written about many a time, which someone attempted to parrot earlier.

Layer an md linear array atop RAID1 pairs, and format it with XFS.  XFS
is unique among Linux filesystems in that it uses what are called
allocation groups.  Take a pie (XFS filesystem atop linear array of 4x
RAID1 SSD pairs) and cut 4 slices (AGs).  That's basically what XFS does
with the blocks of the underlying device.  Now create 4 directories.
Now write four 1GB files, each into one directory, simultaneously.  XFS
just wrote each 1GB file to a different SSD, all in parallel.  If each
SSD can write at 500MB/s, you just achieved 2GB/s throughput, -without-
using a striped array.  No other filesystem can achieve this kind of
throughput without a striped array underneath.  And yes, TRIM will work
with this setup, both DISCARD and batch fitrim.

Allocation groups enable fantastic parallelism in XFS with a linear
array over mirrors, and this setup is perfect for both random write and
read workloads.  But AGs on a linear array can also cause a bottleneck
if the user doesn't do a little planning of directory and data layout.
In the scenario above we have 4 allocation groups, AG0-AG3, each
occupying one SSD.  The first directory you create will be created in
AG0 (SSD0), the 2nd AG1 (SSD1), the 3rd AG2 (SSD2), and the 4th AG3
(SSD3).  The 5th directory will be created on AG0, as well as the 9th,
and so on.  So you should already see the potential problem here.  If
you put all of your files in a single directory, or in multiple
directories that all reside within the same AG, they will all end up on
only one of your 4 SSDs.  Or at least up to the point you run out of
free space, in which case XFS will "spill" new files into the next AG.

To be clear, the need for careful directory/file layout to achieve
parallel throughput pertains only to the linear concatenation storage
architecture described above.  If one is using XFS atop a striped array
then throughput, either sequential or parallel, is -not- limited by
file/dir placement across the AGs, as all AGs are striped across the disks.
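
As a rough sketch of the commands involved (device names, partition
numbers and sizes are placeholders - adapt to your own layout):

  # four RAID1 pairs of SSDs
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sde2 /dev/sdf2
  mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdg2 /dev/sdh2
  # linear concatenation of the four pairs
  mdadm --create /dev/md10 --level=linear --raid-devices=4 \
        /dev/md1 /dev/md2 /dev/md3 /dev/md4
  # one allocation group per RAID1 pair
  mkfs.xfs -d agcount=4 /dev/md10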

-- 
Stan



* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 16:21   ` David Brown
  2013-10-09 17:33     ` Ian Pilcher
  2013-10-10  6:14     ` Mikael Abrahamsson
@ 2013-10-10 16:18     ` Art -kwaak- van Breemen
  2 siblings, 0 replies; 20+ messages in thread
From: Art -kwaak- van Breemen @ 2013-10-10 16:18 UTC (permalink / raw)
  To: linux-raid

Hi,

On Wed, Oct 09, 2013 at 06:21:15PM +0200, David Brown wrote:
> Do you have any references for these claims?

A vertex 2 + intel M25 combo on a dell sas 6/i controller (I
know, sas controllers are bad, but you can't get blades with a
reliable ahci interface):
The vertex2 occasionally gets rejected by the controller.
This needs a power cycle on the ssd before it works again.
Doing a raid check doesn't reveal any data corruption on the
vertex 2.

So compatibility bugs are a reason.

> I would believe that /if/ an SSD was going to die, it is likely to do so
> without warning - it is likely to be the controller that has died.  But
> I can think of no reason why the controller on an SSD is more likely to
> die than the controller on an HD - and HD's have so many more ways to
> die (often slowly and noisily).
> 
> Modern SSDs are not going to suffer from wear in any realistic
> environment.  You have to be intentionally abusive - a decent SSD will
> be fine with /years/ of continuous high-speed writes.  Even then, you
> will get write failures long before you have read failures.
> 
> That leaves firmware bugs as a possible explanation for such worries -
> and that also applies to HD's.

We had those too, but usually they are not that destructive. The
number of problems with vertex 2 rejects is pretty high. The
intel SSD's were rejected once. And normal sas disk firmware
hangups were about 2 or so, fixed with a harddisk firmware
upgrade. In usage percentage, the sas disks are more stable than
the sata SSD's.

> But with that aside, having different manufacturers and models for the
> two halves of a raid1 pair is not a bad idea regardless of whether you
> have SSD's or HD's - it avoids the risk of a double failure due to a bad
> production batch.

Yes, completely correct: I had a bunch of WD-RE2 on my home nas,
with raid1 with an old kernel.
They all had media errors at the same time, and md-raid1 at that
time could only reject the whole disk :-(.

Anyway: my biggest fear with SSD is that the metadata gets
corrupted and so the blocks get shuffled. I've never seen it
happen on an SSD, only rejects of the drive and from hearsay a
total failure of the drive.


* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-10  9:15 ` Stan Hoeppner
@ 2013-10-10 20:37   ` Andy Smith
  2013-10-11  8:30     ` David Brown
  2013-10-11  9:37     ` Stan Hoeppner
  0 siblings, 2 replies; 20+ messages in thread
From: Andy Smith @ 2013-10-10 20:37 UTC (permalink / raw)
  To: linux-raid

Hi Stan,

(Thanks everyone else who's responded so far, too -- I'm paying
attention with interest)

On Thu, Oct 10, 2013 at 04:15:08AM -0500, Stan Hoeppner wrote:
> On 10/9/2013 7:31 AM, Andy Smith wrote:
> > Are there any gotchas to be aware of? I haven't much experience with
> > SSDs.
> 
> Yes, there is one major gotcha WRT md/RAID and SSDs, which to this point
> nobody has mentioned in this thread, possibly because it pertains to
> writes, not reads.  Note my question posed to you up above.  Since I've
> answered this question in detail at least a dozen times on this mailing
> list, I'll simply refer you to one of my recent archived posts for the
> details:
> 
> http://permalink.gmane.org/gmane.linux.raid/43984

When I first read that link I thought perhaps you were referring to
write performance dropping off a cliff due to SSD garbage collection
routines that kicked in, but then I read the rest of the thread and
I think maybe you were hinting at the single write thread issue you
talk about more in:

    http://www.spinics.net/lists/raid/msg44211.html

Is that the case?

> To be clear, the need for careful directory/file layout to achieve
> parallel throughput pertains only to the linear concatenation storage
> architecture described above.  If one is using XFS atop a striped array
> then throughput, either sequential or parallel, is -not- limited by
> file/dir placement across the AGs, as all AGs are striped across the disks.

So, in summary do you recommend the stacked RAID-0 on top of RAID-1
pairs instead of a RAID-10, where write performance may otherwise be
bottlenecked by md's single write thread?

Write ops are a fraction of the random reads and using RAID with a
battery-backed write cache solved that problem, but it does need to
scale linearly with whatever improvement we can get for the read
ops, so I would think it will still be something worth thinking
about, so thanks for pointing that out.

Thanks,
Andy


* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-10 20:37   ` Andy Smith
@ 2013-10-11  8:30     ` David Brown
  2013-10-11  9:37     ` Stan Hoeppner
  1 sibling, 0 replies; 20+ messages in thread
From: David Brown @ 2013-10-11  8:30 UTC (permalink / raw)
  To: linux-raid

On 10/10/13 22:37, Andy Smith wrote:
> Hi Stan,
> 
> (Thanks everyone else who's responded so far, too -- I'm paying
> attention with interest)
> 
> On Thu, Oct 10, 2013 at 04:15:08AM -0500, Stan Hoeppner wrote:
>> On 10/9/2013 7:31 AM, Andy Smith wrote:
>>> Are there any gotchas to be aware of? I haven't much experience with
>>> SSDs.
>>
>> Yes, there is one major gotcha WRT md/RAID and SSDs, which to this point
>> nobody has mentioned in this thread, possibly because it pertains to
>> writes, not reads.  Note my question posed to you up above.  Since I've
>> answered this question in detail at least a dozen times on this mailing
>> list, I'll simply refer you to one of my recent archived posts for the
>> details:
>>
>> http://permalink.gmane.org/gmane.linux.raid/43984
> 
> When I first read that link I thought perhaps you were referring to
> write performance dropping off a cliff due to SSD garbage collection
> routines that kicked in, but then I read the rest of the thread and
> I think maybe you were hinting at the single write thread issue you
> talk about more in:
> 
>     http://www.spinics.net/lists/raid/msg44211.html
> 
> Is that the case?
> 
>> To be clear, the need for careful directory/file layout to achieve
>> parallel throughput pertains only to the linear concatenation storage
>> architecture described above.  If one is using XFS atop a striped array
>> then throughput, either sequential or parallel, is -not- limited by
>> file/dir placement across the AGs, as all AGs are striped across the disks.
> 
> So, in summary do you recommend the stacked RAID-0 on top of RAID-1
> pairs instead of a RAID-10, where write performance may otherwise be
> bottlenecked by md's single write thread?

I'll try and save Stan the effort in replying to this.

No, he is /not/ recommending RAID-0 on top of RAID-1 pairs.  He is
recommending XFS on a linear concatenation of RAID-1 pairs.  There is a /huge/
difference here - what is best depends on your workload, but for any
workload for which 8 SSD's are better than 8 HD's, the XFS solution will
almost certainly be better.

At the bottom layer, you have raid-1 pairs.  These are simple, reliable,
and fast (being simple, there is little extra overhead to limit IOPs).
You can consider advice in other threads about mixing different SSD
types.  And being plain raid-1, you have plenty of flexibility - you can
add extra drives, re-size, etc., at any time.  So far, so good.

On top of that, you have two main choices.

If you want a simple system, you can make a RAID-0 stripe.  Then you can
partition as you want, and use whatever filesystems you need.  RAID-0
gives you excellent large-file performance - reads and writes are
striped across all disks.  But this also means that large reads cause
extra latency for other accesses.  If you are aiming for maximum
throughput on large reads, that's fine - if you are aiming for minimum
latency on lots of parallel accesses, it's much worse.  This can be
mitigated somewhat by having large chunk sizes on the RAID-0 (I'm saying
this from theory, not from experience - so take advice from others too,
and try benchmarking if you can).
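
(As a sketch, with placeholder devices, that is just a raid0 over the
pairs with a big chunk:

  mdadm --create /dev/md10 --level=0 --chunk=1024 --raid-devices=4 \
        /dev/md1 /dev/md2 /dev/md3 /dev/md4
)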

The second choice is to use a linear concatenation of the RAID-1 pairs.
 There is no striping - the parts are just attached logically after each
other.  For most file systems, this would not be efficient - the
filesystem would just use the first raid1 pair until it filled up, then
move to the next one, and so on.  But XFS is designed specifically for
such arrangements.  It splits the array into "allocation groups", which
are divided across the array.  Each directory on the disk is put into
one of the allocation groups.  This means that if you make four
directories, all accesses to one directory will go to one pair, while
all accesses to the other directories will go to other pairs.  If you
have a reasonable number of directories, and accesses are distributed
across these directories, then XFS on linear cat gives greater
parallelism and lower latencies than you can get in any other way.  A
disadvantage is that it only works with a single full XFS across the
whole array, though you can probably partition the raid1 pairs into a
small section (for /boot, emergency swap, /, or whatever you need) and a
main partition that is used for the XFS.  Another point with XFS is you
/really/ need an UPS, or you need to use barrier options that lower
performance (this applies to all filesystems, but I believe it is more
so with XFS).
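
(To make the barrier trade-off concrete, it is just a mount option.
Treat these fstab lines as a sketch with placeholder device and mount
point, and only drop barriers if the write caches are protected by a
UPS/BBU:

  /dev/md10  /srv/data  xfs  defaults,inode64            0  0
  /dev/md10  /srv/data  xfs  defaults,inode64,nobarrier  0  0
)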



> 
> Write ops are a fraction of the random reads and using RAID with a
> battery-backed write cache solved that problem, but it does need to
> scale linearly with whatever improvement we can get for the read
> ops, so I would think it will still be something worth thinking
> about, so thanks for pointing that out.
> 
> Thanks,
> Andy
> 



* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-09 12:31 Multiple SSDs - RAID-1, -10, or stacked? TRIM? Andy Smith
                   ` (3 preceding siblings ...)
  2013-10-10  9:15 ` Stan Hoeppner
@ 2013-10-11  8:42 ` David Brown
  2013-10-11 11:00   ` Art -kwaak- van Breemen
  4 siblings, 1 reply; 20+ messages in thread
From: David Brown @ 2013-10-11  8:42 UTC (permalink / raw)
  To: linux-raid

On 09/10/13 14:31, Andy Smith wrote:
> Hello,
> 
> Due to increasing load of random read IOPS I am considering using 8
> SSDs and md in my next server, instead of 8 SATA HDDs with
> battery-backed hardware RAID. I am thinking of using Crucial m500s.
> 
> Are there any gotchas to be aware of? I haven't much experience with
> SSDs.
> 
> If these were normal HDDs then (aside from small partitions for
> /boot) I'd just RAID-10 for the main bulk of the storage. Is there
> any reason not to do that with SSDs currently?
> 
> I think I read somewhere that offline TRIM is only supported by md
> for RAID-1, is that correct? If so, should I be finding a way to use
> four pairs of RAID-1s, or does it not matter?
> 
> Any insights appreciated.
> 
> Cheers,
> Andy


If you are worried about the reliability of the SSDs, then one
possibility is to throw a couple of HD's into the mix.  I've never done
this, but I think it should work.

Set up a couple of hard disks, capacity at least 8 times an SSD, in a
raid1 pair.  Partition the raid1 into 8 partitions of SSD size.

Then for each pair of SSD, make a triple mirror with the two SSD's, and
a "write-mostly" hard disk raid partition.  All writes will then get
an extra copy on the hard disk, while reads will always come from the
SSD's, and therefore be at the same low latency and high speed as
before, and will not use up the HD's bandwidth on reads.  And if both
your SSD's die, you have the extra copy on the hard disk.  It will run
like treacle, but it will run.

If you find that the harddisk pair is a bottleneck on write speeds, you
can add a write-intent bitmap to the raid1 sets and enable
"write-behind".  Then writes will be "completed" when the data is on
both SSD's - the writes to the harddisk are queued and will complete
when time allows.  This will greatly reduce the latency on writes, but
of course it means that if the SSD's die while there are pending HD
writes in the queue, the filesystem and applications will think data is
safe on the disk even though it is not yet there.  Only you can decide
the balance here.
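
A sketch of what one of those triples might look like (device names,
bitmap and write-behind settings are placeholders):

  # two SSDs plus a write-mostly partition of the HDD raid1,
  # with an internal bitmap so write-behind can be enabled
  mdadm --create /dev/md11 --level=1 --raid-devices=3 \
        --bitmap=internal --write-behind=256 \
        /dev/sda1 /dev/sdb1 --write-mostly /dev/md0p1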




* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-10 20:37   ` Andy Smith
  2013-10-11  8:30     ` David Brown
@ 2013-10-11  9:37     ` Stan Hoeppner
  1 sibling, 0 replies; 20+ messages in thread
From: Stan Hoeppner @ 2013-10-11  9:37 UTC (permalink / raw)
  To: linux-raid

On 10/10/2013 3:37 PM, Andy Smith wrote:
> Hi Stan,
> 
> (Thanks everyone else who's responded so far, too -- I'm paying
> attention with interest)
> 
> On Thu, Oct 10, 2013 at 04:15:08AM -0500, Stan Hoeppner wrote:
>> On 10/9/2013 7:31 AM, Andy Smith wrote:
>>> Are there any gotchas to be aware of? I haven't much experience with
>>> SSDs.
>>
>> Yes, there is one major gotcha WRT md/RAID and SSDs, which to this point
>> nobody has mentioned in this thread, possibly because it pertains to
>> writes, not reads.  Note my question posed to you up above.  Since I've
>> answered this question in detail at least a dozen times on this mailing
>> list, I'll simply refer you to one of my recent archived posts for the
>> details:
>>
>> http://permalink.gmane.org/gmane.linux.raid/43984
> 
> When I first read that link I thought perhaps you were referring to
> write performance dropping off a cliff due to SSD garbage collection
> routines that kicked in, 

I referenced that thread because it covers two possible causes of
bottlenecking of SSD performance, both with and without md/RAID.

> but then I read the rest of the thread and
> I think maybe you were hinting at the single write thread issue you
> talk about more in:
> 
>     http://www.spinics.net/lists/raid/msg44211.html
> 
> Is that the case?

Yes, but I wasn't hinting.  This is precisely why Shaohua Li has been
working on a set of patches to make md's write path threaded,
eliminating the single core bottleneck.  Once they're complete and in
distro kernels one should no longer need to create layered RAID levels
for max SSD throughput.
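
(For raid5/6 arrays on a new enough kernel the knob already exists as a
sysfs attribute - the name below assumes the 3.12-era interface:

  # let up to 4 worker threads handle stripe processing for md0
  echo 4 > /sys/block/md0/md/group_thread_cnt
)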

>> To be clear, the need for careful directory/file layout to achieve
>> parallel throughput pertains only to the linear concatenation storage
>> architecture described above.  If one is using XFS atop a striped array
>> then throughput, either sequential or parallel, is -not- limited by
>> file/dir placement across the AGs, as all AGs are striped across the disks.
> 
> So, in summary do you recommend the stacked RAID-0 on top of RAID-1

No.  Striping across SSDs increases write amplification on all the
drives and will wear the flash cells out more quickly than if not
striping.  I'd guess most people don't consider this when creating a
striped array of SSDs.  Then again, most people think EXT -is- the Linux
filesystem, and that the world is flat...

> pairs instead of a RAID-10, where write performance may otherwise be
> bottlenecked by md's single write thread?

Something to keep in mind when discussing the single thread write
bottleneck is that we're talking about maximizing the investment in SSD
throughput with "all" multi-core CPUs, mostly the lower clocked models.
 It's a relative discussion, not absolute.

If you dig around the archives you'll find a thread in which I helped
Adam Goryachev tune his md/RAID5 of 5x480GB Intel consumer SSDs, LSI
9211-8i, to achieve read/write of 2.5GB/s and 1.6 GB/s respectively.
That's 1.6GB/s with a single md write thread.  I don't recall which
LGA1155 CPU he was using.  IIRC it was 2.9-3.6GHz.

With a CPU with fast single core performance, multi-channel DRAM, and no
bottlenecks in the IO path (PCIe/system chipset connection), one
core/one write thread may be more than sufficient for most md/RAID
levels and workloads.

> Write ops are a fraction of the random reads and using RAID with a
> battery-backed write cache solved that problem, but it does need to
> scale linearly with whatever improvement we can get for the read
> ops, so I would think it will still be something worth thinking
> about, so thanks for pointing that out.

What is the target of the random read/write IOPS?  A single large file,
multiple small files, or a mix of the two?  Before I can give you any
real advice I need to know what the workload is actually doing.  Without
knowing the workload everything is guesswork.

In fact I should have asked this up front.  The workload drives
everything.  Always has, always will.

-- 
Stan



* Re: Multiple SSDs - RAID-1, -10, or stacked? TRIM?
  2013-10-11  8:42 ` David Brown
@ 2013-10-11 11:00   ` Art -kwaak- van Breemen
  0 siblings, 0 replies; 20+ messages in thread
From: Art -kwaak- van Breemen @ 2013-10-11 11:00 UTC (permalink / raw)
  To: linux-raid

Hi,

On Fri, Oct 11, 2013 at 10:42:07AM +0200, David Brown wrote:
> Then for each pair of SSD, make a triple mirror with the two SSD's, and
> a "write-mostly" hard disk raid partition.  All writes will then get
> an extra copy on the hard disk, while reads will always come from the
> SSD's, and therefore be at the same low latency and high speed as
> before, and will not use up the HD's bandwidth on reads.  And if both
> your SSD's die, you have the extra copy on the hard disk.  It will run
> like treacle, but it will run.

I've done that and found some bugs. Neil has already fixed that
btw :-).
Write-mostly with write-behind works mostly correctly; all I/O will
be lagging waiting for the disk to catch up, but reads are really
really fast. In our case we had a single hard disk paired with a
single SSD. The blade could not hold more and we were not
confident enough with SSD. Well, if the SSD breaks, so does your
server, but then because of the high load ;-). (Actually the server doesn't
break, but your application will).

Personally I would try a raid-1 with two different manufacturers'
SSDs (or some brands you have really tested under continuous high
load), bcache and then the old array.
And I mean try: as in have a redundant setup.


end of thread

Thread overview: 20+ messages
2013-10-09 12:31 Multiple SSDs - RAID-1, -10, or stacked? TRIM? Andy Smith
2013-10-09 13:00 ` Roberto Spadim
2013-10-09 13:27 ` David Brown
2013-10-09 13:52   ` Roberto Spadim
2013-10-09 14:46 ` Ian Pilcher
2013-10-09 16:21   ` David Brown
2013-10-09 17:33     ` Ian Pilcher
2013-10-09 18:04       ` Roberto Spadim
2013-10-09 19:08       ` David Brown
2013-10-09 20:35         ` SSD reliability; was: " Matt Garman
2013-10-09 21:17           ` David Brown
2013-10-09 21:46           ` Brian Candler
2013-10-10  6:14     ` Mikael Abrahamsson
2013-10-10 16:18     ` Art -kwaak- van Breemen
2013-10-10  9:15 ` Stan Hoeppner
2013-10-10 20:37   ` Andy Smith
2013-10-11  8:30     ` David Brown
2013-10-11  9:37     ` Stan Hoeppner
2013-10-11  8:42 ` David Brown
2013-10-11 11:00   ` Art -kwaak- van Breemen
