* Best configuration for bcache/md cache or other cache using ssd
@ 2013-09-17 19:20 Roberto Spadim
2013-09-18 13:59 ` Drew
0 siblings, 1 reply; 28+ messages in thread
From: Roberto Spadim @ 2013-09-17 19:20 UTC (permalink / raw)
To: Linux-RAID
Hi guys, I have a question...
I will buy a server, and my client wants a configuration with SSD + HDD.
I don't know yet which SSD will be used; if anyone has a suggestion,
it's welcome...
The HDD will be a 2TB SATA 7200rpm; the SSD will be ~160GB or less (it
must be cheap and enterprise level).
He wants RAID1 over the HDDs and a cache (bcache, or the dm-cache layer
of kernel 3.9) over this md device (RAID1).
I'm thinking of something like...
RAID1 over the SSDs + RAID1 over the HDDs,
then use the SSD RAID as a cache for the HDD RAID. Is this OK?
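For what it's worth, a minimal sketch of that layering with mdadm + bcache could look like the following (the device names /dev/sd[a-d] are placeholders, bcache-tools must be installed, and bcache needs kernel 3.10+):

```shell
# Assumed devices: sda/sdb = HDDs, sdc/sdd = SSDs -- adjust to taste.
# RAID1 over the HDDs (backing store) and RAID1 over the SSDs (cache).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Format the backing and cache devices for bcache (from bcache-tools).
make-bcache -B /dev/md0          # backing device (HDD RAID1)
make-bcache -C /dev/md1          # cache device (SSD RAID1)

# Attach the cache set to the backing device (UUID printed by the -C step).
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Then mkfs and mount /dev/bcache0 as the HOME partition.
mkfs.ext4 /dev/bcache0
```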
Well, here are my questions:
1) suggestions for an SSD device (the size doesn't need to be 160GB;
maybe a better device with more space exists that I don't know about...)
2) what configuration should I use to create the md device?
3) I will use this cache only for the HOME partition; the other
partitions will only have md RAID1 over the HDD drives.
thanks guys
--
Roberto Spadim
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-17 19:20 Best configuration for bcache/md cache or other cache using ssd Roberto Spadim
@ 2013-09-18 13:59 ` Drew
[not found] ` <CAH3kUhHin5PfjDCNFjD8eypNML=0YrkQp14DrCADc2StcODdaw@mail.gmail.com>
0 siblings, 1 reply; 28+ messages in thread
From: Drew @ 2013-09-18 13:59 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Linux-RAID
> HDD will be a SATA 7200rpm with 2TB, SSD will be ~ 160GB or less (must
> be cheap and enterprise level)
>
> he wants a RAID1 level over HDD and a cache (bcache/or md cache layer
> of kernel 3.9) over this md device (raid1)
What is the client's workload? What is he trying to do with this machine?
At first glance (without any information) the requirements don't make
sense. Placing an SSD-based cache in front of a pair of slow 7200RPM
SATA drives sounds like he's trying to get SSD performance at SATA
prices. It could work, but if he's hammering the disks enough to need
SSD performance, those SATA drives just won't keep up.
--
Drew
"Nothing in life is to be feared. It is only to be understood."
--Marie Curie
"This started out as a hobby and spun horribly out of control."
-Unknown
* Fwd: Best configuration for bcache/md cache or other cache using ssd
[not found] ` <CAH3kUhHin5PfjDCNFjD8eypNML=0YrkQp14DrCADc2StcODdaw@mail.gmail.com>
@ 2013-09-18 15:39 ` Drew
2013-09-18 16:00 ` Mark Knecht
2013-09-18 15:51 ` Fwd: " Roberto Spadim
1 sibling, 1 reply; 28+ messages in thread
From: Drew @ 2013-09-18 15:39 UTC (permalink / raw)
To: Linux RAID Mailing List
Forwarded as only I got the reply.
---------- Forwarded message ----------
From: Roberto Spadim <roberto@spadim.com.br>
Date: Wed, Sep 18, 2013 at 7:41 AM
Subject: Re: Best configuration for bcache/md cache or other cache using ssd
To: Drew <drew.kay@gmail.com>
Sorry guys, this time I don't have full knowledge of the workload, but
from what he told me, he wants fast writes with HDDs, and I could
check whether small SSD devices would help.
After installing Linux with RAID1 I will install Apache, MariaDB, and
PHP on this machine; in other words it's a database and web server
load, but I don't know yet what size the app and database will be.
Btw, an SSD with bcache or dm-cache could help HDD writes (this must
be enterprise level), right?
Any idea of the best method to test which kernel driver gives superior
performance? I'm thinking of installing bcache, then making a backup,
installing dm-cache, and checking which is better. Any other ideas?
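One hedged way to compare the two is to run the same fio job on top of each cache configuration and compare IOPS and latency (fio is assumed to be installed; /dev/bcache0 and the job parameters are placeholders, not tuned values):

```shell
# Small random writes with fsync after each write, roughly imitating a
# database log. Run once on the bcache device and once on the dm-cache
# device, then compare the reported IOPS/latency.
# WARNING: writing to the raw device is destructive -- run this before
# creating the filesystem.
fio --name=randwrite --filename=/dev/bcache0 \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --fsync=1 --runtime=120 --time_based \
    --size=10G --group_reporting
```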
On 18/09/2013 11:00, "Drew" <drew.kay@gmail.com> wrote:
> > HDD will be a SATA 7200rpm with 2TB, SSD will be ~ 160GB or less (must
> > be cheap and enterprise level)
> >
> > he wants a RAID1 level over HDD and a cache (bcache/or md cache layer
> > of kernel 3.9) over this md device (raid1)
>
> What is the client's workload? What is he trying to do with this machine?
>
> At first glance (without any information) the requirements don't make
> sense. Placing a SSD based cache in front of a pair of slow 7200RPM
> SATA's sounds like he's trying to get SSD performance at SATA prices.
> It could work but if he's hammering the disks enough to need SSD
> performance, those SATA's just won't keep up.
>
>
> --
> Drew
>
> "Nothing in life is to be feared. It is only to be understood."
> --Marie Curie
>
> "This started out as a hobby and spun horribly out of control."
> -Unknown
--
Drew
"Nothing in life is to be feared. It is only to be understood."
--Marie Curie
"This started out as a hobby and spun horribly out of control."
-Unknown
* Fwd: Best configuration for bcache/md cache or other cache using ssd
[not found] ` <CAH3kUhHin5PfjDCNFjD8eypNML=0YrkQp14DrCADc2StcODdaw@mail.gmail.com>
2013-09-18 15:39 ` Fwd: " Drew
@ 2013-09-18 15:51 ` Roberto Spadim
2013-09-18 16:07 ` Tommy Apel
2013-09-18 17:15 ` Drew
1 sibling, 2 replies; 28+ messages in thread
From: Roberto Spadim @ 2013-09-18 15:51 UTC (permalink / raw)
To: Linux-RAID
Sorry guys, this time I don't have full knowledge of the workload, but
from what he told me, he wants fast writes with HDDs, and I could
check whether small SSD devices would help.
After installing Linux with RAID1 I will install Apache, MariaDB, and
PHP on this machine; in other words it's a database and web server
load, but I don't know yet what size the app and database will be.
Btw, an SSD with bcache or dm-cache could help HDD writes (this must
be enterprise level), right?
Any idea of the best method to test which kernel driver gives superior
performance? I'm thinking of installing bcache, then making a backup,
installing dm-cache, and checking which is better. Any other ideas?
On 18/09/2013 11:00, "Drew" <drew.kay@gmail.com> wrote:
> > HDD will be a SATA 7200rpm with 2TB, SSD will be ~ 160GB or less (must
> > be cheap and enterprise level)
> >
> > he wants a RAID1 level over HDD and a cache (bcache/or md cache layer
> > of kernel 3.9) over this md device (raid1)
>
> What is the client's workload? What is he trying to do with this machine?
>
> At first glance (without any information) the requirements don't make
> sense. Placing a SSD based cache in front of a pair of slow 7200RPM
> SATA's sounds like he's trying to get SSD performance at SATA prices.
> It could work but if he's hammering the disks enough to need SSD
> performance, those SATA's just won't keep up.
>
>
> --
> Drew
>
> "Nothing in life is to be feared. It is only to be understood."
> --Marie Curie
>
> "This started out as a hobby and spun horribly out of control."
> -Unknown
--
Roberto Spadim
SPAEmpresarial
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-18 15:39 ` Fwd: " Drew
@ 2013-09-18 16:00 ` Mark Knecht
0 siblings, 0 replies; 28+ messages in thread
From: Mark Knecht @ 2013-09-18 16:00 UTC (permalink / raw)
To: Drew; +Cc: Linux RAID Mailing List
On Wed, Sep 18, 2013 at 8:39 AM, Drew <drew.kay@gmail.com> wrote:
> Forwarded as only I got the reply.
>
> ---------- Forwarded message ----------
> From: Roberto Spadim <roberto@spadim.com.br>
> Date: Wed, Sep 18, 2013 at 7:41 AM
> Subject: Re: Best configuration for bcache/md cache or other cache using ssd
> To: Drew <drew.kay@gmail.com>
>
>
> Sorry guys, this time i don't have a full knowledge about the
> workload, but from what he told me, he want fast writes with hdd but i
> could check if small ssd devices could help
> After install linux with raid1 i will install apache mariadb and php
> at this machine, in other words it's a database and web server load,
> but i don't know what size of app and database will run yet
>
> Btw, ssd with bcache or dm cache could help hdd (this must be
> enterprise level) writes, right?
> Any idea what the best method to test what kernel drive could give
> superior performace? I'm thinking about install the bcache, and after
> make a backup and install dm cache and check what's better, any other
> idea?
>
> On 18/09/2013 11:00, "Drew" <drew.kay@gmail.com> wrote:
>
>> > HDD will be a SATA 7200rpm with 2TB, SSD will be ~ 160GB or less (must
>> > be cheap and enterprise level)
>> >
>> > he wants a RAID1 level over HDD and a cache (bcache/or md cache layer
>> > of kernel 3.9) over this md device (raid1)
>>
>> What is the client's workload? What is he trying to do with this machine?
>>
>> At first glance (without any information) the requirements don't make
>> sense. Placing a SSD based cache in front of a pair of slow 7200RPM
>> SATA's sounds like he's trying to get SSD performance at SATA prices.
>> It could work but if he's hammering the disks enough to need SSD
>> performance, those SATA's just won't keep up.
As the OP doesn't have a workload, I'll throw one out, as I've been
considering doing something similar for a few months.
I work at home on a Gentoo Linux box (i980x, 24GB, five 500GB RE3
drives configured as a 1.4TB RAID6). Mostly I work in Windows writing
software for trading, and then also trading. When trading I have a
moderate amount of real-time data arriving and being processed by 3
Windows VMs: 2 VMs are VirtualBox Win7 machines, the 3rd is a VMware
XP machine. RAID6 was chosen because I have a lot of DVD video stored
here as a backup and wanted extra physical security. I sometimes play
video on the machine, but mostly 1TB of the RAID is just storage. The
VMs account for maybe 200GB of disk space.
My 'issue' with the machine is that there are a lot of lags in (I
think) the RAID6 subsystem when the machine gets loaded up with a lot
of things going on in the VMs. It gets a _lot_ worse if, say, I'm
ripping a DVD in Gentoo.
I was considering using dm-cache with either a single SSD or a two-SSD
RAID1. Will it help? My worry, from looking in the dm-cache forums, is
the number of people complaining that using dm-cache led to their disk
subsystem being corrupted.
This is likely not enough info for a quantitative answer. I'm willing
to collect some data if pointed toward tools that don't require a PhD
to run them.
Cheers,
Mark
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-18 15:51 ` Fwd: " Roberto Spadim
@ 2013-09-18 16:07 ` Tommy Apel
[not found] ` <CAH3kUhEWUe=20ovmd5BT3kzmYn25YS3Np5R3jPiJDBEAhAOb_A@mail.gmail.com>
2013-09-18 17:15 ` Drew
1 sibling, 1 reply; 28+ messages in thread
From: Tommy Apel @ 2013-09-18 16:07 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Linux-RAID
dm-cache is AFAIK still experimental. The size of the SSD won't matter
too much in this case, as you're only serving a 2TB backend and you'll
be able to use sequential writes to the backing store at 150MB/s or
so. Once the writes are completed on the SSD they will be coalesced
and flushed to permanent storage at full sequential speed most of the
time, plus you'll have the Linux fs buffers as well, so I'm not really
too sure how much the SSD will actually contribute to the setup in
total. SSD caching will in most cases only benefit you if you have a
lot of small random write IO.
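That suggests profiling the workload first; a rough way to see whether it is sequential- or random-write bound is to run both patterns with fio on the existing disks (fio assumed installed; the directory and sizes are placeholders):

```shell
# Sequential writes: the HDDs alone may already sustain ~150MB/s here,
# in which case the SSD cache buys little.
fio --name=seq  --directory=/home --rw=write     --bs=1M --size=2G --direct=1
# Small random writes: this is the pattern where an SSD cache helps.
fio --name=rand --directory=/home --rw=randwrite --bs=4k --size=2G --direct=1
```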
/Tommy
2013/9/18 Roberto Spadim <roberto@spadim.com.br>:
> Sorry guys, this time i don't have a full knowledge about the
> workload, but from what he told me, he want fast writes with hdd but i
> could check if small ssd devices could help
> After install linux with raid1 i will install apache mariadb and php
> at this machine, in other words it's a database and web server load,
> but i don't know what size of app and database will run yet
>
> Btw, ssd with bcache or dm cache could help hdd (this must be
> enterprise level) writes, right?
> Any idea what the best method to test what kernel drive could give
> superior performace? I'm thinking about install the bcache, and after
> make a backup and install dm cache and check what's better, any other
> idea?
>
> On 18/09/2013 11:00, "Drew" <drew.kay@gmail.com> wrote:
>
>> > HDD will be a SATA 7200rpm with 2TB, SSD will be ~ 160GB or less (must
>> > be cheap and enterprise level)
>> >
>> > he wants a RAID1 level over HDD and a cache (bcache/or md cache layer
>> > of kernel 3.9) over this md device (raid1)
>>
>> What is the client's workload? What is he trying to do with this machine?
>>
>> At first glance (without any information) the requirements don't make
>> sense. Placing a SSD based cache in front of a pair of slow 7200RPM
>> SATA's sounds like he's trying to get SSD performance at SATA prices.
>> It could work but if he's hammering the disks enough to need SSD
>> performance, those SATA's just won't keep up.
>>
>>
>> --
>> Drew
>>
>> "Nothing in life is to be feared. It is only to be understood."
>> --Marie Curie
>>
>> "This started out as a hobby and spun horribly out of control."
>> -Unknown
>
>
> --
> Roberto Spadim
> SPAEmpresarial
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: Best configuration for bcache/md cache or other cache using ssd
[not found] ` <CAH3kUhEWUe=20ovmd5BT3kzmYn25YS3Np5R3jPiJDBEAhAOb_A@mail.gmail.com>
@ 2013-09-18 16:27 ` Tommy Apel
0 siblings, 0 replies; 28+ messages in thread
From: Tommy Apel @ 2013-09-18 16:27 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Linux-RAID
bcache should be the safe bet here, if you ask me.
Neither of them is discussed on this list, though; this list is for md RAID.
Databases do both, but they rely on the filesystem confirming the
write before they move on, so your VM cache won't help here, but the
SSD will, as the cache will confirm the write and make the database
believe the data was in fact written to permanent storage.
Another thing you might want to look at as well is recovery from power
failure... may I advise you to get a UPS for this setup.
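The write-confirmation effect described above is easy to observe directly; a rough illustration (timings depend entirely on the hardware under the filesystem):

```shell
TMP=$(mktemp)
# Buffered writes: acked by the page cache, fast on any disk.
dd if=/dev/zero of="$TMP" bs=4k count=1000
# Synchronous writes: each 4k write waits for the storage to ack --
# this is the path an SSD cache (or a BBWC controller) speeds up.
dd if=/dev/zero of="$TMP" bs=4k count=1000 oflag=dsync
rm "$TMP"
```

Comparing the two throughput figures dd prints shows how much the synchronous ack dominates small-write performance.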
/Tommy
2013/9/18 Roberto Spadim <roberto@spadim.com.br>:
> Hmm, I didn't check the development status of dm-cache and bcache. Which is
> more production safe, bcache or dm-cache?
> dm-cache should be discussed in linux-raid, right? What about bcache?
>
> Yes, I'm thinking about an SSD cache for small random writes, but I don't
> know whether a database does small random writes or only small sequential
> writes; I should consider this, right? I read some docs saying that only
> random accesses are cached, and sequential ones are 'lost' and read from
> the primary disk (HDD in my case)
>
>
> 2013/9/18 Tommy Apel <tommyapeldk@gmail.com>
>>
>> dm cache is AFAIK still Experimental, the size of the ssd won't matter
>> too much in this case as you're only severing a 2TB backend and you'll
>> be able to use sequential writes to the backing store at 150MB/s or
>> so, once the writes are completed on the ssd they will be strained
>> together and flushed to permanent storage at full sequential speed for
>> the most time plus you'll have linux fs buffers as well so I'm not
>> really too sure how much the ssd will actually contribute to the setup
>> in total. The ssd caching will in most cases only benefit you if you
>> have a lot of small random write IO.
>>
>> /Tommy
>>
>>
>> 2013/9/18 Roberto Spadim <roberto@spadim.com.br>:
>> > Sorry guys, this time i don't have a full knowledge about the
>> > workload, but from what he told me, he want fast writes with hdd but i
>> > could check if small ssd devices could help
>> > After install linux with raid1 i will install apache mariadb and php
>> > at this machine, in other words it's a database and web server load,
>> > but i don't know what size of app and database will run yet
>> >
>> > Btw, ssd with bcache or dm cache could help hdd (this must be
>> > enterprise level) writes, right?
>> > Any idea what the best method to test what kernel drive could give
>> > superior performace? I'm thinking about install the bcache, and after
>> > make a backup and install dm cache and check what's better, any other
>> > idea?
>> >
>> > On 18/09/2013 11:00, "Drew" <drew.kay@gmail.com> wrote:
>> >
>> >> > HDD will be a SATA 7200rpm with 2TB, SSD will be ~ 160GB or less
>> >> > (must
>> >> > be cheap and enterprise level)
>> >> >
>> >> > he wants a RAID1 level over HDD and a cache (bcache/or md cache layer
>> >> > of kernel 3.9) over this md device (raid1)
>> >>
>> >> What is the client's workload? What is he trying to do with this
>> >> machine?
>> >>
>> >> At first glance (without any information) the requirements don't make
>> >> sense. Placing a SSD based cache in front of a pair of slow 7200RPM
>> >> SATA's sounds like he's trying to get SSD performance at SATA prices.
>> >> It could work but if he's hammering the disks enough to need SSD
>> >> performance, those SATA's just won't keep up.
>> >>
>> >>
>> >> --
>> >> Drew
>> >>
>> >> "Nothing in life is to be feared. It is only to be understood."
>> >> --Marie Curie
>> >>
>> >> "This started out as a hobby and spun horribly out of control."
>> >> -Unknown
>> >
>> >
>> > --
>> > Roberto Spadim
>> > SPAEmpresarial
>
>
>
>
> --
> Roberto Spadim
> SPAEmpresarial
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-18 15:51 ` Fwd: " Roberto Spadim
2013-09-18 16:07 ` Tommy Apel
@ 2013-09-18 17:15 ` Drew
2013-09-18 17:33 ` Roberto Spadim
1 sibling, 1 reply; 28+ messages in thread
From: Drew @ 2013-09-18 17:15 UTC (permalink / raw)
To: Linux-RAID
On Wed, Sep 18, 2013 at 8:51 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
> Sorry guys, this time i don't have a full knowledge about the
> workload, but from what he told me, he want fast writes with hdd but i
> could check if small ssd devices could help
> After install linux with raid1 i will install apache mariadb and php
> at this machine, in other words it's a database and web server load,
> but i don't know what size of app and database will run yet
>
> Btw, ssd with bcache or dm cache could help hdd (this must be
> enterprise level) writes, right?
> Any idea what the best method to test what kernel drive could give
> superior performace? I'm thinking about install the bcache, and after
> make a backup and install dm cache and check what's better, any other
> idea?
We still need to know what size datasets are going to be used. Also,
given that it's a webserver, how big of a pipe does he have?
Given a typical webserver in a colo w/ a 10Mbps pipe, I think the
suggested config is overkill. For a webserver, the 7200 SATAs should
be able to deliver enough data to keep Apache happy.
On the database side, it depends on how intensive the workload is. I
see a lot of webservers where the 7200s are just fine because the I/O
demands from the database are low. Blog/CMS systems like WordPress
will be harder on the database, but again it depends on how heavy the
access to the server is. How many visitors/hour does he expect to
serve?
--
Drew
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-18 17:15 ` Drew
@ 2013-09-18 17:33 ` Roberto Spadim
2013-09-19 2:26 ` Stan Hoeppner
0 siblings, 1 reply; 28+ messages in thread
From: Roberto Spadim @ 2013-09-18 17:33 UTC (permalink / raw)
To: Drew; +Cc: Linux-RAID
Well, the internet link here is 100Mbps, and I think the workload will
be a bit more than only 100 users; it's a second webserver + database
server.
He is trying to use a cheaper server with more disk performance;
Brazilian costs are too high to allow a full SSD system or 15k rpm SAS
hard disks.
For the MariaDB server I'm studying whether the thread-pool scheduler
will be used instead of one thread per connection, but "it's not my
problem"; the final user will select what is better for the database
scheduler.
In other words, I think the workload will not be a simple web server
cms/blog; I don't know yet how it will work, it's a black/gray box to
me. Today he has enterprise SATA 7200rpm HDDs in his servers (a Dell
R420 if I'm not wrong) and is studying whether an SSD could help;
that's my 'job' (hobby) in this task.
2013/9/18 Drew <drew.kay@gmail.com>:
> On Wed, Sep 18, 2013 at 8:51 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
>> Sorry guys, this time i don't have a full knowledge about the
>> workload, but from what he told me, he want fast writes with hdd but i
>> could check if small ssd devices could help
>> After install linux with raid1 i will install apache mariadb and php
>> at this machine, in other words it's a database and web server load,
>> but i don't know what size of app and database will run yet
>>
>> Btw, ssd with bcache or dm cache could help hdd (this must be
>> enterprise level) writes, right?
>> Any idea what the best method to test what kernel drive could give
>> superior performace? I'm thinking about install the bcache, and after
>> make a backup and install dm cache and check what's better, any other
>> idea?
>
> We still need to know what size datasets are going to be used. And
> also given it's a webserver, how big of a pipe does he have?
>
> Given a typical webserver in a colo w/ 10Mbps pipe, I think the
> suggested config is overkill. For a webserver the 7200 SATA's should
> be able to deliver enough data to keep apache happy.
>
> In the database side, depends on how intensive the workload is. I see
> a lot of webservers where the 7200's are just fine because the I/O
> demands from the database are low. Blog/CMS systems like wordpress
> will be harder on the database but again it depends on how heavy the
> access is to the server. How many visitors/hour does he expect to
> serve?
>
>
> --
> Drew
--
Roberto Spadim
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-18 17:33 ` Roberto Spadim
@ 2013-09-19 2:26 ` Stan Hoeppner
2013-09-19 3:42 ` Roberto Spadim
0 siblings, 1 reply; 28+ messages in thread
From: Stan Hoeppner @ 2013-09-19 2:26 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Drew, Linux-RAID
On 9/18/2013 12:33 PM, Roberto Spadim wrote:
> Well the internet link here is 100mbps, i think the workload will be a
> bit more than only 100 users, it's a second webserver+database server
> He is trying to use a cheaper server with more disk performace, Brazil
> costs are too high to allow a full ssd system or 15k rpm sas harddisks
> For mariadb server i'm studing if the thread-pool scheduler will be
> used instead of one thread per connection but "it's not my problem"
> the final user will select what is better for database scheduler
> In other words i think the work load will not be a simple web server
> cms/blog, i don't know yet how it will work, it's a black/gray box to
> me, today he have sata enterprise hdd 7200rpm at servers (dell server
> r420 if i'm not wrong) and is studing if a ssd could help, that's my
> 'job' (hobby) in this task
Based on the information provided it sounds like the machine is seek
bound. The simplest, and best, solution to this problem is simply
installing a [B|F]BWC RAID card w/512MB cache. Synchronous writes are
acked when committed to RAID cache instead of the platter. This will
yield ~130,000 burst write TPS before hitting the spindles, or
~130,000 writes in flight. This is far more performance than you can
achieve with a low end enterprise SSD, for about the same cost. It's
fully transparent, and performance is known and guaranteed, unlike the
recent kernel based block IO caching hacks targeting SSDs as fast
read/write buffers.
You can use the onboard RAID firmware to create RAID1s or a RAID10, or
you can expose each disk individually and use md/RAID while still
benefiting from the write caching, though for only a handful of disks
you're better off using the firmware RAID. Another advantage is that
you can use parity RAID (controller firmware only) and avoid some of
the RMW penalty, as the read blocks will be in controller cache. I.e.
you can use three 7.2K disks, get the same capacity as a four disk
RAID10, with equal read performance and nearly the same write
performance.
Write heavy DB workloads are a poster child for hardware caching RAID
devices.
--
Stan
> 2013/9/18 Drew <drew.kay@gmail.com>:
>> On Wed, Sep 18, 2013 at 8:51 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
>>> Sorry guys, this time i don't have a full knowledge about the
>>> workload, but from what he told me, he want fast writes with hdd but i
>>> could check if small ssd devices could help
>>> After install linux with raid1 i will install apache mariadb and php
>>> at this machine, in other words it's a database and web server load,
>>> but i don't know what size of app and database will run yet
>>>
>>> Btw, ssd with bcache or dm cache could help hdd (this must be
>>> enterprise level) writes, right?
>>> Any idea what the best method to test what kernel drive could give
>>> superior performace? I'm thinking about install the bcache, and after
>>> make a backup and install dm cache and check what's better, any other
>>> idea?
>>
>> We still need to know what size datasets are going to be used. And
>> also given it's a webserver, how big of a pipe does he have?
>>
>> Given a typical webserver in a colo w/ 10Mbps pipe, I think the
>> suggested config is overkill. For a webserver the 7200 SATA's should
>> be able to deliver enough data to keep apache happy.
>>
>> In the database side, depends on how intensive the workload is. I see
>> a lot of webservers where the 7200's are just fine because the I/O
>> demands from the database are low. Blog/CMS systems like wordpress
>> will be harder on the database but again it depends on how heavy the
>> access is to the server. How many visitors/hour does he expect to
>> serve?
>>
>>
>> --
>> Drew
>
>
>
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-19 2:26 ` Stan Hoeppner
@ 2013-09-19 3:42 ` Roberto Spadim
2013-09-19 7:47 ` Stan Hoeppner
0 siblings, 1 reply; 28+ messages in thread
From: Roberto Spadim @ 2013-09-19 3:42 UTC (permalink / raw)
To: Stan Hoeppner; +Cc: Drew, Linux-RAID
Nice. In other words, it's better to spend the money on a hardware
RAID card, right? Any particular card I should look at?
2013/9/18 Stan Hoeppner <stan@hardwarefreak.com>:
> On 9/18/2013 12:33 PM, Roberto Spadim wrote:
>> Well the internet link here is 100mbps, i think the workload will be a
>> bit more than only 100 users, it's a second webserver+database server
>> He is trying to use a cheaper server with more disk performace, Brazil
>> costs are too high to allow a full ssd system or 15k rpm sas harddisks
>> For mariadb server i'm studing if the thread-pool scheduler will be
>> used instead of one thread per connection but "it's not my problem"
>> the final user will select what is better for database scheduler
>> In other words i think the work load will not be a simple web server
>> cms/blog, i don't know yet how it will work, it's a black/gray box to
>> me, today he have sata enterprise hdd 7200rpm at servers (dell server
>> r420 if i'm not wrong) and is studing if a ssd could help, that's my
>> 'job' (hobby) in this task
>
> Based on the information provided it sounds like the machine is seek
> bound. The simplest, and best, solution to this problem is simply
> installing a [B|F]BWC RAID card w/512KB cache. Synchronous writes are
> acked when committed to RAID cache instead of the platter. This will
> yield ~130,000 burst write TPS before hitting the spindles, or ~130,000
> writes in flight. This is far more performance than you can achieve
> with a low end enterprise SSD, for about the same cost. It's fully
> transparent and performance is known and guaranteed, unlike the recent
> kernel based block IO caching hacks targeting SSDs as fast read/write
> buffers.
>
> You can use the onboard RAID firmware to create RAID1s or a RAID10, or
> you can expose each disk individually and use md/RAID while still
> benefiting from the write caching, though for only a handful of disks
> you're better off using the firmware RAID. Another advantage is that
> you can use parity RAID (controller firmware only) and avoid some of the
> RMW penalty, as the read blocks will be in controller cache. I.e. you
> can use three 7.2K disks, get the same capacity as a four disk RAID10,
> with equal read performance and nearly the same write performance.
>
> Write heavy DB workloads are a post child for hardware caching RAID devices.
>
> --
> Stan
>
>
>
>
>> 2013/9/18 Drew <drew.kay@gmail.com>:
>>> On Wed, Sep 18, 2013 at 8:51 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
>>>> Sorry guys, this time i don't have a full knowledge about the
>>>> workload, but from what he told me, he want fast writes with hdd but i
>>>> could check if small ssd devices could help
>>>> After install linux with raid1 i will install apache mariadb and php
>>>> at this machine, in other words it's a database and web server load,
>>>> but i don't know what size of app and database will run yet
>>>>
>>>> Btw, ssd with bcache or dm cache could help hdd (this must be
>>>> enterprise level) writes, right?
>>>> Any idea what the best method to test what kernel drive could give
>>>> superior performace? I'm thinking about install the bcache, and after
>>>> make a backup and install dm cache and check what's better, any other
>>>> idea?
>>>
>>> We still need to know what size datasets are going to be used. And
>>> also given it's a webserver, how big of a pipe does he have?
>>>
>>> Given a typical webserver in a colo w/ 10Mbps pipe, I think the
>>> suggested config is overkill. For a webserver the 7200 SATA's should
>>> be able to deliver enough data to keep apache happy.
>>>
>>> In the database side, depends on how intensive the workload is. I see
>>> a lot of webservers where the 7200's are just fine because the I/O
>>> demands from the database are low. Blog/CMS systems like wordpress
>>> will be harder on the database but again it depends on how heavy the
>>> access is to the server. How many visitors/hour does he expect to
>>> serve?
>>>
>>>
>>> --
>>> Drew
>>
>>
>>
>
--
Roberto Spadim
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-19 3:42 ` Roberto Spadim
@ 2013-09-19 7:47 ` Stan Hoeppner
2013-09-19 15:30 ` Roberto Spadim
0 siblings, 1 reply; 28+ messages in thread
From: Stan Hoeppner @ 2013-09-19 7:47 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Drew, Linux-RAID
On 9/18/2013 10:42 PM, Roberto Spadim wrote:
> nice, in other words, is better spend money with hardware raid cards
> right?
If it's my money, yes, absolutely. RAID BBWC will run circles around an
SSD with a random write workload. The cycle time on DDR2 SDRAM is 10s
of nanoseconds. Write latency on flash cells is 50-100 microseconds.
Do the math.
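Doing that math with the figures above (taking 50ns for the DRAM ack and the midpoint of the flash range; both are the rough numbers from this post, so the ratio is an order-of-magnitude estimate):

```shell
dram_ns=50       # BBWC DRAM write ack: tens of nanoseconds
flash_ns=75000   # flash cell write: midpoint of 50-100 microseconds
echo "flash acks a write ~$((flash_ns / dram_ns))x slower than BBWC DRAM"
# prints: flash acks a write ~1500x slower than BBWC DRAM
```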
Random write apps such as transactional databases rarely, if ever,
saturate the BBWC faster than it can flush and free pages, so the
additional capacity of an SSD yields no benefit. Additionally, good
RAID firmware will take some of the randomness out of the write pattern
by flushing nearby LBA sectors in a single IO to the drives, increasing
the effectiveness of TCQ/NCQ, thereby reducing seeks. This in essence
increases the random IO throughput of the drives.
In summary, yes, a good caching RAID controller w/BBU will yield vastly
superior performance compared to SSD for most random write workloads,
simply due to instantaneous ACK to fsync and friends.
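One quick way to see the effect of that instantaneous ACK is to time synchronous writes with `dd` (a sketch; `/tmp/syncprobe` is a placeholder path — run it against a file on the array's mount point, not /tmp, for a meaningful number):

```shell
# Probe synchronous 4 KiB write latency on the target filesystem.
# oflag=dsync forces each write to be acknowledged as durable before
# the next one starts.  Behind a BBWC controller the ACK comes from
# cache DRAM; without one, each write must reach the platter.
dd if=/dev/zero of=/tmp/syncprobe bs=4k count=1000 oflag=dsync
rm -f /tmp/syncprobe
```

Divide the elapsed time `dd` reports by 1000 for the average per-write latency; the difference with the controller cache enabled vs. disabled is usually dramatic.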
> any special card that i should look?
If this R420 is the 4x3.5" model then the LSI 9260-4i is suitable. If
it's the 8x2.5" drive model then the LSI 9260-8i is suitable. Both have
512MB of cache DRAM. In both cases you'd use the LSI00161/ LSIiBBU07
BBU for lower cost instead of the flash option. These two models have
the lowest MSRP of the LSI RAID cards having both large cache and BBU
support.
In the 8x2.5" case you could also use the Dell PERC 710, which has built
in FBWC. Probably more expensive than the LSI branded cards. All of
Dell's RAID cards are rebranded LSI cards, or OEM produced by LSI for
Dell with Dell branded firmware. I.e. it's the same product, same
performance, just a different name on it.
Adaptec also has decent RAID cards. The bottom end doesn't support BBU
so steer clear of those, i.e. 6405e/6805e, etc.
Don't use Areca, HighPoint, Promise, etc. They're simply not in the
same league as the enterprise vendors above. If you have problems with
optimizing their cards, drivers, firmware, etc for a specific workload,
their support is simply non existent. You're on your own.
> 2013/9/18 Stan Hoeppner <stan@hardwarefreak.com>:
>> On 9/18/2013 12:33 PM, Roberto Spadim wrote:
>>> Well the internet link here is 100mbps, i think the workload will be a
>>> bit more than only 100 users, it's a second webserver+database server
>>> He is trying to use a cheaper server with more disk performace, Brazil
>>> costs are too high to allow a full ssd system or 15k rpm sas harddisks
>>> For mariadb server i'm studing if the thread-pool scheduler will be
>>> used instead of one thread per connection but "it's not my problem"
>>> the final user will select what is better for database scheduler
>>> In other words i think the work load will not be a simple web server
>>> cms/blog, i don't know yet how it will work, it's a black/gray box to
>>> me, today he have sata enterprise hdd 7200rpm at servers (dell server
>>> r420 if i'm not wrong) and is studing if a ssd could help, that's my
>>> 'job' (hobby) in this task
>>
>> Based on the information provided it sounds like the machine is seek
>> bound. The simplest, and best, solution to this problem is simply
>> installing a [B|F]BWC RAID card w/512KB cache. Synchronous writes are
>> acked when committed to RAID cache instead of the platter. This will
>> yield ~130,000 burst write TPS before hitting the spindles, or ~130,000
>> writes in flight. This is far more performance than you can achieve
>> with a low end enterprise SSD, for about the same cost. It's fully
>> transparent and performance is known and guaranteed, unlike the recent
>> kernel based block IO caching hacks targeting SSDs as fast read/write
>> buffers.
>>
>> You can use the onboard RAID firmware to create RAID1s or a RAID10, or
>> you can expose each disk individually and use md/RAID while still
>> benefiting from the write caching, though for only a handful of disks
>> you're better off using the firmware RAID. Another advantage is that
>> you can use parity RAID (controller firmware only) and avoid some of the
>> RMW penalty, as the read blocks will be in controller cache. I.e. you
>> can use three 7.2K disks, get the same capacity as a four disk RAID10,
>> with equal read performance and nearly the same write performance.
>>
>> Write heavy DB workloads are a post child for hardware caching RAID devices.
>>
>> --
>> Stan
>>
>>
>>
>>
>>> 2013/9/18 Drew <drew.kay@gmail.com>:
>>>> On Wed, Sep 18, 2013 at 8:51 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
>>>>> Sorry guys, this time i don't have a full knowledge about the
>>>>> workload, but from what he told me, he want fast writes with hdd but i
>>>>> could check if small ssd devices could help
>>>>> After install linux with raid1 i will install apache mariadb and php
>>>>> at this machine, in other words it's a database and web server load,
>>>>> but i don't know what size of app and database will run yet
>>>>>
>>>>> Btw, ssd with bcache or dm cache could help hdd (this must be
>>>>> enterprise level) writes, right?
>>>>> Any idea what the best method to test what kernel drive could give
>>>>> superior performace? I'm thinking about install the bcache, and after
>>>>> make a backup and install dm cache and check what's better, any other
>>>>> idea?
>>>>
>>>> We still need to know what size datasets are going to be used. And
>>>> also given it's a webserver, how big of a pipe does he have?
>>>>
>>>> Given a typical webserver in a colo w/ 10Mbps pipe, I think the
>>>> suggested config is overkill. For a webserver the 7200 SATA's should
>>>> be able to deliver enough data to keep apache happy.
>>>>
>>>> In the database side, depends on how intensive the workload is. I see
>>>> a lot of webservers where the 7200's are just fine because the I/O
>>>> demands from the database are low. Blog/CMS systems like wordpress
>>>> will be harder on the database but again it depends on how heavy the
>>>> access is to the server. How many visitors/hour does he expect to
>>>> serve?
>>>>
>>>>
>>>> --
>>>> Drew
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>>> the body of a message to majordomo@vger.kernel.org
>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>>
>>>
>>>
>>
>
>
>
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-19 7:47 ` Stan Hoeppner
@ 2013-09-19 15:30 ` Roberto Spadim
2013-09-19 15:49 ` Benjamin ESTRABAUD
2013-09-19 22:15 ` Stan Hoeppner
0 siblings, 2 replies; 28+ messages in thread
From: Roberto Spadim @ 2013-09-19 15:30 UTC (permalink / raw)
To: Stan Hoeppner; +Cc: Drew, Linux-RAID
Hi Stan!
Thanks a lot for sharing your experience.
I have some doubts about RAID boards: what do you look for when you
buy one? I have bought some RAID boards (mostly Dell PERCs, plus some
Adaptec and LSI) and I'm not sure which features really matter...
check if I'm wrong about the things I should look at before buying one:
1) SMART or another tool to run diagnostics and read the drives' diagnostics
2) Cache memory (if the card has 512MB, couldn't I just add 512MB or
more of RAM on the Linux side? Instead of cache on the RAID board, why
not add cache in the Linux kernel?)
3) Battery backup: how does this really work? Which RAID boards really
work well with it?
4) Support for new drivers (firmware updates)
5) Support for hot swap
6) If I use SSDs, what should I consider? I have one RAID card with an
SSD and I don't know whether it runs well or just does the job
7) Anything else? Cost =) ?
I will read up on the boards you mentioned and their features (I don't
know what BBU means yet, but I will check... any good literature on
RAID boards to read? Maybe Wikipedia?)
Thanks a lot!! :)
2013/9/19 Stan Hoeppner <stan@hardwarefreak.com>:
> On 9/18/2013 10:42 PM, Roberto Spadim wrote:
>> nice, in other words, is better spend money with hardware raid cards
>> right?
>
> If it's my money, yes, absolutely. RAID BBWC will run circles around an
> SSD with a random write workload. The cycle time on DDR2 SDRAM is 10s
> of nanoseconds. Write latency on flash cells is 50-100 microseconds.
> Do the math.
>
> Random write apps such as transactional databases rarely, if ever,
> saturate the BBWC faster than it can flush and free pages, so the
> additional capacity of an SSD yields no benefit. Additionally, good
> RAID firmware will take some of the randomness out of the write pattern
> by flushing nearby LBA sectors in a single IO to the drives, increasing
> the effectiveness of TCQ/NCQ, thereby reducing seeks. This in essence
> increases the random IO throughput of the drives.
>
> In summary, yes, a good caching RAID controller w/BBU will yield vastly
> superior performance compared to SSD for most random write workloads,
> simply due to instantaneous ACK to fsync and friends.
>
>> any special card that i should look?
>
> If this R420 is the 4x3.5" model then the LSI 9260-4i is suitable. If
> it's the 8x2.5" drive model then the LSI 9260-8i is suitable. Both have
> 512MB of cache DRAM. In both cases you'd use the LSI00161/ LSIiBBU07
> BBU for lower cost instead of the flash option. These two models have
> the lowest MSRP of the LSI RAID cards having both large cache and BBU
> support.
>
> In the 8x2.5" case you could also use the Dell PERC 710, which has built
> in FBWC. Probably more expensive than the LSI branded cards. All of
> Dell's RAID cards are rebranded LSI cards, or OEM produced by LSI for
> Dell with Dell branded firmware. I.e. it's the same product, same
> performance, just a different name on it.
>
> Adaptec also has decent RAID cards. The bottom end doesn't support BBU
> so steer clear of those, i.e. 6405e/6805e, etc.
>
> Don't use Areca, HighPoint, Promise, etc. They're simply not in the
> same league as the enterprise vendors above. If you have problems with
> optimizing their cards, drivers, firmware, etc for a specific workload,
> their support is simply non existent. You're on your own.
>
>> 2013/9/18 Stan Hoeppner <stan@hardwarefreak.com>:
>>> On 9/18/2013 12:33 PM, Roberto Spadim wrote:
>>>> Well the internet link here is 100mbps, i think the workload will be a
>>>> bit more than only 100 users, it's a second webserver+database server
>>>> He is trying to use a cheaper server with more disk performace, Brazil
>>>> costs are too high to allow a full ssd system or 15k rpm sas harddisks
>>>> For mariadb server i'm studing if the thread-pool scheduler will be
>>>> used instead of one thread per connection but "it's not my problem"
>>>> the final user will select what is better for database scheduler
>>>> In other words i think the work load will not be a simple web server
>>>> cms/blog, i don't know yet how it will work, it's a black/gray box to
>>>> me, today he have sata enterprise hdd 7200rpm at servers (dell server
>>>> r420 if i'm not wrong) and is studing if a ssd could help, that's my
>>>> 'job' (hobby) in this task
>>>
>>> Based on the information provided it sounds like the machine is seek
>>> bound. The simplest, and best, solution to this problem is simply
>>> installing a [B|F]BWC RAID card w/512KB cache. Synchronous writes are
>>> acked when committed to RAID cache instead of the platter. This will
>>> yield ~130,000 burst write TPS before hitting the spindles, or ~130,000
>>> writes in flight. This is far more performance than you can achieve
>>> with a low end enterprise SSD, for about the same cost. It's fully
>>> transparent and performance is known and guaranteed, unlike the recent
>>> kernel based block IO caching hacks targeting SSDs as fast read/write
>>> buffers.
>>>
>>> You can use the onboard RAID firmware to create RAID1s or a RAID10, or
>>> you can expose each disk individually and use md/RAID while still
>>> benefiting from the write caching, though for only a handful of disks
>>> you're better off using the firmware RAID. Another advantage is that
>>> you can use parity RAID (controller firmware only) and avoid some of the
>>> RMW penalty, as the read blocks will be in controller cache. I.e. you
>>> can use three 7.2K disks, get the same capacity as a four disk RAID10,
>>> with equal read performance and nearly the same write performance.
>>>
>>> Write heavy DB workloads are a post child for hardware caching RAID devices.
>>>
>>> --
>>> Stan
>>>
>>>
>>>
>>>
>>>> 2013/9/18 Drew <drew.kay@gmail.com>:
>>>>> On Wed, Sep 18, 2013 at 8:51 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
>>>>>> Sorry guys, this time i don't have a full knowledge about the
>>>>>> workload, but from what he told me, he want fast writes with hdd but i
>>>>>> could check if small ssd devices could help
>>>>>> After install linux with raid1 i will install apache mariadb and php
>>>>>> at this machine, in other words it's a database and web server load,
>>>>>> but i don't know what size of app and database will run yet
>>>>>>
>>>>>> Btw, ssd with bcache or dm cache could help hdd (this must be
>>>>>> enterprise level) writes, right?
>>>>>> Any idea what the best method to test what kernel drive could give
>>>>>> superior performace? I'm thinking about install the bcache, and after
>>>>>> make a backup and install dm cache and check what's better, any other
>>>>>> idea?
>>>>>
>>>>> We still need to know what size datasets are going to be used. And
>>>>> also given it's a webserver, how big of a pipe does he have?
>>>>>
>>>>> Given a typical webserver in a colo w/ 10Mbps pipe, I think the
>>>>> suggested config is overkill. For a webserver the 7200 SATA's should
>>>>> be able to deliver enough data to keep apache happy.
>>>>>
>>>>> In the database side, depends on how intensive the workload is. I see
>>>>> a lot of webservers where the 7200's are just fine because the I/O
>>>>> demands from the database are low. Blog/CMS systems like wordpress
>>>>> will be harder on the database but again it depends on how heavy the
>>>>> access is to the server. How many visitors/hour does he expect to
>>>>> serve?
>>>>>
>>>>>
>>>>> --
>>>>> Drew
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>>>> the body of a message to majordomo@vger.kernel.org
>>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>>>
>>>>
>>>>
>>>
>>
>>
>>
>
--
Roberto Spadim
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-19 15:30 ` Roberto Spadim
@ 2013-09-19 15:49 ` Benjamin ESTRABAUD
2013-09-19 16:23 ` Roberto Spadim
2013-09-19 22:15 ` Stan Hoeppner
1 sibling, 1 reply; 28+ messages in thread
From: Benjamin ESTRABAUD @ 2013-09-19 15:49 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Stan Hoeppner, Drew, Linux-RAID
On 19/09/13 16:30, Roberto Spadim wrote:
> Hi Stan!
Hi Roberto,
Just a few things:
> thanks a lot about your experience
> I have some doubts about raid boards, what you look when you buy one?
> i bought some raid boards (most perc from dell, others adaptec and
> others lsi) and don't know if it's really a good feature... check if
> i'm wrong about things i most look before buying one:
> 1) Smart or other tool to diagnostics and access drives diagnostics
> 2) Cache memory (if i have 512mb here, i could replace with 512mb or
> more at linux side? instead of cache at raid board, why not add cache
> to linux kernel?)
You could, but your Linux box's RAM is unlikely to be battery backed.
The advantage of these cards' memory is that in the event of a power
failure, the contents of the cache are preserved and flushed later on.
This is especially important because here we are talking about "fast"
writes: the RAID card tells the IO requester that the IO has been
flushed even though it hasn't been (at least not to the drive). If this
were a Linux box without battery backing, the client would assume some
data was written when it was actually lost, causing silent failure
(the worst kind).
> 3) batery backup, how this really work? what kind of raid board really
> work nice with this?
As Stan mentioned, some LSI controllers support this. You can look up
the details on the LSI website. Sometimes the BBU (battery unit) comes
separately from the card.
> 4) support for news drivers (firmware updates)
LSI is quite good when it comes to Linux drivers. They are maintained
directly in the mainline kernel, so there is no need to build
out-of-tree .ko files or anything like that.
> 5) support for hot swap
These cards usually handle hot swap very well, with less downtime when
pulling/inserting drives (md can sometimes take a few seconds to
remove/add a drive in an array).
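With md, the hot-swap sequence alluded to above looks roughly like this (an operational sketch, not commands to paste blindly; `/dev/md0` and `/dev/sdb1` are placeholder names and everything assumes root):

```shell
# Mark the failing member faulty and pull it out of the array...
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# ...physically swap the disk, then add the replacement:
mdadm --manage /dev/md0 --add /dev/sdb1
# Watch the resync/rebuild progress:
cat /proc/mdstat
```

A hardware RAID card does the equivalent in firmware, which is why a pulled drive is typically rebuilding again within seconds of insertion.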
> 6) if i use ssd what should i consider? i have one raid card with ssd
> and i don't know if it's runs nice or just do the job
> 7) anything else? costs =) ?
As Stan also mentioned, the cost is fairly reasonable, on the order
of a good-quality SSD.
>
> i will search about this boards you told, and about features (i don't
> know what bbu means yet, but will check... any good raid boards
> literarture to read? maybe wikipedia?)
BBU means "battery backup unit", as far as I know.
>
> thanks a lot!! :)
One note: This is not an LSI promotional response, I just know about LSI
cards more than their counterpart, so I gave insight on what I knew.
Regards,
Ben.
>
> 2013/9/19 Stan Hoeppner <stan@hardwarefreak.com>:
>> On 9/18/2013 10:42 PM, Roberto Spadim wrote:
>>> nice, in other words, is better spend money with hardware raid cards
>>> right?
>> If it's my money, yes, absolutely. RAID BBWC will run circles around an
>> SSD with a random write workload. The cycle time on DDR2 SDRAM is 10s
>> of nanoseconds. Write latency on flash cells is 50-100 microseconds.
>> Do the math.
>>
>> Random write apps such as transactional databases rarely, if ever,
>> saturate the BBWC faster than it can flush and free pages, so the
>> additional capacity of an SSD yields no benefit. Additionally, good
>> RAID firmware will take some of the randomness out of the write pattern
>> by flushing nearby LBA sectors in a single IO to the drives, increasing
>> the effectiveness of TCQ/NCQ, thereby reducing seeks. This in essence
>> increases the random IO throughput of the drives.
>>
>> In summary, yes, a good caching RAID controller w/BBU will yield vastly
>> superior performance compared to SSD for most random write workloads,
>> simply due to instantaneous ACK to fsync and friends.
>>
>>> any special card that i should look?
>> If this R420 is the 4x3.5" model then the LSI 9260-4i is suitable. If
>> it's the 8x2.5" drive model then the LSI 9260-8i is suitable. Both have
>> 512MB of cache DRAM. In both cases you'd use the LSI00161/ LSIiBBU07
>> BBU for lower cost instead of the flash option. These two models have
>> the lowest MSRP of the LSI RAID cards having both large cache and BBU
>> support.
>>
>> In the 8x2.5" case you could also use the Dell PERC 710, which has built
>> in FBWC. Probably more expensive than the LSI branded cards. All of
>> Dell's RAID cards are rebranded LSI cards, or OEM produced by LSI for
>> Dell with Dell branded firmware. I.e. it's the same product, same
>> performance, just a different name on it.
>>
>> Adaptec also has decent RAID cards. The bottom end doesn't support BBU
>> so steer clear of those, i.e. 6405e/6805e, etc.
>>
>> Don't use Areca, HighPoint, Promise, etc. They're simply not in the
>> same league as the enterprise vendors above. If you have problems with
>> optimizing their cards, drivers, firmware, etc for a specific workload,
>> their support is simply non existent. You're on your own.
>>
>>> 2013/9/18 Stan Hoeppner <stan@hardwarefreak.com>:
>>>> On 9/18/2013 12:33 PM, Roberto Spadim wrote:
>>>>> Well the internet link here is 100mbps, i think the workload will be a
>>>>> bit more than only 100 users, it's a second webserver+database server
>>>>> He is trying to use a cheaper server with more disk performace, Brazil
>>>>> costs are too high to allow a full ssd system or 15k rpm sas harddisks
>>>>> For mariadb server i'm studing if the thread-pool scheduler will be
>>>>> used instead of one thread per connection but "it's not my problem"
>>>>> the final user will select what is better for database scheduler
>>>>> In other words i think the work load will not be a simple web server
>>>>> cms/blog, i don't know yet how it will work, it's a black/gray box to
>>>>> me, today he have sata enterprise hdd 7200rpm at servers (dell server
>>>>> r420 if i'm not wrong) and is studing if a ssd could help, that's my
>>>>> 'job' (hobby) in this task
>>>> Based on the information provided it sounds like the machine is seek
>>>> bound. The simplest, and best, solution to this problem is simply
>>>> installing a [B|F]BWC RAID card w/512KB cache. Synchronous writes are
>>>> acked when committed to RAID cache instead of the platter. This will
>>>> yield ~130,000 burst write TPS before hitting the spindles, or ~130,000
>>>> writes in flight. This is far more performance than you can achieve
>>>> with a low end enterprise SSD, for about the same cost. It's fully
>>>> transparent and performance is known and guaranteed, unlike the recent
>>>> kernel based block IO caching hacks targeting SSDs as fast read/write
>>>> buffers.
>>>>
>>>> You can use the onboard RAID firmware to create RAID1s or a RAID10, or
>>>> you can expose each disk individually and use md/RAID while still
>>>> benefiting from the write caching, though for only a handful of disks
>>>> you're better off using the firmware RAID. Another advantage is that
>>>> you can use parity RAID (controller firmware only) and avoid some of the
>>>> RMW penalty, as the read blocks will be in controller cache. I.e. you
>>>> can use three 7.2K disks, get the same capacity as a four disk RAID10,
>>>> with equal read performance and nearly the same write performance.
>>>>
>>>> Write heavy DB workloads are a post child for hardware caching RAID devices.
>>>>
>>>> --
>>>> Stan
>>>>
>>>>
>>>>
>>>>
>>>>> 2013/9/18 Drew <drew.kay@gmail.com>:
>>>>>> On Wed, Sep 18, 2013 at 8:51 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
>>>>>>> Sorry guys, this time i don't have a full knowledge about the
>>>>>>> workload, but from what he told me, he want fast writes with hdd but i
>>>>>>> could check if small ssd devices could help
>>>>>>> After install linux with raid1 i will install apache mariadb and php
>>>>>>> at this machine, in other words it's a database and web server load,
>>>>>>> but i don't know what size of app and database will run yet
>>>>>>>
>>>>>>> Btw, ssd with bcache or dm cache could help hdd (this must be
>>>>>>> enterprise level) writes, right?
>>>>>>> Any idea what the best method to test what kernel drive could give
>>>>>>> superior performace? I'm thinking about install the bcache, and after
>>>>>>> make a backup and install dm cache and check what's better, any other
>>>>>>> idea?
>>>>>> We still need to know what size datasets are going to be used. And
>>>>>> also given it's a webserver, how big of a pipe does he have?
>>>>>>
>>>>>> Given a typical webserver in a colo w/ 10Mbps pipe, I think the
>>>>>> suggested config is overkill. For a webserver the 7200 SATA's should
>>>>>> be able to deliver enough data to keep apache happy.
>>>>>>
>>>>>> In the database side, depends on how intensive the workload is. I see
>>>>>> a lot of webservers where the 7200's are just fine because the I/O
>>>>>> demands from the database are low. Blog/CMS systems like wordpress
>>>>>> will be harder on the database but again it depends on how heavy the
>>>>>> access is to the server. How many visitors/hour does he expect to
>>>>>> serve?
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Drew
>>>>>> --
>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>>>>
>>>>>
>>>
>>>
>
>
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-19 15:49 ` Benjamin ESTRABAUD
@ 2013-09-19 16:23 ` Roberto Spadim
2013-09-19 16:31 ` Benjamin ESTRABAUD
0 siblings, 1 reply; 28+ messages in thread
From: Roberto Spadim @ 2013-09-19 16:23 UTC (permalink / raw)
To: Benjamin ESTRABAUD; +Cc: Stan Hoeppner, Drew, Linux-RAID
Hi Ben! Thanks a lot to you too. Here in Brazil I don't have many
options: Adaptec, LSI, and Dell are easy to find and buy; other brands
I have to search for and sometimes import. But that's no problem,
since I have some time to select and buy the equipment.
About the BBU: yes, "battery backup unit" is what I found too, now I
understand... the RAID board consumes far less power than the whole
Linux machine, so after a power loss it can hold pending writes much
longer (72h) than a UPS can keep the machine up (~1h or more,
depending on how much money you spend on the UPS and battery here).
Well, thanks for the information.
About diagnostics... what should I look for? SMART data and iostat are
things I use here; is there any problem with a RAID card in the
middle? I have two cases where I can't access SMART info, but other
RAID cards can. Do those cards support diagnostics?
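Whether SMART works behind a RAID card usually depends on whether the controller passes ATA commands through; smartctl supports several controller-specific passthrough types (a sketch; the device names and disk index are examples for a hypothetical setup, so adjust them to your hardware):

```shell
# Disk attached directly (or via a plain HBA):
smartctl -a /dev/sda
# First physical disk behind an LSI/MegaRAID controller
# (the block device is the controller's logical volume):
smartctl -a -d megaraid,0 /dev/sda
# Disk on port 0 of a 3ware controller:
smartctl -a -d 3ware,0 /dev/twa0
```

Cards with no passthrough support only expose drive health through the vendor's own CLI (e.g. MegaCli for LSI), which is likely why some of your cards show SMART data and others don't.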
2013/9/19 Benjamin ESTRABAUD <be@mpstor.com>:
> On 19/09/13 16:30, Roberto Spadim wrote:
>>
>> Hi Stan!
>
> Hi Roberto,
>
> Just a few things:
>
>> thanks a lot about your experience
>> I have some doubts about raid boards, what you look when you buy one?
>> i bought some raid boards (most perc from dell, others adaptec and
>> others lsi) and don't know if it's really a good feature... check if
>> i'm wrong about things i most look before buying one:
>> 1) Smart or other tool to diagnostics and access drives diagnostics
>> 2) Cache memory (if i have 512mb here, i could replace with 512mb or
>> more at linux side? instead of cache at raid board, why not add cache
>> to linux kernel?)
>
> You could, but your Linux box is unlikely battery backed. The advantage of
> these card's memory is that in the event of a power failure, the contents of
> memory is kept and flushed later on. This is especially important because
> here we are talking about "fast" writes: the RAID card tells the IO
> requester that the IO has been flushed even though it hasn't (at least not
> on the drive). If this was a Linux box and you lacked battery backing, the
> client would assume some data was written while it was actually lost,
> causing silent failure (the worst kind of it).
>
>> 3) batery backup, how this really work? what kind of raid board really
>> work nice with this?
>
> As Stan mentioned, some LSI controllers support this. You can lookup the LSI
> website to find out about it. Sometimes the BBU (battery unit) comes
> separately.
>
>> 4) support for news drivers (firmware updates)
>
> LSI is quite good when it comes to Linux drivers. They are updated directly
> in the official kernel so no need to update .ko files or anything like that.
>
>> 5) support for hot swap
>
> These cards usually support hot swap very well, with less downtime between
> pulling/pushing drives. (sometimes MD can take a few seconds to remove/add a
> drive in an array).
>
>> 6) if i use ssd what should i consider? i have one raid card with ssd
>> and i don't know if it's runs nice or just do the job
>> 7) anything else? costs =) ?
>
> The costs, as Stan mentioned also, are fairly reasonable, in the order of a
> good quality SSD drive.
>
>>
>> i will search about this boards you told, and about features (i don't
>> know what bbu means yet, but will check... any good raid boards
>> literarture to read? maybe wikipedia?)
>
>
> BBU means "battery backed unit" as far as I know.
>>
>>
>> thanks a lot!! :)
>
> One note: This is not an LSI promotional response, I just know about LSI
> cards more than their counterpart, so I gave insight on what I knew.
>
> Regards,
> Ben.
>
>>
>> 2013/9/19 Stan Hoeppner <stan@hardwarefreak.com>:
>>>
>>> On 9/18/2013 10:42 PM, Roberto Spadim wrote:
>>>>
>>>> nice, in other words, is better spend money with hardware raid cards
>>>> right?
>>>
>>> If it's my money, yes, absolutely. RAID BBWC will run circles around an
>>> SSD with a random write workload. The cycle time on DDR2 SDRAM is 10s
>>> of nanoseconds. Write latency on flash cells is 50-100 microseconds.
>>> Do the math.
>>>
>>> Random write apps such as transactional databases rarely, if ever,
>>> saturate the BBWC faster than it can flush and free pages, so the
>>> additional capacity of an SSD yields no benefit. Additionally, good
>>> RAID firmware will take some of the randomness out of the write pattern
>>> by flushing nearby LBA sectors in a single IO to the drives, increasing
>>> the effectiveness of TCQ/NCQ, thereby reducing seeks. This in essence
>>> increases the random IO throughput of the drives.
>>>
>>> In summary, yes, a good caching RAID controller w/BBU will yield vastly
>>> superior performance compared to SSD for most random write workloads,
>>> simply due to instantaneous ACK to fsync and friends.
>>>
>>>> any special card that i should look?
>>>
>>> If this R420 is the 4x3.5" model then the LSI 9260-4i is suitable. If
>>> it's the 8x2.5" drive model then the LSI 9260-8i is suitable. Both have
>>> 512MB of cache DRAM. In both cases you'd use the LSI00161/ LSIiBBU07
>>> BBU for lower cost instead of the flash option. These two models have
>>> the lowest MSRP of the LSI RAID cards having both large cache and BBU
>>> support.
>>>
>>> In the 8x2.5" case you could also use the Dell PERC 710, which has built
>>> in FBWC. Probably more expensive than the LSI branded cards. All of
>>> Dell's RAID cards are rebranded LSI cards, or OEM produced by LSI for
>>> Dell with Dell branded firmware. I.e. it's the same product, same
>>> performance, just a different name on it.
>>>
>>> Adaptec also has decent RAID cards. The bottom end doesn't support BBU
>>> so steer clear of those, i.e. 6405e/6805e, etc.
>>>
>>> Don't use Areca, HighPoint, Promise, etc. They're simply not in the
>>> same league as the enterprise vendors above. If you have problems with
>>> optimizing their cards, drivers, firmware, etc for a specific workload,
>>> their support is simply non existent. You're on your own.
>>>
>>>> 2013/9/18 Stan Hoeppner <stan@hardwarefreak.com>:
>>>>>
>>>>> On 9/18/2013 12:33 PM, Roberto Spadim wrote:
>>>>>>
>>>>>> Well the internet link here is 100mbps, i think the workload will be a
>>>>>> bit more than only 100 users, it's a second webserver+database server
>>>>>> He is trying to use a cheaper server with more disk performace, Brazil
>>>>>> costs are too high to allow a full ssd system or 15k rpm sas harddisks
>>>>>> For mariadb server i'm studing if the thread-pool scheduler will be
>>>>>> used instead of one thread per connection but "it's not my problem"
>>>>>> the final user will select what is better for database scheduler
>>>>>> In other words i think the work load will not be a simple web server
>>>>>> cms/blog, i don't know yet how it will work, it's a black/gray box to
>>>>>> me, today he have sata enterprise hdd 7200rpm at servers (dell server
>>>>>> r420 if i'm not wrong) and is studing if a ssd could help, that's my
>>>>>> 'job' (hobby) in this task
>>>>>
>>>>> Based on the information provided it sounds like the machine is seek
>>>>> bound. The simplest, and best, solution to this problem is simply
>>>>> installing a [B|F]BWC RAID card w/512KB cache. Synchronous writes are
>>>>> acked when committed to RAID cache instead of the platter. This will
>>>>> yield ~130,000 burst write TPS before hitting the spindles, or ~130,000
>>>>> writes in flight. This is far more performance than you can achieve
>>>>> with a low end enterprise SSD, for about the same cost. It's fully
>>>>> transparent and performance is known and guaranteed, unlike the recent
>>>>> kernel based block IO caching hacks targeting SSDs as fast read/write
>>>>> buffers.
>>>>>
>>>>> You can use the onboard RAID firmware to create RAID1s or a RAID10, or
>>>>> you can expose each disk individually and use md/RAID while still
>>>>> benefiting from the write caching, though for only a handful of disks
>>>>> you're better off using the firmware RAID. Another advantage is that
>>>>> you can use parity RAID (controller firmware only) and avoid some of
>>>>> the
>>>>> RMW penalty, as the read blocks will be in controller cache. I.e. you
>>>>> can use three 7.2K disks, get the same capacity as a four disk RAID10,
>>>>> with equal read performance and nearly the same write performance.
>>>>>
>>>>> Write heavy DB workloads are a post child for hardware caching RAID
>>>>> devices.
>>>>>
>>>>> --
>>>>> Stan
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> 2013/9/18 Drew <drew.kay@gmail.com>:
>>>>>>>
>>>>>>> On Wed, Sep 18, 2013 at 8:51 AM, Roberto Spadim
>>>>>>> <roberto@spadim.com.br> wrote:
>>>>>>>>
>>>>>>>> Sorry guys, this time i don't have a full knowledge about the
>>>>>>>> workload, but from what he told me, he want fast writes with hdd but
>>>>>>>> i
>>>>>>>> could check if small ssd devices could help
>>>>>>>> After install linux with raid1 i will install apache mariadb and php
>>>>>>>> at this machine, in other words it's a database and web server load,
>>>>>>>> but i don't know what size of app and database will run yet
>>>>>>>>
>>>>>>>> Btw, ssd with bcache or dm cache could help hdd (this must be
>>>>>>>> enterprise level) writes, right?
>>>>>>>> Any idea what the best method to test what kernel drive could give
>>>>>>>> superior performace? I'm thinking about install the bcache, and
>>>>>>>> after
>>>>>>>> make a backup and install dm cache and check what's better, any
>>>>>>>> other
>>>>>>>> idea?
>>>>>>>
>>>>>>> We still need to know what size datasets are going to be used. And
>>>>>>> also given it's a webserver, how big of a pipe does he have?
>>>>>>>
>>>>>>> Given a typical webserver in a colo w/ 10Mbps pipe, I think the
>>>>>>> suggested config is overkill. For a webserver the 7200 SATA's should
>>>>>>> be able to deliver enough data to keep apache happy.
>>>>>>>
>>>>>>> In the database side, depends on how intensive the workload is. I see
>>>>>>> a lot of webservers where the 7200's are just fine because the I/O
>>>>>>> demands from the database are low. Blog/CMS systems like wordpress
>>>>>>> will be harder on the database but again it depends on how heavy the
>>>>>>> access is to the server. How many visitors/hour does he expect to
>>>>>>> serve?
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Drew
>>>>>>> --
>>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid"
>>>>>>> in
>>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>>
>>
>>
>
--
Roberto Spadim
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-19 16:23 ` Roberto Spadim
@ 2013-09-19 16:31 ` Benjamin ESTRABAUD
[not found] ` <CAH3kUhE33h=7D6r7KO9VvQRN5qrZS+cadKUBQW8POFYvyGsS3w@mail.gmail.com>
0 siblings, 1 reply; 28+ messages in thread
From: Benjamin ESTRABAUD @ 2013-09-19 16:31 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Stan Hoeppner, Drew, Linux-RAID
On 19/09/13 17:23, Roberto Spadim wrote:
> Hi Ben! thanks a lot too, here in Brazil i don't have many options,
> adaptec, lsi, dell are easy to find and buy, others brands i need to
> search more and some times import, but no problem, since i have some
> time to select and buy the equips
>
> about bbu, yes what i found is the battery backed unit too, now i
> understand... the raid board consume less power than the linux machine
> and can wait more time (72h) to end writes with power loss than a ups
> (~1h or more depends on how much money you expend on ups and baterry
> here)
Yes, this is why, even when you have a UPS in the mix, you still need a
BBU for your RAID card: the UPS avoids downtime, the BBU prevents data loss.
> well thanks about informations
> about the diagnostics... what i should look? smart information and
> iostat are something that i use here, there's any problem with a raid
> card in the middle? i have two cases that i can't access smart info,
> but others raid cards can, does these cards support diagnostics?
I think most of these cards support ATA passthrough, which is required
to run SMART commands on the drives "behind" the RAID card. You need to
verify that your card supports this feature.
As for iostat, it parses /proc data, so you will only have information
about the devices Linux actually sees (and not the drives "handled" by
the RAID card). You'll probably have stats on the RAID "disk" (the
device "exported" by the RAID card) but not on the individual drives.
However, in the case of LSI, the MegaRAID tools may expose that
information in an easy enough way to parse (but again, this is to be
confirmed).
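For illustration, both points above can be sketched in a few commands. The
device name (/dev/sda) and slot numbers are assumptions, and whether the
`-d megaraid,N` passthrough type works depends on the particular card:

```shell
# Sketch: SMART and iostat visibility with a RAID card in the middle.
# /dev/sda and the slot numbers are placeholders for illustration.
set -eu

query_drives_behind_raid() {
    # Tunnel ATA/SMART commands through the controller to physical slot N.
    # Other controllers use different passthrough types, e.g. -d cciss,N.
    for slot in 0 1; do
        smartctl -a -d "megaraid,${slot}" /dev/sda
    done
    # iostat only sees the exported virtual disk, not the member drives.
    iostat -x /dev/sda
}

# Call query_drives_behind_raid on a machine that actually has the card;
# defining it here keeps the sketch safe to load anywhere.
echo "passthrough sketch loaded"
```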
Regards,
Ben.
> 2013/9/19 Benjamin ESTRABAUD <be@mpstor.com>:
>> On 19/09/13 16:30, Roberto Spadim wrote:
>>> Hi Stan!
>> Hi Roberto,
>>
>> Just a few things:
>>
>>> thanks a lot about your experience
>>> I have some doubts about raid boards, what you look when you buy one?
>>> i bought some raid boards (most perc from dell, others adaptec and
>>> others lsi) and don't know if it's really a good feature... check if
>>> i'm wrong about things i most look before buying one:
>>> 1) Smart or other tool to diagnostics and access drives diagnostics
>>> 2) Cache memory (if i have 512mb here, i could replace with 512mb or
>>> more at linux side? instead of cache at raid board, why not add cache
>>> to linux kernel?)
>> You could, but your Linux box is unlikely battery backed. The advantage of
>> these card's memory is that in the event of a power failure, the contents of
>> memory is kept and flushed later on. This is especially important because
>> here we are talking about "fast" writes: the RAID card tells the IO
>> requester that the IO has been flushed even though it hasn't (at least not
>> on the drive). If this was a Linux box and you lacked battery backing, the
>> client would assume some data was written while it was actually lost,
>> causing silent failure (the worst kind of it).
>>
>>> 3) batery backup, how this really work? what kind of raid board really
>>> work nice with this?
>> As Stan mentioned, some LSI controllers support this. You can lookup the LSI
>> website to find out about it. Sometimes the BBU (battery unit) comes
>> separately.
>>
>>> 4) support for news drivers (firmware updates)
>> LSI is quite good when it comes to Linux drivers. They are updated directly
>> in the official kernel so no need to update .ko files or anything like that.
>>
>>> 5) support for hot swap
>> These cards usually support hot swap very well, with less downtime between
>> pulling/pushing drives. (sometimes MD can take a few seconds to remove/add a
>> drive in an array).
>>
>>> 6) if i use ssd what should i consider? i have one raid card with ssd
>>> and i don't know if it's runs nice or just do the job
>>> 7) anything else? costs =) ?
>> The costs, as Stan mentioned also, are fairly reasonable, in the order of a
>> good quality SSD drive.
>>
>>> i will search about this boards you told, and about features (i don't
>>> know what bbu means yet, but will check... any good raid boards
>>> literarture to read? maybe wikipedia?)
>>
>> BBU means "battery backed unit" as far as I know.
>>>
>>> thanks a lot!! :)
>> One note: This is not an LSI promotional response, I just know about LSI
>> cards more than their counterpart, so I gave insight on what I knew.
>>
>> Regards,
>> Ben.
>>
>>> 2013/9/19 Stan Hoeppner <stan@hardwarefreak.com>:
>>>> On 9/18/2013 10:42 PM, Roberto Spadim wrote:
>>>>> nice, in other words, is better spend money with hardware raid cards
>>>>> right?
>>>> If it's my money, yes, absolutely. RAID BBWC will run circles around an
>>>> SSD with a random write workload. The cycle time on DDR2 SDRAM is 10s
>>>> of nanoseconds. Write latency on flash cells is 50-100 microseconds.
>>>> Do the math.
>>>>
>>>> Random write apps such as transactional databases rarely, if ever,
>>>> saturate the BBWC faster than it can flush and free pages, so the
>>>> additional capacity of an SSD yields no benefit. Additionally, good
>>>> RAID firmware will take some of the randomness out of the write pattern
>>>> by flushing nearby LBA sectors in a single IO to the drives, increasing
>>>> the effectiveness of TCQ/NCQ, thereby reducing seeks. This in essence
>>>> increases the random IO throughput of the drives.
>>>>
>>>> In summary, yes, a good caching RAID controller w/BBU will yield vastly
>>>> superior performance compared to SSD for most random write workloads,
>>>> simply due to instantaneous ACK to fsync and friends.
>>>>
>>>>> any special card that i should look?
>>>> If this R420 is the 4x3.5" model then the LSI 9260-4i is suitable. If
>>>> it's the 8x2.5" drive model then the LSI 9260-8i is suitable. Both have
>>>> 512MB of cache DRAM. In both cases you'd use the LSI00161/ LSIiBBU07
>>>> BBU for lower cost instead of the flash option. These two models have
>>>> the lowest MSRP of the LSI RAID cards having both large cache and BBU
>>>> support.
>>>>
>>>> In the 8x2.5" case you could also use the Dell PERC 710, which has built
>>>> in FBWC. Probably more expensive than the LSI branded cards. All of
>>>> Dell's RAID cards are rebranded LSI cards, or OEM produced by LSI for
>>>> Dell with Dell branded firmware. I.e. it's the same product, same
>>>> performance, just a different name on it.
>>>>
>>>> Adaptec also has decent RAID cards. The bottom end doesn't support BBU
>>>> so steer clear of those, i.e. 6405e/6805e, etc.
>>>>
>>>> Don't use Areca, HighPoint, Promise, etc. They're simply not in the
>>>> same league as the enterprise vendors above. If you have problems with
>>>> optimizing their cards, drivers, firmware, etc for a specific workload,
>>>> their support is simply non existent. You're on your own.
>>>>
>>>>> 2013/9/18 Stan Hoeppner <stan@hardwarefreak.com>:
>>>>>> On 9/18/2013 12:33 PM, Roberto Spadim wrote:
>>>>>>> Well the internet link here is 100mbps, i think the workload will be a
>>>>>>> bit more than only 100 users, it's a second webserver+database server
>>>>>>> He is trying to use a cheaper server with more disk performace, Brazil
>>>>>>> costs are too high to allow a full ssd system or 15k rpm sas harddisks
>>>>>>> For mariadb server i'm studing if the thread-pool scheduler will be
>>>>>>> used instead of one thread per connection but "it's not my problem"
>>>>>>> the final user will select what is better for database scheduler
>>>>>>> In other words i think the work load will not be a simple web server
>>>>>>> cms/blog, i don't know yet how it will work, it's a black/gray box to
>>>>>>> me, today he have sata enterprise hdd 7200rpm at servers (dell server
>>>>>>> r420 if i'm not wrong) and is studing if a ssd could help, that's my
>>>>>>> 'job' (hobby) in this task
>>>>>> Based on the information provided it sounds like the machine is seek
>>>>>> bound. The simplest, and best, solution to this problem is simply
>>>>>> installing a [B|F]BWC RAID card w/512MB cache. Synchronous writes are
>>>>>> acked when committed to RAID cache instead of the platter. This will
>>>>>> yield ~130,000 burst write TPS before hitting the spindles, or ~130,000
>>>>>> writes in flight. This is far more performance than you can achieve
>>>>>> with a low end enterprise SSD, for about the same cost. It's fully
>>>>>> transparent and performance is known and guaranteed, unlike the recent
>>>>>> kernel based block IO caching hacks targeting SSDs as fast read/write
>>>>>> buffers.
>>>>>>
>>>>>> You can use the onboard RAID firmware to create RAID1s or a RAID10, or
>>>>>> you can expose each disk individually and use md/RAID while still
>>>>>> benefiting from the write caching, though for only a handful of disks
>>>>>> you're better off using the firmware RAID. Another advantage is that
>>>>>> you can use parity RAID (controller firmware only) and avoid some of
>>>>>> the
>>>>>> RMW penalty, as the read blocks will be in controller cache. I.e. you
>>>>>> can use three 7.2K disks, get the same capacity as a four disk RAID10,
>>>>>> with equal read performance and nearly the same write performance.
>>>>>>
>>>>>> Write heavy DB workloads are a poster child for hardware caching RAID
>>>>>> devices.
>>>>>>
>>>>>> --
>>>>>> Stan
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> 2013/9/18 Drew <drew.kay@gmail.com>:
>>>>>>>> On Wed, Sep 18, 2013 at 8:51 AM, Roberto Spadim
>>>>>>>> <roberto@spadim.com.br> wrote:
>>>>>>>>> Sorry guys, this time i don't have a full knowledge about the
>>>>>>>>> workload, but from what he told me, he want fast writes with hdd but
>>>>>>>>> i
>>>>>>>>> could check if small ssd devices could help
>>>>>>>>> After install linux with raid1 i will install apache mariadb and php
>>>>>>>>> at this machine, in other words it's a database and web server load,
>>>>>>>>> but i don't know what size of app and database will run yet
>>>>>>>>>
>>>>>>>>> Btw, ssd with bcache or dm cache could help hdd (this must be
>>>>>>>>> enterprise level) writes, right?
>>>>>>>>> Any idea what the best method to test what kernel drive could give
>>>>>>>>> superior performace? I'm thinking about install the bcache, and
>>>>>>>>> after
>>>>>>>>> make a backup and install dm cache and check what's better, any
>>>>>>>>> other
>>>>>>>>> idea?
>>>>>>>> We still need to know what size datasets are going to be used. And
>>>>>>>> also given it's a webserver, how big of a pipe does he have?
>>>>>>>>
>>>>>>>> Given a typical webserver in a colo w/ 10Mbps pipe, I think the
>>>>>>>> suggested config is overkill. For a webserver the 7200 SATA's should
>>>>>>>> be able to deliver enough data to keep apache happy.
>>>>>>>>
>>>>>>>> In the database side, depends on how intensive the workload is. I see
>>>>>>>> a lot of webservers where the 7200's are just fine because the I/O
>>>>>>>> demands from the database are low. Blog/CMS systems like wordpress
>>>>>>>> will be harder on the database but again it depends on how heavy the
>>>>>>>> access is to the server. How many visitors/hour does he expect to
>>>>>>>> serve?
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Drew
>>>>>>>> --
>>>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid"
>>>>>>>> in
>>>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>>>>>>
>>>>>>>
>>>>>
>>>
>
>
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-19 15:30 ` Roberto Spadim
2013-09-19 15:49 ` Benjamin ESTRABAUD
@ 2013-09-19 22:15 ` Stan Hoeppner
2013-09-19 22:50 ` Roberto Spadim
1 sibling, 1 reply; 28+ messages in thread
From: Stan Hoeppner @ 2013-09-19 22:15 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Drew, Linux-RAID
On 9/19/2013 10:30 AM, Roberto Spadim wrote:
> 1) Smart or other tool to diagnostics and access drives diagnostics
See the '-d' option in smartctl(8).
> 2) Cache memory (if i have 512mb here, i could replace with 512mb or
> more at linux side? instead of cache at raid board, why not add cache
> to linux kernel?)
Because if the power goes out, or the kernel crashes, the contents of
system RAM are lost, ergo you lose data and possibly corrupt your files
and filesystem.
> 3) batery backup, how this really work? what kind of raid board really
> work nice with this?
A BBU, or battery backup unit, provides power to the DRAM and DRAM
controller on the RAID card. If the power goes out or kernel crashes
the data is intact. When power and operating state are restored, the
controller flushes the cache contents to the drives. Most BBUs will
hold for about 72 hours before the batteries run out.
A newer option is a flash backed cache. Here there is no battery unit,
and the data is truly non-volatile. In the event of power loss or some
crash situations, the controller copies the contents of the write cache
to onboard flash memory. When normal system state is restored, the
flash is dumped to DRAM, then flushed to the drives. This option is a
little more expensive, but is preferred for obvious reasons. There is
no 72 hour limit. The data resides in flash indefinitely. This can be
valuable in the case of natural disasters that take out utility power
and network links for days or weeks, but leave the facility and systems
unharmed. With flash backed write cache, you can wait it out, or
relocate, and the data will hit disk after you power up. With BBU, you
have only ~3 days to get back online.
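As a hedged sketch of how BBU and write-cache state are checked in practice
on an LSI controller (the binary name, MegaCli64 vs MegaCli, and adapter
numbering vary by distribution and system):

```shell
# Sketch: inspect BBU and write-back cache state on an LSI MegaRAID card.
# Binary name and adapter numbering are assumptions; adjust for your system.
set -eu

check_bbu() {
    # Reports battery charge, temperature, and learn-cycle status.
    MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL
    # Shows whether write-back caching is enabled on the virtual drives.
    MegaCli64 -LDGetProp -Cache -LALL -aALL
}

echo "bbu check sketch loaded"
```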
> 4) support for news drivers (firmware updates)
All of the quality RAID cards have pretty seamless firmware update
mechanisms. In the case of Linux the drivers are in mainline, so
updating your kernel updates the driver.
> 5) support for hot swap
RAID cards supported hot swap long before Linus wrote the first lines of
code that became Linux, and more than a decade before the md driver was
written. RAID cards typically handle hot swap better than md does.
> 6) if i use ssd what should i consider? i have one raid card with ssd
> and i don't know if it's runs nice or just do the job
I'm not sure what you're asking here.
> 7) anything else? costs =) ?
I can't speak accurately to costs. The last time we spoke of pricing,
off list, you stated a 500GB SATA drive costs ~$500 USD in your locale.
That's radically out of line with pricing here in the US.
I can only say for comparison that I can obtain an LSI 9260-4i 512MB
w/BBU for ~$470 USD. I can obtain an Intel DC S3700 200GB enterprise
SSD for $499. But this isn't an apt comparison as neither device is a
direct replacement for the other. It's the complete storage
architecture and its overall capabilities that matters. Using an SSD
with one of the late kernel caching hacks doesn't give you the
protection of BBU/flash cache on hardware RAID. Nor does it give you
the near zero latency fsync ACK of RAID cache.
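One hedged way to see that fsync latency difference empirically is a small
fio run (the path and sizes below are placeholders); run it once on the
plain md array and once behind the BBWC controller or SSD cache and compare
the reported sync latencies:

```shell
# Sketch: measure synchronous random-write (fsync) latency on a filesystem.
# /mnt/test is a placeholder mount point for illustration.
set -eu

fsync_latency_test() {
    fio --name=fsync-lat \
        --filename=/mnt/test/fio.dat \
        --rw=randwrite --bs=4k --size=256m \
        --fsync=1 \
        --runtime=30 --time_based
}

echo "fsync latency sketch loaded"
```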
> i will search about this boards you told, and about features (i don't
> know what bbu means yet, but will check... any good raid boards
> literarture to read? maybe wikipedia?)
So you've never used a hardware RAID controller? Completely new to you?
Wow... Start with these. Beware. This is a few hundred pages of
material.
http://www.lsi.com/downloads/Public/MegaRAID%20SAS/MegaRAID%20SAS%209260-4i/MR_SAS9260-4i_PB_FIN_071212.pdf
http://www.lsi.com/downloads/Public/MegaRAID%20SAS/41450-04_RevC_6Gbs_MegaRAID_SAS_UG.pdf
http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/51530-00_RevK_MegaRAID_SAS_SW_UG.pdf
> thanks a lot!! :)
>
> 2013/9/19 Stan Hoeppner <stan@hardwarefreak.com>:
>> On 9/18/2013 10:42 PM, Roberto Spadim wrote:
>>> nice, in other words, is better spend money with hardware raid cards
>>> right?
>>
>> If it's my money, yes, absolutely. RAID BBWC will run circles around an
>> SSD with a random write workload. The cycle time on DDR2 SDRAM is 10s
>> of nanoseconds. Write latency on flash cells is 50-100 microseconds.
>> Do the math.
>>
>> Random write apps such as transactional databases rarely, if ever,
>> saturate the BBWC faster than it can flush and free pages, so the
>> additional capacity of an SSD yields no benefit. Additionally, good
>> RAID firmware will take some of the randomness out of the write pattern
>> by flushing nearby LBA sectors in a single IO to the drives, increasing
>> the effectiveness of TCQ/NCQ, thereby reducing seeks. This in essence
>> increases the random IO throughput of the drives.
>>
>> In summary, yes, a good caching RAID controller w/BBU will yield vastly
>> superior performance compared to SSD for most random write workloads,
>> simply due to instantaneous ACK to fsync and friends.
>>
>>> any special card that i should look?
>>
>> If this R420 is the 4x3.5" model then the LSI 9260-4i is suitable. If
>> it's the 8x2.5" drive model then the LSI 9260-8i is suitable. Both have
>> 512MB of cache DRAM. In both cases you'd use the LSI00161/ LSIiBBU07
>> BBU for lower cost instead of the flash option. These two models have
>> the lowest MSRP of the LSI RAID cards having both large cache and BBU
>> support.
>>
>> In the 8x2.5" case you could also use the Dell PERC 710, which has built
>> in FBWC. Probably more expensive than the LSI branded cards. All of
>> Dell's RAID cards are rebranded LSI cards, or OEM produced by LSI for
>> Dell with Dell branded firmware. I.e. it's the same product, same
>> performance, just a different name on it.
>>
>> Adaptec also has decent RAID cards. The bottom end doesn't support BBU
>> so steer clear of those, i.e. 6405e/6805e, etc.
>>
>> Don't use Areca, HighPoint, Promise, etc. They're simply not in the
>> same league as the enterprise vendors above. If you have problems with
>> optimizing their cards, drivers, firmware, etc for a specific workload,
>> their support is simply non existent. You're on your own.
>>
>>> 2013/9/18 Stan Hoeppner <stan@hardwarefreak.com>:
>>>> On 9/18/2013 12:33 PM, Roberto Spadim wrote:
>>>>> Well the internet link here is 100mbps, i think the workload will be a
>>>>> bit more than only 100 users, it's a second webserver+database server
>>>>> He is trying to use a cheaper server with more disk performace, Brazil
>>>>> costs are too high to allow a full ssd system or 15k rpm sas harddisks
>>>>> For mariadb server i'm studing if the thread-pool scheduler will be
>>>>> used instead of one thread per connection but "it's not my problem"
>>>>> the final user will select what is better for database scheduler
>>>>> In other words i think the work load will not be a simple web server
>>>>> cms/blog, i don't know yet how it will work, it's a black/gray box to
>>>>> me, today he have sata enterprise hdd 7200rpm at servers (dell server
>>>>> r420 if i'm not wrong) and is studing if a ssd could help, that's my
>>>>> 'job' (hobby) in this task
>>>>
>>>> Based on the information provided it sounds like the machine is seek
>>>> bound. The simplest, and best, solution to this problem is simply
>>>> installing a [B|F]BWC RAID card w/512MB cache. Synchronous writes are
>>>> acked when committed to RAID cache instead of the platter. This will
>>>> yield ~130,000 burst write TPS before hitting the spindles, or ~130,000
>>>> writes in flight. This is far more performance than you can achieve
>>>> with a low end enterprise SSD, for about the same cost. It's fully
>>>> transparent and performance is known and guaranteed, unlike the recent
>>>> kernel based block IO caching hacks targeting SSDs as fast read/write
>>>> buffers.
>>>>
>>>> You can use the onboard RAID firmware to create RAID1s or a RAID10, or
>>>> you can expose each disk individually and use md/RAID while still
>>>> benefiting from the write caching, though for only a handful of disks
>>>> you're better off using the firmware RAID. Another advantage is that
>>>> you can use parity RAID (controller firmware only) and avoid some of the
>>>> RMW penalty, as the read blocks will be in controller cache. I.e. you
>>>> can use three 7.2K disks, get the same capacity as a four disk RAID10,
>>>> with equal read performance and nearly the same write performance.
>>>>
>>>> Write heavy DB workloads are a poster child for hardware caching RAID devices.
>>>>
>>>> --
>>>> Stan
>>>>
>>>>
>>>>
>>>>
>>>>> 2013/9/18 Drew <drew.kay@gmail.com>:
>>>>>> On Wed, Sep 18, 2013 at 8:51 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
>>>>>>> Sorry guys, this time i don't have a full knowledge about the
>>>>>>> workload, but from what he told me, he want fast writes with hdd but i
>>>>>>> could check if small ssd devices could help
>>>>>>> After install linux with raid1 i will install apache mariadb and php
>>>>>>> at this machine, in other words it's a database and web server load,
>>>>>>> but i don't know what size of app and database will run yet
>>>>>>>
>>>>>>> Btw, ssd with bcache or dm cache could help hdd (this must be
>>>>>>> enterprise level) writes, right?
>>>>>>> Any idea what the best method to test what kernel drive could give
>>>>>>> superior performace? I'm thinking about install the bcache, and after
>>>>>>> make a backup and install dm cache and check what's better, any other
>>>>>>> idea?
>>>>>>
>>>>>> We still need to know what size datasets are going to be used. And
>>>>>> also given it's a webserver, how big of a pipe does he have?
>>>>>>
>>>>>> Given a typical webserver in a colo w/ 10Mbps pipe, I think the
>>>>>> suggested config is overkill. For a webserver the 7200 SATA's should
>>>>>> be able to deliver enough data to keep apache happy.
>>>>>>
>>>>>> In the database side, depends on how intensive the workload is. I see
>>>>>> a lot of webservers where the 7200's are just fine because the I/O
>>>>>> demands from the database are low. Blog/CMS systems like wordpress
>>>>>> will be harder on the database but again it depends on how heavy the
>>>>>> access is to the server. How many visitors/hour does he expect to
>>>>>> serve?
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Drew
>>>>>> --
>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>
>
>
>
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-19 22:15 ` Stan Hoeppner
@ 2013-09-19 22:50 ` Roberto Spadim
0 siblings, 0 replies; 28+ messages in thread
From: Roberto Spadim @ 2013-09-19 22:50 UTC (permalink / raw)
To: Stan Hoeppner; +Cc: Drew, Linux-RAID
Hi Stan!!
2013/9/19 Stan Hoeppner <stan@hardwarefreak.com>:
> On 9/19/2013 10:30 AM, Roberto Spadim wrote:
>
>> 1) Smart or other tool to diagnostics and access drives diagnostics
>
> See the '-d' option in smartctl(8).
Nice, I tried this on a running machine; some cards don't work, others
work with SMART.
>> 2) Cache memory (if i have 512mb here, i could replace with 512mb or
>> more at linux side? instead of cache at raid board, why not add cache
>> to linux kernel?)
>
> Because if the power goes out, or the kernel crashes, the contents of
> system RAM are lost, ergo you lose data and possibly corrupt your files
> and filesystem.
OK, cache at the RAID card is better.
>> 3) batery backup, how this really work? what kind of raid board really
>> work nice with this?
>
> A BBU, or battery backup unit, provides power to the DRAM and DRAM
> controller on the RAID card. If the power goes out or kernel crashes
> the data is intact. When power and operating state are restored, the
> controller flushes the cache contents to the drives. Most BBUs will
> hold for about 72 hours before the batteries run out.
>
> A newer option is a flash backed cache. Here there is no battery unit,
In this case, is it something similar to an SSD on the RAID card? Doesn't
it have a write-cycle limit, or problems with the flash getting corrupted?
> and the data is truly non-volatile. In the event of power loss or some
> crash situations, the controller copies the contents of the write cache
> to onboard flash memory. When normal system state is restored, the
> flash is dumped to DRAM, then flushed to the drives. This option is a
> little more expensive, but is preferred for obvious reasons. There is
> no 72 hour limit. The data resides in flash indefinitely. This can be
> valuable in the case of natural disasters that take out utility power
> and network links for days or weeks, but leave the facility and systems
> unharmed. With flash backed write cache, you can wait it out, or
> relocate, and the data will hit disk after you power up. With BBU, you
> have only ~3 days to get back online.
>
>> 4) support for news drivers (firmware updates)
>
> All of the quality RAID cards have pretty seamless firmware update
> mechanisms. In the case of Linux the drivers are in mainline, so
> updating your kernel updates the driver.
Nice =)
>> 5) support for hot swap
>
> RAID cards supported hot swap long before Linus wrote the first lines of
> code that became Linux, and more than a decade before the md driver was
> written. RAID cards typically handle hot swap better than md does.
Yes, but some Dell servers (here) have a RAID card and don't allow hot
swap; I don't know if it's a problem with the drive bays or not.
>> 6) if i use ssd what should i consider? i have one raid card with ssd
>> and i don't know if it's runs nice or just do the job
>
> I'm not sure what you're asking here.
Well, just wondering if the RAID card could be used with SSDs...
I was thinking something like:
RAID card -> SSDs
motherboard -> HDDs
and a bcache or dm-cache of md-raid1(HDDs) with raidcard-raid1(SSDs)
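A minimal sketch of that layout with bcache follows. All device names are
assumptions for illustration: /dev/sda and /dev/sdb are the HDDs, and
/dev/sdc is the RAID1 volume the RAID card exports from the SSDs:

```shell
# Sketch: md RAID1 over HDDs as backing device, RAID-card SSD volume as cache.
# Device names are placeholders; this will destroy data on real devices.
set -eu

build_bcache_stack() {
    # RAID1 over the two HDDs, handled by md.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

    # Format the SSD volume as a bcache cache set, the md array as backing.
    make-bcache -C /dev/sdc
    make-bcache -B /dev/md0

    # Attach the backing device to the cache set by the cache-set UUID.
    cset_uuid=$(bcache-super-show /dev/sdc | awk '/cset.uuid/ {print $2}')
    echo "$cset_uuid" > /sys/block/bcache0/bcache/attach

    # Optional: writeback caching for fast writes (default is writethrough).
    echo writeback > /sys/block/bcache0/bcache/cache_mode
}

echo "bcache stack sketch loaded"
```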
>
>> 7) anything else? costs =) ?
>
> I can't speak accurately to costs. The last time we spoke of pricing,
> off list, you stated a 500GB SATA drive costs ~$500 USD in your locale.
> That's radically out of line with pricing here in the US.
Yes, here is my problem; this time I'm considering importing some parts.
> I can only say for comparison that I can obtain an LSI 9260-4i 512MB
> w/BBU for ~$470 USD. I can obtain an Intel DC S3700 200GB enterprise
> SSD for $499. But this isn't an apt comparison as neither device is a
:'( I will cry, haha; that's very cheap compared to my country's market.
> direct replacement for the other. It's the complete storage
> architecture and its overall capabilities that matters. Using an SSD
> with one of the late kernel caching hacks doesn't give you the
> protection of BBU/flash cache on hardware RAID. Nor does it give you
> the near zero latency fsync ACK of RAID cache.
Nice
>> i will search about this boards you told, and about features (i don't
>> know what bbu means yet, but will check... any good raid boards
>> literarture to read? maybe wikipedia?)
>
> So you've never used a hardware RAID controller? Completely new to you?
More or less; I don't know it in detail. I have only superficial
experience, not a technical view of RAID cards yet.
> Wow... Start with these. Beware. This is a few hundred pages of
> material.
>
> http://www.lsi.com/downloads/Public/MegaRAID%20SAS/MegaRAID%20SAS%209260-4i/MR_SAS9260-4i_PB_FIN_071212.pdf
> http://www.lsi.com/downloads/Public/MegaRAID%20SAS/41450-04_RevC_6Gbs_MegaRAID_SAS_UG.pdf
> http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/51530-00_RevK_MegaRAID_SAS_SW_UG.pdf
Wow! Very nice, I will read :)
Thanks a lot!
>
>
>
>> thanks a lot!! :)
>>
>> 2013/9/19 Stan Hoeppner <stan@hardwarefreak.com>:
>>> On 9/18/2013 10:42 PM, Roberto Spadim wrote:
>>>> nice, in other words, is better spend money with hardware raid cards
>>>> right?
>>>
>>> If it's my money, yes, absolutely. RAID BBWC will run circles around an
>>> SSD with a random write workload. The cycle time on DDR2 SDRAM is 10s
>>> of nanoseconds. Write latency on flash cells is 50-100 microseconds.
>>> Do the math.
>>>
>>> Random write apps such as transactional databases rarely, if ever,
>>> saturate the BBWC faster than it can flush and free pages, so the
>>> additional capacity of an SSD yields no benefit. Additionally, good
>>> RAID firmware will take some of the randomness out of the write pattern
>>> by flushing nearby LBA sectors in a single IO to the drives, increasing
>>> the effectiveness of TCQ/NCQ, thereby reducing seeks. This in essence
>>> increases the random IO throughput of the drives.
>>>
>>> In summary, yes, a good caching RAID controller w/BBU will yield vastly
>>> superior performance compared to SSD for most random write workloads,
>>> simply due to instantaneous ACK to fsync and friends.
>>>
>>>> any special card that i should look?
>>>
>>> If this R420 is the 4x3.5" model then the LSI 9260-4i is suitable. If
>>> it's the 8x2.5" drive model then the LSI 9260-8i is suitable. Both have
>>> 512MB of cache DRAM. In both cases you'd use the LSI00161/ LSIiBBU07
>>> BBU for lower cost instead of the flash option. These two models have
>>> the lowest MSRP of the LSI RAID cards having both large cache and BBU
>>> support.
>>>
>>> In the 8x2.5" case you could also use the Dell PERC 710, which has built
>>> in FBWC. Probably more expensive than the LSI branded cards. All of
>>> Dell's RAID cards are rebranded LSI cards, or OEM produced by LSI for
>>> Dell with Dell branded firmware. I.e. it's the same product, same
>>> performance, just a different name on it.
>>>
>>> Adaptec also has decent RAID cards. The bottom end doesn't support BBU
>>> so steer clear of those, i.e. 6405e/6805e, etc.
>>>
>>> Don't use Areca, HighPoint, Promise, etc. They're simply not in the
>>> same league as the enterprise vendors above. If you have problems with
>>> optimizing their cards, drivers, firmware, etc for a specific workload,
>>> their support is simply non existent. You're on your own.
>>>
>>>> 2013/9/18 Stan Hoeppner <stan@hardwarefreak.com>:
>>>>> On 9/18/2013 12:33 PM, Roberto Spadim wrote:
>>>>>> Well, the internet link here is 100 Mbps; I think the workload will be
>>>>>> a bit more than only 100 users, as it's a second webserver + database
>>>>>> server. He is trying to use a cheaper server with more disk performance;
>>>>>> Brazil costs are too high to allow a full-SSD system or 15k rpm SAS
>>>>>> hard disks. For the MariaDB server I'm studying whether the thread-pool
>>>>>> scheduler will be used instead of one thread per connection, but "it's
>>>>>> not my problem"; the final user will select what is better for the
>>>>>> database scheduler. In other words, I think the workload will not be a
>>>>>> simple web server CMS/blog. I don't know yet how it will work; it's a
>>>>>> black/gray box to me. Today he has enterprise 7200 rpm SATA HDDs in his
>>>>>> servers (Dell R420 if I'm not wrong) and is studying whether an SSD
>>>>>> could help; that's my 'job' (hobby) in this task.
>>>>>
>>>>> Based on the information provided it sounds like the machine is seek
>>>>> bound. The simplest, and best, solution to this problem is simply
>>>>> installing a [B|F]BWC RAID card w/512MB cache. Synchronous writes are
>>>>> acked when committed to RAID cache instead of the platter. This will
>>>>> yield ~130,000 burst write TPS before hitting the spindles, or ~130,000
>>>>> writes in flight. This is far more performance than you can achieve
>>>>> with a low end enterprise SSD, for about the same cost. It's fully
>>>>> transparent and performance is known and guaranteed, unlike the recent
>>>>> kernel based block IO caching hacks targeting SSDs as fast read/write
>>>>> buffers.
>>>>>
>>>>> You can use the onboard RAID firmware to create RAID1s or a RAID10, or
>>>>> you can expose each disk individually and use md/RAID while still
>>>>> benefiting from the write caching, though for only a handful of disks
>>>>> you're better off using the firmware RAID. Another advantage is that
>>>>> you can use parity RAID (controller firmware only) and avoid some of the
>>>>> RMW penalty, as the read blocks will be in controller cache. I.e. you
>>>>> can use three 7.2K disks, get the same capacity as a four disk RAID10,
>>>>> with equal read performance and nearly the same write performance.
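The capacity arithmetic behind that last claim, as a quick sketch (RAID-level formulas only; no specific hardware assumed):

```python
def usable_tb(n_disks: int, disk_tb: float, level: str) -> float:
    """Usable capacity, in TB, for the RAID levels discussed above."""
    if level == "raid1":
        return disk_tb                   # n-way mirror: one disk's capacity
    if level == "raid10":
        return n_disks * disk_tb / 2     # striped mirrors: half the raw space
    if level == "raid5":
        return (n_disks - 1) * disk_tb   # one disk's worth of parity
    raise ValueError(f"unknown level: {level}")

# Three 2 TB disks in RAID5 match four 2 TB disks in RAID10, as claimed.
assert usable_tb(3, 2, "raid5") == usable_tb(4, 2, "raid10") == 4
```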
>>>>>
>>>>> Write-heavy DB workloads are a poster child for hardware caching RAID devices.
>>>>>
>>>>> --
>>>>> Stan
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>> 2013/9/18 Drew <drew.kay@gmail.com>:
>>>>>>> On Wed, Sep 18, 2013 at 8:51 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
>>>>>>>> Sorry guys, this time I don't have full knowledge of the workload,
>>>>>>>> but from what he told me, he wants fast writes with HDDs, and I
>>>>>>>> could check whether small SSD devices would help.
>>>>>>>> After installing Linux with RAID1 I will install Apache, MariaDB,
>>>>>>>> and PHP on this machine; in other words it's a database and web
>>>>>>>> server load, but I don't know yet what size of app and database
>>>>>>>> will run.
>>>>>>>>
>>>>>>>> BTW, an SSD with bcache or dm-cache could help HDD writes (this
>>>>>>>> must be enterprise level), right?
>>>>>>>> Any idea of the best way to test which kernel driver gives superior
>>>>>>>> performance? I'm thinking about installing bcache, then making a
>>>>>>>> backup, installing dm-cache, and checking which is better; any
>>>>>>>> other idea?
>>>>>>>
>>>>>>> We still need to know what size datasets are going to be used. And
>>>>>>> also given it's a webserver, how big of a pipe does he have?
>>>>>>>
>>>>>>> Given a typical webserver in a colo w/ 10Mbps pipe, I think the
>>>>>>> suggested config is overkill. For a webserver the 7200 SATA's should
>>>>>>> be able to deliver enough data to keep apache happy.
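As a rough sanity check of that point (the disk's sequential rate below is an assumed ballpark figure, not a measurement):

```python
# A colo webserver behind a 10 Mbps pipe, per the paragraph above.
pipe_mb_s = 10 / 8            # 10 Mbps link -> 1.25 MB/s of payload, at most
disk_mb_s = 120               # assumed sequential rate of one 7200 rpm SATA disk
fraction = pipe_mb_s / disk_mb_s
print(f"a saturated pipe needs ~{fraction:.1%} of one disk's streaming rate")
```

This of course says nothing about random IOPS, which is the constraint the rest of the thread focuses on.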
>>>>>>>
>>>>>>> On the database side, it depends on how intensive the workload is. I see
>>>>>>> a lot of webservers where the 7200's are just fine because the I/O
>>>>>>> demands from the database are low. Blog/CMS systems like wordpress
>>>>>>> will be harder on the database but again it depends on how heavy the
>>>>>>> access is to the server. How many visitors/hour does he expect to
>>>>>>> serve?
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Drew
>>>>>>> --
>>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>
>>
>>
>
--
Roberto Spadim
* Re: Best configuration for bcache/md cache or other cache using ssd
[not found] ` <523B3185.3020309@mpstor.com>
@ 2013-09-19 23:22 ` Stan Hoeppner
2013-09-24 5:06 ` Roberto Spadim
0 siblings, 1 reply; 28+ messages in thread
From: Stan Hoeppner @ 2013-09-19 23:22 UTC (permalink / raw)
To: Benjamin ESTRABAUD; +Cc: Roberto Spadim, Drew, Linux-RAID
On 9/19/2013 12:16 PM, Benjamin ESTRABAUD wrote:
> MegaRAID is the LSI RAID controller's driver/management software name.
For the young pups here, MegaRAID was the brand name for the hardware
RAID products sold by American Megatrends Incorporated, AMI, in the
1990s. You just might recognize that name, since most PCs ship with AMI
BIOS. AMI sold their RAID division to LSI more than 10 years ago. They
also sold their motherboard division though I can't recall to whom.
LSI also purchased Mylex Corp around the same time frame. Together, AMI
and Mylex owned roughly 70% of the US RAID card market, a large percent of
the worldwide RAID card market, and had OEM contracts with all of the
major hardware vendors, including Bull, Data General, DEC, Dell,
Fujitsu/Siemens, HP, IBM, SGI, SUN, Unisys, etc. Sometime later they
also acquired 3Ware, pretty well sewing up the market.
Mylex didn't use any branding, simply model numbers, such as DAC960,
DAC1100, etc. So LSI retained the MegaRAID brand and has used it for
all RAID products to date, as it had wide recognition and is catchy as
far as branding goes.
--
Stan
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-19 23:22 ` Stan Hoeppner
@ 2013-09-24 5:06 ` Roberto Spadim
2013-09-24 6:11 ` Roberto Spadim
2013-09-24 8:29 ` Stan Hoeppner
0 siblings, 2 replies; 28+ messages in thread
From: Roberto Spadim @ 2013-09-24 5:06 UTC (permalink / raw)
Cc: Linux-RAID
Hi Stan, Benjamin, Drew, and other guys!
Well, I've read a lot about RAID cards now =) thanks Stan! I'm not an
expert, but at least I understood some of it...
I was thinking about the BBU on RAID cards...
It's something like bcache, with the 'SSD' part being the RAID card and
the HDD part being the RAID disks... unlike an SSD, the RAID card uses
RAM (probably) plus a battery.
Well, now my question is:
is there a RAM + BBU 'hard drive'?
I want to use it as the SSD part of bcache, and connect the HDDs to my
motherboard's ports.
In other words, I want the RAID card's benefits with a RAM hard drive.
Any ideas? Does anyone know anything about this?
Thanks, guys!
2013/9/19 Stan Hoeppner <stan@hardwarefreak.com>:
> On 9/19/2013 12:16 PM, Benjamin ESTRABAUD wrote:
>
>> MegaRAID is the LSI RAID controller's driver/management software name.
>
> For the young pups here, MegaRAID was the brand name for the hardware
> RAID products sold by American Megatrends Incorporated, AMI, in the
> 1990s. You just might recognize that name, since most PCs ship with AMI
> BIOS. AMI sold their RAID division to LSI more than 10 years ago. They
> also sold their motherboard division though I can't recall to whom.
>
> LSI also purchased Mylex Corp around the same time frame. Together, AMI
> and Mylex owned over ~70% of the US RAID card market, a large percent of
> the worldwide RAID card market, and had OEM contracts with all of the
> major hardware vendors, including Bull, Data General, DEC, Dell,
> Fujitsu/Siemens, HP, IBM, SGI, SUN, Unisys, etc. Sometime later they
> also acquired 3Ware, pretty well sewing up the market.
>
> Mylex didn't use any branding, simply model numbers, such as DAC960,
> DAC1100, etc. So LSI retained the MegaRAID brand and has used it for
> all RAID products to date, as it had wide recognition and is catchy as
> far as branding goes.
>
> --
> Stan
>
--
Roberto Spadim
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-24 5:06 ` Roberto Spadim
@ 2013-09-24 6:11 ` Roberto Spadim
2013-09-24 7:18 ` Tommy Apel
2013-09-24 8:29 ` Stan Hoeppner
1 sibling, 1 reply; 28+ messages in thread
From: Roberto Spadim @ 2013-09-24 6:11 UTC (permalink / raw)
Cc: Linux-RAID
Hi guys, I found this one ( http://en.wikipedia.org/wiki/I-RAM ), but
it's PCI (not PCI Express) and doesn't have high IOPS (PCI bottleneck).
Is there any PCI Express solution like this?
2013/9/24 Roberto Spadim <rspadim@gmail.com>:
> Hi Stan,Benjamin, Drew and others guys!
> Well i read many things about raid cards now =) thanks stan! i'm not a
> expert but at least somethings i understood...
>
> i was thinking about BBU at raid cards...
> it's something like bcache with the 'ssd' part as the raid card, and
> the hdd part as the raid disks.... different from ssd the raid card
> use ram memory (probably) + batery
>
> well now my question is...
> there's a memory + bbu 'hard drive'?
> i want to use it as the ssd part of the bcache, and use the hdd as my
> motherboard hdd connections
> in other words... i want the raid card goals with a memory hard drive
>
> any ideas? anyone know anything about it?
>
> thanks guys!
>
> 2013/9/19 Stan Hoeppner <stan@hardwarefreak.com>:
>> On 9/19/2013 12:16 PM, Benjamin ESTRABAUD wrote:
>>
>>> MegaRAID is the LSI RAID controller's driver/management software name.
>>
>> For the young pups here, MegaRAID was the brand name for the hardware
>> RAID products sold by American Megatrends Incorporated, AMI, in the
>> 1990s. You just might recognize that name, since most PCs ship with AMI
>> BIOS. AMI sold their RAID division to LSI more than 10 years ago. They
>> also sold their motherboard division though I can't recall to whom.
>>
>> LSI also purchased Mylex Corp around the same time frame. Together, AMI
>> and Mylex owned over ~70% of the US RAID card market, a large percent of
>> the worldwide RAID card market, and had OEM contracts with all of the
>> major hardware vendors, including Bull, Data General, DEC, Dell,
>> Fujitsu/Siemens, HP, IBM, SGI, SUN, Unisys, etc. Sometime later they
>> also acquired 3Ware, pretty well sewing up the market.
>>
>> Mylex didn't use any branding, simply model numbers, such as DAC960,
>> DAC1100, etc. So LSI retained the MegaRAID brand and has used it for
>> all RAID products to date, as it had wide recognition and is catchy as
>> far as branding goes.
>>
>> --
>> Stan
>>
>
>
>
> --
> Roberto Spadim
--
Roberto Spadim
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-24 6:11 ` Roberto Spadim
@ 2013-09-24 7:18 ` Tommy Apel
0 siblings, 0 replies; 28+ messages in thread
From: Tommy Apel @ 2013-09-24 7:18 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Linux-RAID
Hello Roberto,
the modern-day equivalents of the i-RAM are PCIe SSDs; you might want
to take a look at these options:
OCZ RevoDrive, Fusion-io ioDrive, LSI Nytro
/Tommy
2013/9/24 Roberto Spadim <rspadim@gmail.com>:
> hi guys, i found this one ( http://en.wikipedia.org/wiki/I-RAM ), but
> it's pci (not pci-express) and don't have a high iops (pci bootleneck)
> any pci-express solution like this?
>
> 2013/9/24 Roberto Spadim <rspadim@gmail.com>:
>> Hi Stan,Benjamin, Drew and others guys!
>> Well i read many things about raid cards now =) thanks stan! i'm not a
>> expert but at least somethings i understood...
>>
>> i was thinking about BBU at raid cards...
>> it's something like bcache with the 'ssd' part as the raid card, and
>> the hdd part as the raid disks.... different from ssd the raid card
>> use ram memory (probably) + batery
>>
>> well now my question is...
>> there's a memory + bbu 'hard drive'?
>> i want to use it as the ssd part of the bcache, and use the hdd as my
>> motherboard hdd connections
>> in other words... i want the raid card goals with a memory hard drive
>>
>> any ideas? anyone know anything about it?
>>
>> thanks guys!
>>
>> 2013/9/19 Stan Hoeppner <stan@hardwarefreak.com>:
>>> On 9/19/2013 12:16 PM, Benjamin ESTRABAUD wrote:
>>>
>>>> MegaRAID is the LSI RAID controller's driver/management software name.
>>>
>>> For the young pups here, MegaRAID was the brand name for the hardware
>>> RAID products sold by American Megatrends Incorporated, AMI, in the
>>> 1990s. You just might recognize that name, since most PCs ship with AMI
>>> BIOS. AMI sold their RAID division to LSI more than 10 years ago. They
>>> also sold their motherboard division though I can't recall to whom.
>>>
>>> LSI also purchased Mylex Corp around the same time frame. Together, AMI
>>> and Mylex owned over ~70% of the US RAID card market, a large percent of
>>> the worldwide RAID card market, and had OEM contracts with all of the
>>> major hardware vendors, including Bull, Data General, DEC, Dell,
>>> Fujitsu/Siemens, HP, IBM, SGI, SUN, Unisys, etc. Sometime later they
>>> also acquired 3Ware, pretty well sewing up the market.
>>>
>>> Mylex didn't use any branding, simply model numbers, such as DAC960,
>>> DAC1100, etc. So LSI retained the MegaRAID brand and has used it for
>>> all RAID products to date, as it had wide recognition and is catchy as
>>> far as branding goes.
>>>
>>> --
>>> Stan
>>>
>>
>>
>>
>> --
>> Roberto Spadim
>
>
>
> --
> Roberto Spadim
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-24 5:06 ` Roberto Spadim
2013-09-24 6:11 ` Roberto Spadim
@ 2013-09-24 8:29 ` Stan Hoeppner
2013-09-24 12:12 ` Bradley D. Thornton
1 sibling, 1 reply; 28+ messages in thread
From: Stan Hoeppner @ 2013-09-24 8:29 UTC (permalink / raw)
To: Roberto Spadim
On 9/24/2013 12:06 AM, Roberto Spadim wrote:
> Hi Stan,Benjamin, Drew and others guys!
> Well i read many things about raid cards now =) thanks stan! i'm not a
> expert but at least somethings i understood...
>
> i was thinking about BBU at raid cards...
> it's something like bcache with the 'ssd' part as the raid card, and
> the hdd part as the raid disks.... different from ssd the raid card
> use ram memory (probably) + batery
>
> well now my question is...
> there's a memory + bbu 'hard drive'?
> i want to use it as the ssd part of the bcache, and use the hdd as my
> motherboard hdd connections
> in other words... i want the raid card goals with a memory hard drive
>
> any ideas? anyone know anything about it?
You desire to drink champagne on a beer budget, I'm afraid. You first
stated this was for business use on a Dell R420 server, so I recommended
a business oriented solution to solve the IOPS problem you previously
described.
Now you say what we've been discussing is for your home PC. So I'm left
wondering what it is that you're really after.
If the latter case is closer to the truth of it, you should use bcache
with an SSD.
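For reference, a hypothetical sketch of such a setup with bcache-tools (bcache was merged in kernel 3.10); the device names — /dev/md0 as the backing RAID1, /dev/sdc as the SSD — and the cache-set UUID are placeholders, not values from this thread:

```shell
# Sketch only: requires root, bcache-tools, and a bcache-enabled kernel.
make-bcache -B /dev/md0            # register the md RAID1 as the backing device
make-bcache -C /dev/sdc            # format the SSD as a cache device

# Attach the cache set to the backing device
# (UUID from `bcache-super-show /dev/sdc`).
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Optional: writeback caching accelerates writes as well as reads.
echo writeback > /sys/block/bcache0/bcache/cache_mode

mkfs.ext4 /dev/bcache0             # then mount /dev/bcache0 as usual
```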
> thanks guys!
>
> 2013/9/19 Stan Hoeppner <stan@hardwarefreak.com>:
>> On 9/19/2013 12:16 PM, Benjamin ESTRABAUD wrote:
>>
>>> MegaRAID is the LSI RAID controller's driver/management software name.
>>
>> For the young pups here, MegaRAID was the brand name for the hardware
>> RAID products sold by American Megatrends Incorporated, AMI, in the
>> 1990s. You just might recognize that name, since most PCs ship with AMI
>> BIOS. AMI sold their RAID division to LSI more than 10 years ago. They
>> also sold their motherboard division though I can't recall to whom.
>>
>> LSI also purchased Mylex Corp around the same time frame. Together, AMI
>> and Mylex owned over ~70% of the US RAID card market, a large percent of
>> the worldwide RAID card market, and had OEM contracts with all of the
>> major hardware vendors, including Bull, Data General, DEC, Dell,
>> Fujitsu/Siemens, HP, IBM, SGI, SUN, Unisys, etc. Sometime later they
>> also acquired 3Ware, pretty well sewing up the market.
>>
>> Mylex didn't use any branding, simply model numbers, such as DAC960,
>> DAC1100, etc. So LSI retained the MegaRAID brand and has used it for
>> all RAID products to date, as it had wide recognition and is catchy as
>> far as branding goes.
>>
>> --
>> Stan
>>
>
>
>
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-24 8:29 ` Stan Hoeppner
@ 2013-09-24 12:12 ` Bradley D. Thornton
2013-09-25 2:32 ` Stan Hoeppner
0 siblings, 1 reply; 28+ messages in thread
From: Bradley D. Thornton @ 2013-09-24 12:12 UTC (permalink / raw)
To: linux-raid
-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160
On 09/24/2013 01:29 AM, Stan Hoeppner wrote:
> On 9/24/2013 12:06 AM, Roberto Spadim wrote:
>> Hi Stan,Benjamin, Drew and others guys!
>
> You desire to drink champagne on a beer budget I'm afraid. You first
Hi Stan,
Please don't BCC the list - it whacks people's mail filters. Either To:
or CC: is fine, but again, please don't BCC: the list, okay?
Kindest regards,
- --
Bradley D. Thornton
Manager Network Services
NorthTech Computer
TEL: +1.310.388.9469 (US)
TEL: +44.203.318.2755 (UK)
TEL: +41.43.508.05.10 (CH)
http://NorthTech.US
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Find this cert at x-hkp://pool.sks-keyservers.net
iQEcBAEBAwAGBQJSQYGmAAoJEE1wgkIhr9j3m4wH/21YOW7p6ogcogB44hoNHOD0
4uEGSNrIr4e4aI7EPTiG3nTL1MSumfyWeLC8K8XRuIPJNQ+USMxQ5uBPyR2SPU/v
DCNFbFue28gqHDAPvOM5xSeDgGQP8BRpfWvU0QBeUhti4aMig4/tzSQGET7qKM9Q
p/4FYKZuZh5nodwvoQzMk73CovmlceA/BBrkm1pPyQ1kYZHakNYywvvh4ohPK+2K
HgKXFmqRn870JPtODgjgFzgtWfIj47TwNE7i7x4elWrg9SvsHYXIzNgk69+8TOZX
E+q8c/rLP8d/D8JKuYMVz9HLSu8PBEEfsilK++0kfIjPfyd3TSr9k64Z0CwDux8=
=E8j/
-----END PGP SIGNATURE-----
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-24 12:12 ` Bradley D. Thornton
@ 2013-09-25 2:32 ` Stan Hoeppner
2013-09-25 2:44 ` Stan Hoeppner
2013-09-25 4:35 ` Roberto Spadim
0 siblings, 2 replies; 28+ messages in thread
From: Stan Hoeppner @ 2013-09-25 2:32 UTC (permalink / raw)
To: Bradley D. Thornton; +Cc: linux-raid
On 9/24/2013 7:12 AM, Bradley D. Thornton wrote:
> On 09/24/2013 01:29 AM, Stan Hoeppner wrote:
>> On 9/24/2013 12:06 AM, Roberto Spadim wrote:
>>> Hi Stan,Benjamin, Drew and others guys!
>>
>> You desire to drink champagne on a beer budget I'm afraid. You first
...
> Please don't BCC the list - it whacks people's mail filters. either To:
> or CC: is fine, but again, please don't BCC: the list okay?
Hi Bradley,
Thank you for scolding a mailop WRT the netiquette of BCC'ing a list. :)
"Reply to all" shouldn't cause this. So what is the root cause? Hmm...
let's see... Here we go. Take a look at the headers of Roberto's msg
to which I replied:
From: Roberto Spadim <rspadim@gmail.com>
Cc: Linux-RAID <linux-raid@vger.kernel.org>
Content-Type: text/plain; charset=ISO-8859-1
To: unlisted-recipients:; (no To-header on input)
This mangled header caused TBird's reply-all to do the following:
To: Roberto Spadim <rspadim@gmail.com>
CC: unlisted-recipients:;, Linux-RAID <linux-raid@vger.kernel.org>
I've never used the feature, but AIUI, this "unlisted-recipients"
directive in TBird turns all subsequent CCs into BCCs.
So, my apologies for concentrating on the content of my reply, and not
paying closer attention to the headers before hitting send. That said,
the root cause of this problem lay at the feet of Roberto, or his MUA,
for mangling the headers. You may want to take this up with him, though
I'm sure he'll see this msg.
--
Stan
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-25 2:32 ` Stan Hoeppner
@ 2013-09-25 2:44 ` Stan Hoeppner
2013-09-25 4:35 ` Roberto Spadim
1 sibling, 0 replies; 28+ messages in thread
From: Stan Hoeppner @ 2013-09-25 2:44 UTC (permalink / raw)
To: stan; +Cc: Bradley D. Thornton, linux-raid
On 9/24/2013 9:32 PM, Stan Hoeppner wrote:
> This mangled header caused TBird's reply-all to do the following:
>
> To: Roberto Spadim <rspadim@gmail.com>
> CC: unlisted-recipients:;, Linux-RAID <linux-raid@vger.kernel.org>
>
>
> I've never used the feature, but AIUI, this "unlisted-recipients"
> directive in TBird turns all subsequent CCs into BCCs.
Actually, upon further inspection, a BCC header isn't the issue, because
there's not one, just the mangled CC:
Illegal-Object: Syntax error in CC: address found on vger.kernel.org:
CC: unlisted-recipients:;Linux-RAID <linux-raid@vger.kernel.org>
^-missing end of address
vger simply replaced the broken CC header with a syntax error message.
Thus a reply-all to this message will simply go to the sender and the
non-broken CC list, i.e. to me and to Roberto, not the list.
Regardless, the root of the problem lay with Roberto or his MUA for the
original mangled header.
--
Stan
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-25 2:32 ` Stan Hoeppner
2013-09-25 2:44 ` Stan Hoeppner
@ 2013-09-25 4:35 ` Roberto Spadim
2013-09-25 5:53 ` Stan Hoeppner
1 sibling, 1 reply; 28+ messages in thread
From: Roberto Spadim @ 2013-09-25 4:35 UTC (permalink / raw)
To: Stan Hoeppner; +Cc: Bradley D. Thornton, Linux-RAID
Hi Bradley!
Sorry, I don't understand what a BCC is; I'm using Gmail with the reply
button, and Gmail only shows the To: and CC: fields.
What does "unlisted-recipients" mean?
Help is welcome to solve this problem.
2013/9/24 Stan Hoeppner <stan@hardwarefreak.com>:
> On 9/24/2013 7:12 AM, Bradley D. Thornton wrote:
>
>> On 09/24/2013 01:29 AM, Stan Hoeppner wrote:
>>> On 9/24/2013 12:06 AM, Roberto Spadim wrote:
>>>> Hi Stan,Benjamin, Drew and others guys!
>>>
>>> You desire to drink champagne on a beer budget I'm afraid. You first
> ...
>> Please don't BCC the list - it whacks people's mail filters. either To:
>> or CC: is fine, but again, please don't BCC: the list okay?
>
> Hi Bradley,
>
> Thank you for scolding a mailop WRT the netiquette of BCC'ing a list. :)
>
> "Reply to all" shouldn't cause this. So what is the root cause? Hmm...
> let's see... Here we go. Take a look at the headers of Roberto's msg
> to which I replied:
>
> From: Roberto Spadim <rspadim@gmail.com>
> Cc: Linux-RAID <linux-raid@vger.kernel.org>
> Content-Type: text/plain; charset=ISO-8859-1
> To: unlisted-recipients:; (no To-header on input)
>
>
> This mangled header caused TBird's reply-all to do the following:
>
> To: Roberto Spadim <rspadim@gmail.com>
> CC: unlisted-recipients:;, Linux-RAID <linux-raid@vger.kernel.org>
>
>
> I've never used the feature, but AIUI, this "unlisted-recipients"
> directive in TBird turns all subsequent CCs into BCCs.
>
>
> So, my apologies for concentrating on the content of my reply, and not
> paying closer attention to the headers before hitting send. That said,
> the root cause of this problem lay at the feet of Roberto, or his MUA,
> for mangling the headers. You may want to take this up with him, though
> I'm sure he'll see this msg.
>
> --
> Stan
>
--
Roberto Spadim
* Re: Best configuration for bcache/md cache or other cache using ssd
2013-09-25 4:35 ` Roberto Spadim
@ 2013-09-25 5:53 ` Stan Hoeppner
0 siblings, 0 replies; 28+ messages in thread
From: Stan Hoeppner @ 2013-09-25 5:53 UTC (permalink / raw)
To: Roberto Spadim; +Cc: Bradley D. Thornton, Linux-RAID
On 9/24/2013 11:35 PM, Roberto Spadim wrote:
> Hi Bradley!
> Sorry i don't understand what a BCC is, i'm using gmail with reply
> button, at gmail it show To: and CC: fields only
> what unlisted-recipients means?
> help is wellcome to solve this problem
Roberto,
Troubleshooting email client problems is way off topic for this list.
So this aspect of this thread should end now out of courtesy to the list
members and the archives.
I only replied to Bradley, and the list, because I was publicly accused
of doing something that only brain-dead newbies would do. Rightly or
wrongly, I took offense to that accusation, having some 25+ years of IT
experience and some 10+ years operating mail servers.
To disprove the accusation, I traced the source of the problem down to
the To: header in your email of 9/24/2013 12:06 CST. It is up
to you to figure out what went wrong, if you even care to. I say that
because this could have simply been a momentary glitch in Gmail and not
worth the hassle. If you do choose to pursue this, you should do so
independently of this list.
Regards,
--
Stan
> 2013/9/24 Stan Hoeppner <stan@hardwarefreak.com>:
>> On 9/24/2013 7:12 AM, Bradley D. Thornton wrote:
>>
>>> On 09/24/2013 01:29 AM, Stan Hoeppner wrote:
>>>> On 9/24/2013 12:06 AM, Roberto Spadim wrote:
>>>>> Hi Stan,Benjamin, Drew and others guys!
>>>>
>>>> You desire to drink champagne on a beer budget I'm afraid. You first
>> ...
>>> Please don't BCC the list - it whacks people's mail filters. either To:
>>> or CC: is fine, but again, please don't BCC: the list okay?
>>
>> Hi Bradley,
>>
>> Thank you for scolding a mailop WRT the netiquette of BCC'ing a list. :)
>>
>> "Reply to all" shouldn't cause this. So what is the root cause? Hmm...
>> let's see... Here we go. Take a look at the headers of Roberto's msg
>> to which I replied:
>>
>> From: Roberto Spadim <rspadim@gmail.com>
>> Cc: Linux-RAID <linux-raid@vger.kernel.org>
>> Content-Type: text/plain; charset=ISO-8859-1
>> To: unlisted-recipients:; (no To-header on input)
>>
>>
>> This mangled header caused TBird's reply-all to do the following:
>>
>> To: Roberto Spadim <rspadim@gmail.com>
>> CC: unlisted-recipients:;, Linux-RAID <linux-raid@vger.kernel.org>
>>
>>
>> I've never used the feature, but AIUI, this "unlisted-recipients"
>> directive in TBird turns all subsequent CCs into BCCs.
>>
>>
>> So, my apologies for concentrating on the content of my reply, and not
>> paying closer attention to the headers before hitting send. That said,
>> the root cause of this problem lay at the feet of Roberto, or his MUA,
>> for mangling the headers. You may want to take this up with him, though
>> I'm sure he'll see this msg.
end of thread
Thread overview: 28+ messages -- links below jump to the message on this page --
2013-09-17 19:20 Best configuration for bcache/md cache or other cache using ssd Roberto Spadim
2013-09-18 13:59 ` Drew
[not found] ` <CAH3kUhHin5PfjDCNFjD8eypNML=0YrkQp14DrCADc2StcODdaw@mail.gmail.com>
2013-09-18 15:39 ` Fwd: " Drew
2013-09-18 16:00 ` Mark Knecht
2013-09-18 15:51 ` Fwd: " Roberto Spadim
2013-09-18 16:07 ` Tommy Apel
[not found] ` <CAH3kUhEWUe=20ovmd5BT3kzmYn25YS3Np5R3jPiJDBEAhAOb_A@mail.gmail.com>
2013-09-18 16:27 ` Tommy Apel
2013-09-18 17:15 ` Drew
2013-09-18 17:33 ` Roberto Spadim
2013-09-19 2:26 ` Stan Hoeppner
2013-09-19 3:42 ` Roberto Spadim
2013-09-19 7:47 ` Stan Hoeppner
2013-09-19 15:30 ` Roberto Spadim
2013-09-19 15:49 ` Benjamin ESTRABAUD
2013-09-19 16:23 ` Roberto Spadim
2013-09-19 16:31 ` Benjamin ESTRABAUD
[not found] ` <CAH3kUhE33h=7D6r7KO9VvQRN5qrZS+cadKUBQW8POFYvyGsS3w@mail.gmail.com>
[not found] ` <523B3185.3020309@mpstor.com>
2013-09-19 23:22 ` Stan Hoeppner
2013-09-24 5:06 ` Roberto Spadim
2013-09-24 6:11 ` Roberto Spadim
2013-09-24 7:18 ` Tommy Apel
2013-09-24 8:29 ` Stan Hoeppner
2013-09-24 12:12 ` Bradley D. Thornton
2013-09-25 2:32 ` Stan Hoeppner
2013-09-25 2:44 ` Stan Hoeppner
2013-09-25 4:35 ` Roberto Spadim
2013-09-25 5:53 ` Stan Hoeppner
2013-09-19 22:15 ` Stan Hoeppner
2013-09-19 22:50 ` Roberto Spadim