* Caching raid with SSD.
From: Ram Ramesh @ 2016-03-05 21:06 UTC
To: Linux Raid
Does anyone here actually use SSD caches for RAID arrays? Can you share your
experience and let me know which cache methods you've tried or used, and why
you think one is better or worse than another? If possible, please provide
the RAID type/size and SSD size used.
Thanks and Regards
Ramesh
* Re: Caching raid with SSD.
From: John Stoffel @ 2016-03-07 1:58 UTC
To: Ram Ramesh; +Cc: Linux Raid
Ram> Does anyone here actually use SSD caches for RAID arrays? Can you
Ram> share your experience and let me know which cache methods you've
Ram> tried or used, and why you think one is better or worse than
Ram> another? If possible, please provide the RAID type/size and SSD
Ram> size used.
I'm using a pair of 4TB drives mirrored, and a pair of 512GB SSDs,
also mirrored, along with lvmcache to set up my caching across a couple
of volumes.
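Roughly, the setup looks like this (a sketch only; the VG, LV, and device
names below are placeholders, not my actual ones):

  # cache data and metadata LVs carved out of the SSD mirror (PV /dev/md1)
  lvcreate -L 100G -n homecache     vg0 /dev/md1
  lvcreate -L 1G   -n homecachemeta vg0 /dev/md1

  # combine them into a cache pool, then attach it to the origin LV
  lvconvert --type cache-pool --poolmetadata vg0/homecachemeta vg0/homecache
  lvconvert --type cache --cachepool vg0/homecache vg0/home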
I honestly haven't seen huge improvements, but I also haven't had the
time to do any serious testing either, which I should do. I've been
sorta thinking that the Phoronix Test Suite would be the way to go.
My SSDs and 4TB drives are all on an LSI 8-port SATA controller, PCIe
x4 I think. It's an MPT SAS-2 controller. I set it up this way so that my
boot drives are partitions on the SSDs, and then two more mirrored
partitions are used for the cache.
And this is an NFS server for my home directories, etc.
I didn't use bcache because you can't remove a cache device without
rebooting, or at least bringing the device offline and back online,
which doesn't fit my desire to dynamically add/remove caches,
especially for the testing I've never bothered to do.
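With lvmcache, detaching the cache is a single online operation, e.g.
(same placeholder names as above):

  # flush dirty blocks and detach the cache pool from the origin LV
  lvconvert --uncache vg0/home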
John
* Re: Caching raid with SSD.
From: Ram Ramesh @ 2016-03-07 4:45 UTC
To: John Stoffel; +Cc: Linux Raid
On 03/06/2016 07:58 PM, John Stoffel wrote:
> Ram> Does anyone here actually use SSD caches for RAID arrays? Can you
> Ram> share your experience and let me know which cache methods you've
> Ram> tried or used, and why you think one is better or worse than
> Ram> another? If possible, please provide the RAID type/size and SSD
> Ram> size used.
>
> I'm using a pair of 4TB drives mirrored, and a pair of 512GB SSDs,
> also mirrored, along with lvmcache to set up my caching across a couple
> of volumes.
>
> I honestly haven't seen huge improvements, but I also haven't had the
> time to do any serious testing either, which I should do. I've been
> sorta thinking that the Phoronix Test Suite would be the way to go.
>
> My SSDs and 4TB drives are all on an LSI 8-port SATA controller, PCIe
> x4 I think. It's an MPT SAS-2 controller. I set it up this way so that my
> boot drives are partitions on the SSDs, and then two more mirrored
> partitions are used for the cache.
>
> And this is an NFS server for my home directories, etc.
>
> I didn't use bcache because you can't remove a cache device without
> rebooting, or at least bringing the device offline and back online,
> which doesn't fit my desire to dynamically add/remove caches,
> especially for the testing I've never bothered to do.
>
> John
I do not have LVM; I already have a live (regular) file system. While
I can accept downtime, I cannot accept reformatting drives/disks. I simply
do not have the extra space to copy data back and forth. That is why I
thought of dm-cache. I ran a fio experiment on my SSD (an old Crucial M4)
and I am getting about 6K random IOPS, whereas my RAID array gives me
about 1.5K. I really do not see much point unless my new SSD puts out
decent numbers on its own.
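A random-read run of that sort would look something like this (a sketch;
the device name is a placeholder for whichever SSD is being tested):

  # 4K random reads, direct I/O, 60 seconds against the raw device
  fio --name=randread --filename=/dev/sdX --direct=1 --rw=randread \
      --bs=4k --ioengine=libaio --iodepth=32 --runtime=60 \
      --time_based --group_reporting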
Thanks for sharing the details of your setup.
Ramesh
* Re: Caching raid with SSD.
From: John Stoffel @ 2016-03-07 15:17 UTC
To: Ram Ramesh; +Cc: John Stoffel, Linux Raid
>>>>> "Ram" == Ram Ramesh <rramesh2400@gmail.com> writes:
Ram> On 03/06/2016 07:58 PM, John Stoffel wrote:
Ram> Does anyone here actually use SSD caches for RAID arrays? Can you
Ram> share your experience and let me know which cache methods you've
Ram> tried or used, and why you think one is better or worse than
Ram> another? If possible, please provide the RAID type/size and SSD
Ram> size used.
>>
>> I'm using a pair of 4TB drives mirrored, and a pair of 512GB SSDs,
>> also mirrored, along with lvmcache to set up my caching across a couple
>> of volumes.
>>
>> I honestly haven't seen huge improvements, but I also haven't had the
>> time to do any serious testing either, which I should do. I've been
>> sorta thinking that the Phoronix Test Suite would be the way to go.
>>
>> My SSDs and 4TB drives are all on an LSI 8-port SATA controller, PCIe
>> x4 I think. It's an MPT SAS-2 controller. I set it up this way so that my
>> boot drives are partitions on the SSDs, and then two more mirrored
>> partitions are used for the cache.
>>
>> And this is an NFS server for my home directories, etc.
>>
>> I didn't use bcache because you can't remove a cache device without
>> rebooting, or at least bringing the device offline and back online,
>> which doesn't fit my desire to dynamically add/remove caches,
>> especially for the testing I've never bothered to do.
>>
>> John
Ram> I do not have LVM; I already have a live (regular) file
Ram> system. While I can accept downtime, I cannot accept reformatting
Ram> drives/disks. I simply do not have the extra space to copy data
Ram> back and forth. That is why I thought of dm-cache. I ran a fio
Ram> experiment on my SSD (an old Crucial M4) and I am getting about 6K
Ram> random IOPS, whereas my RAID array gives me about 1.5K. I really
Ram> do not see much point unless my new SSD puts out decent numbers on
Ram> its own.
I'm not sure fio is the right test here, but it all depends on what
you do, which is the curse of performance testing!
I kinda like kernel compiles, and doing lots of image viewing
(building thumbnails, etc.) to see how things speed up in my
common use case.
Ram> Thanks for sharing the details of your setup.
Good luck with your testing; I'd love to see your results once you've
decided which way to go.