public inbox for linux-xfs@vger.kernel.org
* hdd + ssd
@ 2015-10-22 18:23 krautus
  2015-10-22 18:27 ` Eric Sandeen
  2015-10-23 11:48 ` Emmanuel Florac
  0 siblings, 2 replies; 6+ messages in thread
From: krautus @ 2015-10-22 18:23 UTC (permalink / raw)
  To: xfs

Hello, I'm trying to understand why and how to add one or more SSDs (as a cache drive / for keeping the XFS log)
to a traditional spinning XFS storage volume.
I mean: which data will go to the SSD? Will inodes and dentries go to the SSD?
Will the _read_ performance increase?

In general, I'm looking to improve (via caching) the read performance of folders with a lot of small files (emails),
for email servers.

Feel free to let me rtfm :)
I'd gladly study documentation / articles / benchmarks, but my google-fu isn't in the best shape.

Thank you,
Mike
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: hdd + ssd
  2015-10-22 18:23 hdd + ssd krautus
@ 2015-10-22 18:27 ` Eric Sandeen
  2015-10-23 11:48 ` Emmanuel Florac
  1 sibling, 0 replies; 6+ messages in thread
From: Eric Sandeen @ 2015-10-22 18:27 UTC (permalink / raw)
  To: xfs

On 10/22/15 1:23 PM, krautus@kr916.org wrote:
> Hello, I'm trying to understand why and how to add one or more SSDs (as a cache drive / for keeping the XFS log)
> to a traditional spinning XFS storage volume.
> I mean: which data will go to the SSD? Will inodes and dentries go to the SSD?
> Will the _read_ performance increase?

You might want to look into something like dm-cache:
https://en.wikipedia.org/wiki/Dm-cache

(TBH, I've not used it before, so grains of salt apply to your use case.)

Putting the log on an SSD is not likely to be the solution you want;
it certainly won't speed up reads.
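For reference, moving the XFS log to a separate device is done at mkfs time and must then be repeated at every mount. A minimal sketch, with hypothetical device names; as noted above, an external log can help metadata-write-heavy workloads but does nothing for reads:

```shell
# Sketch only -- /dev/sda1 (HDD data partition) and /dev/sdb1 (small
# SSD partition for the log) are hypothetical device names.
mkfs.xfs -l logdev=/dev/sdb1,size=128m /dev/sda1

# The log device must be passed again on every mount:
mount -o logdev=/dev/sdb1 /dev/sda1 /srv/mail
```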

-Eric

> In general, I'm looking to improve (via caching) the read performance of folders with a lot of small files (emails),
> for email servers.
> 
> Feel free to let me rtfm :)
> I'd gladly study documentation / articles / benchmarks, but my google-fu isn't in the best shape.
> 
> Thank you,
> Mike


* Re: hdd + ssd
  2015-10-22 18:23 hdd + ssd krautus
  2015-10-22 18:27 ` Eric Sandeen
@ 2015-10-23 11:48 ` Emmanuel Florac
  2015-10-23 21:34   ` Stefan Ring
  1 sibling, 1 reply; 6+ messages in thread
From: Emmanuel Florac @ 2015-10-23 11:48 UTC (permalink / raw)
  To: krautus@kr916.org; +Cc: xfs

On Thu, 22 Oct 2015 20:23:24 +0200,
"krautus@kr916.org" <krautus@kr916.org> wrote:

> Hello, I'm trying to understand why and how to add one or more SSDs
> (as a cache drive / for keeping the XFS log) to a traditional spinning
> XFS storage volume. I mean: which data will go to the SSD? Will inodes
> and dentries go to the SSD? Will the _read_ performance increase?
> 
> In general, I'm looking to improve (via caching) the read performance
> of folders with a lot of small files (emails), for email servers.
> 
> Feel free to let me rtfm :)
> I'd gladly study documentation / articles / benchmarks, but my
> google-fu isn't in the best shape.

You've got several options: some integrated into the kernel (dm-cache
and bcache), some available as additional tools (flashcache and
EnhanceIO).

YMMV, but here's my take:

 * flashcache, being a Facebook internal development, is probably the
   most widely deployed one. It's clearly production-ready.

 * EnhanceIO works fine, but I haven't tested it thoroughly. It adds no
   signature to the drives, so it can be added to existing filesystems
   (flashcache and bcache need reformatting). However, that means bad
   things may happen if you're careless -- it's clearly targeted at
   always-on servers.

 * bcache works fine, but the latest fixes haven't been backported, so
   you should probably use it only with the latest (4.2, 4.3) kernels.
   It's not very mature yet, but it's *friggin' fast*.

 * dm-cache is the easiest to set up, via the lvmcache command (if your
   distro is recent enough, of course). Like, very very easy.
   Unfortunately it's apparently the slowest of the pack. It doesn't
   need reformatting IF your existing FS already lives in an LV.
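The lvmcache route from the last bullet can be sketched roughly like this, assuming the existing XFS filesystem already lives in an LV (vg0/data here; all volume group, LV, and device names are hypothetical):

```shell
# Sketch only -- vg0/data holds the existing XFS filesystem;
# /dev/nvme0n1 is the SSD to be used as cache.
pvcreate /dev/nvme0n1
vgextend vg0 /dev/nvme0n1

# Create a cache-data LV and a smaller cache-metadata LV on the SSD
lvcreate -n cache0     -L 100G vg0 /dev/nvme0n1
lvcreate -n cache0meta -L 1G   vg0 /dev/nvme0n1

# Combine them into a cache pool and attach it to the data LV;
# the filesystem stays in place, so no reformatting is needed
lvconvert --type cache-pool --poolmetadata vg0/cache0meta vg0/cache0
lvconvert --type cache --cachepool vg0/cache0 vg0/data
```

With a recent enough lvm2, removal is symmetric: `lvconvert --uncache vg0/data` should detach the cache after flushing dirty blocks.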

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: hdd + ssd
  2015-10-23 11:48 ` Emmanuel Florac
@ 2015-10-23 21:34   ` Stefan Ring
  2015-10-24  5:38     ` krautus
  2015-11-02 12:05     ` Emmanuel Florac
  0 siblings, 2 replies; 6+ messages in thread
From: Stefan Ring @ 2015-10-23 21:34 UTC (permalink / raw)
  To: Emmanuel Florac; +Cc: Linux fs XFS, krautus@kr916.org

On Fri, Oct 23, 2015 at 1:48 PM, Emmanuel Florac <eflorac@intellique.com> wrote:
> YMMV, but here's my take:
>
>  * flashcache, being a Facebook internal development, is probably the
>    most widely deployed one. It's clearly production-ready.
>
>  * EnhanceIO works fine, but I haven't tested it thoroughly. It adds no
>    signature to the drives, so it can be added to existing filesystems
>    (flashcache and bcache need reformatting). However, that means bad
>    things may happen if you're careless -- it's clearly targeted at
>    always-on servers.
>
>  * bcache works fine, but the latest fixes haven't been backported, so
>    you should probably use it only with the latest (4.2, 4.3) kernels.
>    It's not very mature yet, but it's *friggin' fast*.
>
>  * dm-cache is the easiest to set up, via the lvmcache command (if your
>    distro is recent enough, of course). Like, very very easy.
>    Unfortunately it's apparently the slowest of the pack. It doesn't
>    need reformatting IF your existing FS already lives in an LV.

Very good summary, thanks! Do you also happen to know if all of these
retain cache contents across reboots?


* Re: hdd + ssd
  2015-10-23 21:34   ` Stefan Ring
@ 2015-10-24  5:38     ` krautus
  2015-11-02 12:05     ` Emmanuel Florac
  1 sibling, 0 replies; 6+ messages in thread
From: krautus @ 2015-10-24  5:38 UTC (permalink / raw)
  Cc: Linux fs XFS

Thank you all for the suggestions!
I'll report back on my progress; it will take a while... :)

Bye, and have a nice weekend,
Mike

On Fri, 23 Oct 2015 23:34:00 +0200
Stefan Ring <stefanrin@gmail.com> wrote:

> On Fri, Oct 23, 2015 at 1:48 PM, Emmanuel Florac <eflorac@intellique.com> wrote:
> > YMMV, but here's my take:
> >
> >  * flashcache, being a Facebook internal development, is probably the
> >    most widely deployed one. It's clearly production-ready.
> >
> >  * EnhanceIO works fine, but I haven't tested it thoroughly. It adds no
> >    signature to the drives, so it can be added to existing filesystems
> >    (flashcache and bcache need reformatting). However, that means bad
> >    things may happen if you're careless -- it's clearly targeted at
> >    always-on servers.
> >
> >  * bcache works fine, but the latest fixes haven't been backported, so
> >    you should probably use it only with the latest (4.2, 4.3) kernels.
> >    It's not very mature yet, but it's *friggin' fast*.
> >
> >  * dm-cache is the easiest to set up, via the lvmcache command (if your
> >    distro is recent enough, of course). Like, very very easy.
> >    Unfortunately it's apparently the slowest of the pack. It doesn't
> >    need reformatting IF your existing FS already lives in an LV.
> 
> Very good summary, thanks! Do you also happen to know if all of these
> retain cache contents across reboots?

* Re: hdd + ssd
  2015-10-23 21:34   ` Stefan Ring
  2015-10-24  5:38     ` krautus
@ 2015-11-02 12:05     ` Emmanuel Florac
  1 sibling, 0 replies; 6+ messages in thread
From: Emmanuel Florac @ 2015-11-02 12:05 UTC (permalink / raw)
  To: Stefan Ring; +Cc: Linux fs XFS, krautus@kr916.org

On Fri, 23 Oct 2015 23:34:00 +0200,
Stefan Ring <stefanrin@gmail.com> wrote:

> Very good summary, thanks! Do you also happen to know if all of these
> retain cache contents across reboots?

Yes, they retain cache contents across reboots.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------



Thread overview: 6+ messages
-- links below jump to the message on this page --
2015-10-22 18:23 hdd + ssd krautus
2015-10-22 18:27 ` Eric Sandeen
2015-10-23 11:48 ` Emmanuel Florac
2015-10-23 21:34   ` Stefan Ring
2015-10-24  5:38     ` krautus
2015-11-02 12:05     ` Emmanuel Florac
