public inbox for linux-xfs@vger.kernel.org
From: Emmanuel Florac <eflorac@intellique.com>
To: "krautus@kr916.org" <krautus@kr916.org>
Cc: xfs@oss.sgi.com
Subject: Re: hdd + ssd
Date: Fri, 23 Oct 2015 13:48:05 +0200	[thread overview]
Message-ID: <20151023134805.3c7998e4@harpe.intellique.com> (raw)
In-Reply-To: <20151022202324.5f00807f@linux>

On Thu, 22 Oct 2015 20:23:24 +0200,
"krautus@kr916.org" <krautus@kr916.org> wrote:

> Hello I'm trying to understand why and how to add one or more SSD (as
> a cache drive / keeping xfs log) to a traditional spinning xfs
> storage volume. I mean: which data will go to the ssd ? Inodes and
> dentries will go to the ssd ? Will the _read_ performance increase ?
> 
> In general I'm looking to increase (cache) the reading performance of
> folders with a lot of small files (emails), for email servers.
> 
> Feel free to let me rtfm :)
> I'd gladly study the documentation / articles / benchmarks but my
> google-fu isn't in best shape.

You've got several options: some integrated into the kernel (dm-cache
and bcache), others available as additional out-of-tree tools
(flashcache and EnhanceIO).

YMMV, but here's my take:

 * flashcache, being an internal Facebook development, is probably the
   most widely deployed of them. It's clearly production-ready.

 * EnhanceIO works fine, but I haven't tested it thoroughly. It adds no
   signature to the drives, so it can be added to existing filesystems
   (flashcache and bcache need reformatting). However, that means bad
   things may happen if you're careless -- it's clearly targeted at
   always-on servers.

 * bcache works fine, but the latest fixes haven't been backported, so
   you should probably use it only with the latest (4.2, 4.3) kernels.
   It's not very mature yet, but it's *friggin' fast*.

 * dm-cache is the easiest to set up, via the lvmcache command (if your
   distro is recent enough, of course). Like very, very easy.
   Unfortunately it's apparently the slowest of the pack. It doesn't
   need reformatting IF your existing FS already lives in an LV.
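For illustration, the lvmcache setup for that last case is roughly the
following sketch -- device names, VG/LV names and the cache size here
are made up, adapt them to your own layout:

```shell
# Add the SSD to the volume group that already holds the data LV
# (/dev/sdc and vg0/data are hypothetical names)
pvcreate /dev/sdc
vgextend vg0 /dev/sdc

# Carve a cache pool out of the SSD; LVM creates the data and
# metadata sub-LVs for you
lvcreate --type cache-pool -L 100G -n fastpool vg0 /dev/sdc

# Attach the pool to the existing LV carrying the XFS filesystem --
# no reformatting needed, the FS keeps running throughout
lvconvert --type cache --cachepool vg0/fastpool vg0/data
```

Detaching is just as painless on recent LVM versions: lvconvert
--splitcache writes back dirty blocks and leaves the origin LV (and
the filesystem on it) untouched.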

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |   <eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 6+ messages
2015-10-22 18:23 hdd + ssd krautus
2015-10-22 18:27 ` Eric Sandeen
2015-10-23 11:48 ` Emmanuel Florac [this message]
2015-10-23 21:34   ` Stefan Ring
2015-10-24  5:38     ` krautus
2015-11-02 12:05     ` Emmanuel Florac
