From: Paddy Steed <jarktasaa@gmail.com>
To: linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: SSD optimizations
Date: Mon, 13 Dec 2010 17:17:51 +0000
Message-ID: <1292260671.11248.609.camel@paddy-desktop>
In-Reply-To: <4D05630E.7070809@bobich.net>
Thank you for all your replies.
On Mon, 2010-12-13 at 00:04 +0000, Gordan Bobic wrote:
> On 12/12/2010 17:24, Paddy Steed wrote:
> > In a few weeks parts for my new computer will be arriving. The storage
> > will be a 128GB SSD. A few weeks after that I will order three large
> > disks for a RAID array. I understand that BTRFS RAID 5 support will be
> > available shortly. What is the best possible way for me to get the
> > highest performance out of this setup. I know of the option to optimize
> > for SSD's
>
> BTRFS is hardly the best option for SSDs. I typically use ext4 without a
> journal on SSDs, or ext2 if that is not available. Journalling causes
> more writes to hit the disk, which wears out flash faster. Plus, SSDs
> typically have much slower writes than reads, so avoiding writes is a
> good thing. AFAIK there is no way to disable journaling on BTRFS.
My write speed is similar to my read speed (OCZ Vertex 128GB), and the
drive comes with a warranty that will outlast its useful life. Using up
flash cycles is not an issue for me.
> > but won't that affect all the drives in the array, not to
> > mention having the SSD in the raid array will make the usable size much
> > smaller as RAID 5 goes by the smallest disk.
>
> If you are talking about BTRFS' parity RAID implementation, it is hard
> to comment in any way on it before it has actually been implemented.
> Especially if you are looking for something stable for production use,
> you should probably avoid features that immature.
I would take daily images until I felt it was stable. I spoke to
`cmason', who has now finished fsck and is working fully on RAID 5.
> > Is there a way to use it as
> > a cache that works even on power down?
>
> You want to use the SSD as a _write_ cache? That doesn't sound
> sensible at all.
As previously stated, wear is not an issue.
> What you are looking for is hierarchical/tiered storage. I am not aware
> of the existence of such a thing for Linux. BTRFS has no feature for it. You
> might be able to cobble together a solution that uses aufs or mhddfs (both
> fuse based) with some cron jobs to shift most recently used files to
> your SSD, but the fuse overheads will probably limit the usefulness of
> this approach.
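Something along these lines is what I was picturing for the cron job; a
completely untested sketch, where /mnt/raid, /mnt/ssd-cache and all the
thresholds are invented:

  #!/usr/bin/env python3
  # Sketch: promote recently-read files from the RAID array onto an SSD
  # cache directory that aufs/mhddfs then overlays on top of the array.
  # Relies on atime, so the array must not be mounted with noatime.
  import os, shutil, time

  RAID = "/mnt/raid"          # slow tier (assumed mount point)
  CACHE = "/mnt/ssd-cache"    # fast tier (assumed mount point)
  MAX_AGE = 7 * 86400         # only promote files read in the last week
  CACHE_LIMIT = 80 * 2**30    # keep roughly 80GB of cached copies

  def recently_read(root, cutoff):
      """Yield (atime, path) for regular files read since `cutoff`."""
      for dirpath, _dirs, files in os.walk(root):
          for name in files:
              path = os.path.join(dirpath, name)
              st = os.stat(path)
              if st.st_atime >= cutoff:
                  yield st.st_atime, path

  used = sum(os.path.getsize(os.path.join(d, f))
             for d, _subdirs, fs in os.walk(CACHE) for f in fs)

  # The most recently read files get first claim on the space.
  for _atime, src in sorted(recently_read(RAID, time.time() - MAX_AGE),
                            reverse=True):
      size = os.path.getsize(src)
      dst = os.path.join(CACHE, os.path.relpath(src, RAID))
      if used + size > CACHE_LIMIT or os.path.exists(dst):
          continue
      os.makedirs(os.path.dirname(dst), exist_ok=True)
      shutil.copy2(src, dst)
      used += size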
>
> > My current plan is to have
> > the /tmp directory in RAM on tmpfs
>
> Ideally, quite a lot should really be on tmpfs, in addition to /tmp and
> /var/tmp.
> Have a look at my patches here:
> https://bugzilla.redhat.com/show_bug.cgi?id=223722
>
> My motivation for this was mainly to improve performance on slow flash
> (when running off a USB stick or an SD card), but it also removes the
> most write-heavy things off the flash and into RAM. Less flash wear and
> more speed.
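For anyone following along, moving /tmp and /var/tmp onto tmpfs is just
a couple of lines in /etc/fstab; the size caps here are arbitrary
examples, not recommendations:

  tmpfs  /tmp      tmpfs  defaults,noatime,mode=1777,size=2g  0  0
  tmpfs  /var/tmp  tmpfs  defaults,noatime,mode=1777,size=1g  0  0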
>
> If you are putting a lot onto tmpfs, you may also want to look at the
> compcache project which provides a compressed swap RAM disk. Much faster
> than actual swap - to the point where it actually makes swapping feasible.
>
> > the /boot directory on a dedicated
> > partition on the SSD along with a 12GB swap partition also on the SSD
> > with the rest of the space (on the SSD) available as a cache.
>
> Swap on SSD is generally a bad idea. If your machine starts swapping
> it'll grind to a halt anyway, regardless of whether it's swapping to
> SSD, and heavy swapping to SSD will just kill the flash prematurely.
>
> > The three
> > mechanical hard drives will be on a RAID 5 array using BTRFS. Can anyone
> > suggest any improvements to my plan and also how to implement the cache?
>
> A very "soft" solution using aufs and cron jobs for moving things with
> the most recent atime to the SSD is probably as good as it's going to
> get at the moment, but bear in mind that fuse overheads will probably
> offset any performance benefit you gain from the SSD. You could get
> clever and, instead of just using atime, set up inotify logging and put
> the most frequently (as opposed to most recently) accessed files onto
> your SSD. This would, in theory, give you more benefit. You also have to
> bear in mind that the most frequently accessed files will be cached in
> RAM anyway, so your pre-caching onto SSD is only really going to be
> relevant when your working set size is considerably bigger than your RAM
> - at which point your performance is going to take a significant
> nosedive anyway (especially if you then hit a fuse file system).
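If I go the inotify route, I imagine feeding inotifywait (from
inotify-tools) into a small counter would do; a rough, untested sketch,
with invented paths and thresholds:

  #!/usr/bin/env python3
  # Sketch: count accesses reported by inotifywait and periodically copy
  # the most frequently read files onto the SSD cache. Run as e.g.:
  #   inotifywait -m -r -e access --format '%w%f' /mnt/raid | python3 hot.py
  # All paths and numbers here are illustrative.
  import os, shutil, sys, time
  from collections import Counter

  RAID = "/mnt/raid"        # assumed slow tier
  CACHE = "/mnt/ssd-cache"  # assumed fast tier
  TOP_N = 200               # how many hot files to keep cached
  FLUSH_EVERY = 3600        # promote the winners once an hour

  counts = Counter()
  last_flush = time.time()

  for line in sys.stdin:
      path = line.rstrip("\n")
      if os.path.isfile(path):
          counts[path] += 1
      if time.time() - last_flush >= FLUSH_EVERY:
          for hot, _n in counts.most_common(TOP_N):
              dst = os.path.join(CACHE, os.path.relpath(hot, RAID))
              if not os.path.exists(dst):
                  os.makedirs(os.path.dirname(dst), exist_ok=True)
                  shutil.copy2(hot, dst)
          counts.clear()
          last_flush = time.time()

As you say, anything hot enough to win that contest is probably in the
page cache already, so I would only expect this to pay off once the
working set outgrows RAM.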
>
> In either case, you should not put the frequently written files onto
> flash (recent mtime).
>
> Also note that RAID5 is potentially very slow on writes, especially
> small writes. It is also unsuitable for arrays over about 4TB (usable)
> in size for disk reliability reasons.
>
> Gordan
So, no-one has any ideas on how to implement the cache. Would making it
all swap work? Does the OS cache files in swap?