From: Chris Mason <chris.mason@oracle.com>
To: Josef Bacik <josef@redhat.com>
Cc: srimugunthan dhandapani <muggy.mit@gmail.com>,
linux-btrfs@vger.kernel.org
Subject: Re: ssd optimised mode
Date: Fri, 20 Feb 2009 11:30:25 -0500 [thread overview]
Message-ID: <1235147425.13249.16.camel@think.oraclecorp.com> (raw)
In-Reply-To: <20090220160134.GE24890@unused.rdu.redhat.com>
On Fri, 2009-02-20 at 11:01 -0500, Josef Bacik wrote:
> On Fri, Feb 20, 2009 at 04:56:55PM +0530, srimugunthan dhandapani wrote:
> > Hi all,
> > I would like to know what the SSD-specific optimisations in btrfs are.
> > I read from the archives that
> >
> > "mount -o ssd option, which clusters file data writes together regardless of
> > the directory the files belong to. There are a number of other performance
> > tweaks for SSD, aimed at clustering metadata and data writes to better take
> > advantage of the hardware"
> > I have also read that there are some allocator changes specific to SSDs,
> > and I would like to know what these changes are.
> >
> > I tried reading the v0.17 code to understand the SSD-specific changes.
> > I saw that the SSD option is used in two places: in the function
> > find_free_extent and in the function btrfs_defrag_leaves, but I couldn't
> > gather much from the code alone.
> > Could somebody help me understand these optimisations and changes in
> > btrfs? Is any documentation or reading material available?
> > Thanks in advance for helping,
>
> So really the only thing that changes with SSD is that we keep track of the
> last place we allocated data, and start looking from there again next time.
> Since there is no seek penalty we can just be dumb in the allocator and use
> up space sequentially, regardless of the file, instead of trying to group
> allocations by file/dir. Thanks,
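[Editor's note] A minimal, self-contained sketch of the idea Josef describes: remember where the last allocation ended and resume the search there (a classic "next-fit" cursor). This is not the real btrfs code; the structure and names below are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

#define NBLOCKS 64

/* Toy free-space map: one flag per block, plus a cursor that remembers
 * where the previous allocation ended. */
struct toy_space {
    unsigned char used[NBLOCKS];
    size_t last_alloc;
};

/* Next-fit allocation: resume the scan at the cursor instead of
 * restarting from block 0, so writes from different files still land
 * one after another on the device. Returns a block number, or -1 if
 * the space is full. */
static long toy_alloc(struct toy_space *s)
{
    for (size_t i = 0; i < NBLOCKS; i++) {
        size_t blk = (s->last_alloc + i) % NBLOCKS;
        if (!s->used[blk]) {
            s->used[blk] = 1;
            s->last_alloc = blk + 1; /* keep moving forward */
            return (long)blk;
        }
    }
    return -1;
}
```

On rotating media an allocator would instead bias the search toward blocks near the rest of the file to avoid seeks; with no seek penalty, the dumb forward cursor is enough.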

We already do this with metadata, but ssd mode makes the metadata
cluster that we try to allocate larger. The change was made based on
benchmarks against an SSD that is bad at random writes, and it made a
significant difference for metadata write performance.
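[Editor's note] Roughly speaking, the effect Chris describes is just a larger allocation target when the ssd mount option is set. The sizes below are invented for illustration, not the constants btrfs actually uses:

```c
#include <assert.h>

/* Hypothetical sketch of a mount-option-dependent cluster target.
 * Batching metadata into bigger clusters turns many small random
 * writes into fewer, larger sequential ones, which is what a drive
 * that is bad at random writes wants. */
static unsigned long metadata_cluster_bytes(int ssd_mode)
{
    const unsigned long normal_cluster = 256UL * 1024;      /* 256 KiB */
    const unsigned long ssd_cluster    = 2UL * 1024 * 1024; /* 2 MiB   */

    return ssd_mode ? ssd_cluster : normal_cluster;
}
```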
The short answer is that in ssd mode we don't try to avoid random reads.
-chris
Thread overview: 24+ messages
2009-02-20 11:26 ssd optimised mode srimugunthan dhandapani
2009-02-20 16:01 ` Josef Bacik
2009-02-20 16:30 ` Chris Mason [this message]
2009-02-22 1:07 ` Dmitri Nikulin
2009-02-22 17:44 ` Steven Pratt
2009-02-23 1:06 ` Dmitri Nikulin
2009-02-23 1:22 ` Dongjun Shin
2009-02-23 2:33 ` Dmitri Nikulin
2009-02-23 3:15 ` Dongjun Shin
2009-02-23 3:17 ` Seth Huang
2009-02-23 4:01 ` Dmitri Nikulin
2009-02-23 9:31 ` Oliver Mattos
2009-02-23 16:40 ` Martin K. Petersen
2009-02-23 16:48 ` Claudio Martins
2009-02-23 17:23 ` Martin K. Petersen
2009-02-23 14:33 ` Chris Mason
2009-02-24 0:16 ` Dmitri Nikulin
2009-02-24 0:35 ` Dongjun Shin
2009-02-24 2:32 ` Martin K. Petersen
2009-02-24 3:53 ` Dmitri Nikulin
2009-02-24 4:09 ` Dongjun Shin
2009-02-24 4:10 ` Martin K. Petersen
2009-02-24 4:23 ` Dmitri Nikulin
2009-02-23 22:19 ` Wes Felter