From: Austin S Hemmelgarn <ahferroin7@gmail.com>
To: "P. Remek" <p.remek1@googlemail.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: btrfs performance - ssd array
Date: Mon, 12 Jan 2015 11:43:38 -0500
Message-ID: <54B3F9BA.3060503@gmail.com>
In-Reply-To: <CABdHLQ6b_PpaMK7n+NhU5qV9UvtanqJcpn8y1vpM84h9RGSXDQ@mail.gmail.com>


On 2015-01-12 10:35, P. Remek wrote:
>> Another thing to consider is that the kernel's default I/O scheduler
>> and the default parameters for that I/O scheduler are almost always
>> suboptimal for SSDs, and this tends to show far more with BTRFS than
>> anything else.  Personally I've found that using the CFQ I/O scheduler
>> with the following parameters works best for a majority of SSDs:
>> 1. slice_idle=0
>> 2. back_seek_penalty=1
>> 3. back_seek_max set equal to the size in sectors of the device
>> 4. nr_requests and quantum set to the hardware command queue depth
>
> I will give these suggestions a try, but I don't expect any big gain.
> Notice that the difference between EXT4 and BTRFS random write is
> massive - it's 200,000 IOPS vs. 15,000 IOPS, and the device and kernel
> parameters are exactly the same (it is the same machine) for both test
> scenarios. That suggests something in the Btrfs implementation is
> dragging write performance down.
>
> Notice also that we did some performance tuning (queue scheduling set
> to noop, IRQ affinity distribution and pinning to specific NUMA nodes
> and cores, etc.)
>
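
For what it's worth, pushing those four settings into sysfs could look 
roughly like the (untested) sketch below.  It assumes a SATA SSD showing 
up as /dev/sda and the stock CFQ sysfs layout; the device name is just a 
placeholder, so adjust it to whatever you actually have, and run it as 
root.

#!/usr/bin/env python3
# Untested sketch: apply the CFQ tunables listed above through sysfs.
# "sda" is a hypothetical device name; run as root.
import os

DEV = "sda"
QUEUE = "/sys/block/%s/queue" % DEV

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

def write(path, value):
    with open(path, "w") as f:
        f.write(str(value))

# Device size in 512-byte sectors, used for back_seek_max.
sectors = read_int("/sys/block/%s/size" % DEV)
# Hardware command queue depth (exposed for SCSI/SATA devices).
qdepth = read_int("/sys/block/%s/device/queue_depth" % DEV)

write(os.path.join(QUEUE, "scheduler"), "cfq")
write(os.path.join(QUEUE, "iosched/slice_idle"), 0)
write(os.path.join(QUEUE, "iosched/back_seek_penalty"), 1)
write(os.path.join(QUEUE, "iosched/back_seek_max"), sectors)
write(os.path.join(QUEUE, "iosched/quantum"), qdepth)
write(os.path.join(QUEUE, "nr_requests"), qdepth)
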
The stuff about the I/O scheduler is more general advice for dealing 
with SSDs than anything BTRFS-specific.  I've found, though, that at 
least on SATA-connected SSDs (I don't have anywhere near the budget 
needed for SAS disks, let alone SAS SSDs), the noop I/O scheduler gets 
better performance on small bursts, but causes horrible latency spikes 
whenever something needs bulk throughput with random writes (rsync 
being an excellent example of this).
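
If you want to see those spikes rather than just an averaged IOPS 
number, recording per-write latency during a sustained random-write run 
makes them obvious.  A minimal sketch follows; the file name and sizes 
are arbitrary, and it only approximates what a real benchmarking tool 
measures.

#!/usr/bin/env python3
# Minimal sketch: sustained 4 KiB random writes with per-write latency,
# so scheduler-induced stalls show up as outliers instead of being
# hidden in an average.  "scratch.bin" is an arbitrary test file on
# the filesystem under test.
import os, random, time

PATH = "scratch.bin"
SIZE = 1 << 30          # 1 GiB working set
BLOCK = 4096
WRITES = 20000

buf = os.urandom(BLOCK)
fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o644)
os.ftruncate(fd, SIZE)

lat = []
for _ in range(WRITES):
    off = random.randrange(SIZE // BLOCK) * BLOCK
    t0 = time.time()
    os.pwrite(fd, buf, off)
    os.fsync(fd)        # force it out so queueing behaviour is visible
    lat.append(time.time() - t0)
os.close(fd)

lat.sort()
print("p50 %.3f ms  p99 %.3f ms  max %.3f ms" % (
    lat[len(lat) // 2] * 1e3,
    lat[int(len(lat) * 0.99)] * 1e3,
    lat[-1] * 1e3))
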

Something else I thought of after my initial reply: due to the COW 
nature of BTRFS, you will generally get better metadata performance 
with shallower directory structures (largely because mtime updates 
propagate up the directory tree to the root of the filesystem).
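
If anyone wants to sanity-check that on their own array, timing the 
same amount of small-file metadata work at different directory depths 
is easy enough.  A rough sketch, with the directory names and counts 
picked arbitrarily:

#!/usr/bin/env python3
# Rough sketch: time identical small-file creation workloads in a
# shallow directory versus a deeply nested one on the same filesystem.
# Directory names and counts are arbitrary illustration values.
import os, time

def timed_creates(root, depth, count):
    path = root
    for d in range(depth):
        path = os.path.join(path, "d%02d" % d)
    os.makedirs(path, exist_ok=True)
    start = time.time()
    for i in range(count):
        with open(os.path.join(path, "f%05d" % i), "w") as f:
            f.write("x")
    return time.time() - start

shallow = timed_creates("shallow_test", depth=1, count=5000)
deep = timed_creates("deep_test", depth=20, count=5000)
print("shallow: %.2fs  deep: %.2fs" % (shallow, deep))
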


