public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Shrinand Javadekar <shrinand@maginatics.com>
Cc: xfs@oss.sgi.com
Subject: Re: XFS Syncd
Date: Fri, 5 Jun 2015 08:08:33 +1000	[thread overview]
Message-ID: <20150604220833.GU24666@dastard> (raw)
In-Reply-To: <CABppvi6dbiOVDJq+gVMOMNhMZ5Lf2kEvtiQdOzxfhq9fLDrPVQ@mail.gmail.com>

On Thu, Jun 04, 2015 at 12:26:19AM -0700, Shrinand Javadekar wrote:
> I made two changes based on the suggestions above:
> 
> 1. Reverted the agcount back to the default: 4.
> 2. Bumped the directory block size to 8k (-n size=8k)
> 
> This definitely has made things better. My throughput for one run of
> my 40GB (5GB on each disk) test has gone up from ~70MB/s to 88MB/s.
> The pauses started off being very small: ~1 sec. Right now, with 20GB
> data in each disk, I see the pauses are ~4 seconds.
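[Editor's note: for readers reproducing this, the two quoted changes map
onto a mkfs.xfs invocation roughly like the one below. This is a sketch
only; the device path is a placeholder, since the thread never names the
disks.]

```shell
# Hypothetical device path; the actual disks are not named in the thread.
DEV=/dev/sdX

# agcount is simply left at its default (4 for a single-disk
# filesystem), while -n size=8k raises the directory block size
# from the 4k default to 8k. -f forces overwrite of any existing
# filesystem signature.
mkfs.xfs -f -n size=8k "$DEV"
```

mkfs.xfs echoes the resulting geometry on success; the "naming" line
should report version 2 with bsize=8192.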
> 
> I ran echo w > /proc/sysrq-trigger as soon as the system went into one
> of these pauses. Attached here is the output of dmesg after that. I'm
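[Editor's note: the capture sequence quoted above is, in full, roughly
the following. It requires root, and assumes the magic-sysrq interface
is available on the kernel in question.]

```shell
# Allow all sysrq functions (see Documentation/sysrq.txt for the
# bitmask semantics; 1 enables everything).
echo 1 > /proc/sys/kernel/sysrq

# 'w' dumps stack traces of tasks in uninterruptible (D) state
# into the kernel log.
echo w > /proc/sysrq-trigger

# Capture the log straight away, before the ring buffer wraps.
dmesg > sysrq-w-dump.txt
```

If nothing happens to be blocked at that instant, little beyond per-CPU
scheduler state is logged, so the trigger has to land inside one of the
pauses to be useful.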

Ok, it didn't catch any blocked tasks, just dumped scheduler info for
each CPU. But the fact that the changes had a positive impact means we
are probably on the right track.

> going to run a test overnight to see how it performs. Especially, how
> big do the pauses get as more and more data is written into the
> system.
> 
> Also, unfortunately, I don't have a kernel dev setup ready to try out
> the patch immediately. I will try and setup the environment to try it
> out.

Ok, I'll be doing more testing on it here, but it would be great if
you could see what difference it makes and report back. No hurry;
such a change is probably too late for the next merge window, so
there's plenty of time to get it right...

Thanks for all the time you've spent triaging this problem so far!

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 21+ messages

2015-04-10  4:23 XFS Syncd Shrinand Javadekar
2015-04-10  6:32 ` Dave Chinner
2015-04-10  6:51   ` Shrinand Javadekar
2015-04-10  7:21     ` Dave Chinner
2015-04-10  7:29       ` Shrinand Javadekar
2015-04-10 13:12         ` Dave Chinner
2015-06-02 18:43           ` Shrinand Javadekar
2015-06-03  3:57             ` Dave Chinner
2015-06-03 23:18               ` Shrinand Javadekar
2015-06-04  0:35                 ` Dave Chinner
2015-06-04  0:58                   ` Shrinand Javadekar
2015-06-04  1:55                     ` Dave Chinner
2015-06-04  1:25                   ` Dave Chinner
2015-06-04  2:03                     ` Dave Chinner
2015-06-04  6:23                       ` Dave Chinner
2015-06-04  7:26                         ` Shrinand Javadekar
2015-06-04 22:08                           ` Dave Chinner [this message]
2015-06-05  0:59                         ` Shrinand Javadekar
2015-06-05 17:31                           ` Shrinand Javadekar
2015-06-08 21:56                             ` Shrinand Javadekar
2015-06-09 23:12                               ` Dave Chinner
