public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Stan Hoeppner <stan@hardwarefreak.com>
Cc: xfs@oss.sgi.com
Subject: Re: gather write metrics on multiple files
Date: Fri, 10 Oct 2014 08:13:39 +1100	[thread overview]
Message-ID: <20141009211339.GD4376@dastard> (raw)
In-Reply-To: <54361C04.5090404@hardwarefreak.com>

On Thu, Oct 09, 2014 at 12:24:20AM -0500, Stan Hoeppner wrote:
> On 10/08/2014 11:49 PM, Joe Landman wrote:
> > On 10/09/2014 12:40 AM, Stan Hoeppner wrote:
> >> Does anyone know of a utility that can track writes to files in
> >> an XFS directory tree, or filesystem wide for that matter, and
> >> gather filesystem blocks written per second data, or simply
> >> KiB/s, etc?  I need to analyze an application's actual IO
> >> behavior to see if it matches what I'm being told the
> >> application is supposed to be doing.
> >>
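Lacking a purpose-built tool, one crude way to get that per-file view is to poll file sizes over the tree and difference them (a rough sketch, not from this thread; `snapshot` and `sample` are hypothetical names, and it only sees growth from appends, not in-place overwrites):

```python
#!/usr/bin/env python3
# Crude per-file write-rate sampler: polls st_size for every regular
# file under a directory tree and prints KiB/s per file over one
# interval. Misses in-place overwrites and truncations, so treat the
# numbers as a lower-bound estimate of append traffic.
import os
import sys
import time

def snapshot(root):
    """Return {path: size_in_bytes} for every file under root."""
    sizes = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                sizes[path] = os.stat(path).st_size
            except OSError:
                pass  # file vanished between walk and stat
    return sizes

def sample(root, interval=5.0):
    """Print per-file growth rate in KiB/s over one polling interval."""
    before = snapshot(root)
    time.sleep(interval)
    after = snapshot(root)
    for path, new_size in sorted(after.items()):
        delta = new_size - before.get(path, new_size)
        if delta > 0:
            print(f"{delta / 1024 / interval:8.1f} KiB/s  {path}")

if __name__ == "__main__" and len(sys.argv) > 1:
    sample(sys.argv[1])
```

For block-level rather than file-level numbers, collectl/iostat on /proc/diskstats (as mentioned below) is the more conventional route.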
> > 
> > We've written a few for this purpose (local IO probing).
> > 
> > Start with collectl (looks at /proc/diskstats), and others.  Our
> > tools go to /proc/diskstats, and use this to compute BW and IOPs
> > per device.
> > 
> > If you need to log it for a long time, set up a time series
> > database (we use influxdb and the graphite plugin).  Then grab
> > your favorite metrics tool that talks to graphite/influxdb (I
> > like https://github.com/joelandman/sios-metrics for obvious
> > reasons), and start collecting data.
> 
> I'm told we have 800 threads writing to nearly as many files
> concurrently on a single XFS on a 12+2 spindle RAID6 LUN.
> Achieved data rate is currently ~300 MiB/s.  Some of these
> files are supposedly being written at a rate of only 32 KiB every
> 2-3 seconds, while some (two) are at ~50 MiB/s.  I need to determine
> how many bytes we're writing to each of the low rate files, and
> how many files, to figure out RMW mitigation strategies.  Out of
> the apparent 800 streams 700 are these low data rate suckers, one
> stream writing per file.  
> 
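As a quick back-of-envelope check on that mix (an illustrative calculation, taking 2.5 s as the midpoint of the stated 2-3 s interval):

```python
# Back-of-envelope check on the workload mix described above.
KIB = 1024
MIB = 1024 * KIB

slow_streams = 700
slow_rate = 32 * KIB / 2.5            # 32 KiB every ~2-3 s, take 2.5 s
slow_total = slow_streams * slow_rate

fast_total = 2 * 50 * MIB             # the two ~50 MiB/s streams

print(f"slow streams: {slow_total / MIB:.2f} MiB/s aggregate")
print(f"fast streams: {fast_total / MIB:.0f} MiB/s aggregate")
# The 700 slow writers contribute under 10 MiB/s of the ~300 MiB/s
# total; presumably the remaining ~100 streams account for the rest.
# Almost all the bandwidth, and none of the RMW pain, is elsewhere.
```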
> Nary a stock RAID controller is going to be able to assemble full
> stripes out of these small slow writes.  With a 768 KiB stripe
> that's what, 24 IOs, so 48 seconds to fill it at 2 seconds per 32
> KiB IO?
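Working that stripe-fill time through (an illustrative check, using the figures quoted above):

```python
# How long one slow stream takes to produce a full stripe of data.
KIB = 1024
stripe = 768 * KIB
io_size = 32 * KIB
ios_per_stripe = stripe // io_size            # 24 writes per stripe
fill_time = [ios_per_stripe * t for t in (2, 3)]
print(f"{ios_per_stripe} IOs -> {fill_time[0]}-{fill_time[1]} s "
      f"to fill one stripe at 2-3 s per 32 KiB IO")
```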

RAID controllers don't typically have the resources to track
hundreds of separate write streams at a time. Most don't have the
memory available to cache that many active streams, and those
that do probably can't prioritise writeback sanely given how slowly
most cache lines would be touched. The fast writers would simply
turn over the slower writers' cache space way too quickly.

Perhaps you need to change the application to make the slow writers
buffer stripe sized writes in memory and flush them 768k at a
time...
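A minimal sketch of that idea (a hypothetical `StripeBufferedWriter` class, not anything from the thread): each slow stream accumulates its 32 KiB records in memory and only issues a write once a full 768 KiB stripe is buffered.

```python
import io

STRIPE = 768 * 1024  # full-stripe size for the 12+2 RAID6 layout above

class StripeBufferedWriter:
    """Accumulate small appends in memory and flush only in
    stripe-sized chunks, so the RAID controller sees full-stripe
    writes instead of a trickle of 32 KiB read-modify-writes."""

    def __init__(self, fileobj, stripe_size=STRIPE):
        self.fileobj = fileobj
        self.stripe_size = stripe_size
        self.buf = bytearray()

    def write(self, data):
        self.buf += data
        # Flush only whole stripes; keep any remainder buffered.
        while len(self.buf) >= self.stripe_size:
            self.fileobj.write(bytes(self.buf[:self.stripe_size]))
            del self.buf[:self.stripe_size]

    def close(self):
        if self.buf:  # final partial stripe on shutdown
            self.fileobj.write(bytes(self.buf))
            self.buf.clear()
        self.fileobj.flush()
```

In practice you'd want the backing file opened with O_DIRECT and the writes stripe-aligned so the full-stripe write actually bypasses the controller's RMW cycle rather than just landing in its cache.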

> I've been playing with bcache for a few days but it actually drops
> throughput by about 30% no matter how I turn its knobs.  Unless I
> can get Kent to respond to some of my questions bcache will be a
> dead end.  I had high hopes for it, thinking it would turn these
> small random IOs into larger sequential writes.  It may actually
> be doing so, but it's doing something else too, and badly.  IO
> times go through the roof once bcache starts gobbling IOs, and
> throughput to the LUNs drops significantly even though bcache is
> writing 50-100 MiB/s to the SSD.  Not sure what's causing that.

Have you tried dm-cache?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 11+ messages
2014-10-09  4:40 gather write metrics on multiple files Stan Hoeppner
2014-10-09  4:49 ` Joe Landman
2014-10-09  5:24   ` Stan Hoeppner
2014-10-09 21:13     ` Dave Chinner [this message]
2014-10-09 22:30       ` Stan Hoeppner
2014-10-18  6:03       ` Stan Hoeppner
2014-10-18 18:16         ` Stan Hoeppner
2014-10-19 22:24           ` Dave Chinner
2014-10-21 23:56             ` Stan Hoeppner
2014-10-25  2:28               ` Stan Hoeppner
2014-10-09 21:07 ` Dave Chinner
