From: Dave Chinner <david@fromorbit.com>
To: Jeff Mahoney <jeffm@suse.com>
Cc: Mark Fasheh <mfasheh@suse.de>,
Christoph Hellwig <hch@infradead.org>,
xfs@oss.sgi.com
Subject: Re: xfs-trace-ilock-more
Date: Thu, 15 Dec 2011 07:32:43 +1100
Message-ID: <20111214203243.GN3179@dastard>
In-Reply-To: <4EE8F7F0.7070207@suse.com>
On Wed, Dec 14, 2011 at 02:24:32PM -0500, Jeff Mahoney wrote:
>
> On 12/14/2011 01:27 PM, Mark Fasheh wrote:
> > Hey Christoph,
> >
> > On Tue, Dec 13, 2011 at 09:40:40PM -0500, Christoph Hellwig wrote:
> >> Can you explain the story behind this patch in SLES11SP1?
> >
> > We were looking at some performance issues and needed a bit more
> > information on the amount of time spent in ilock. I can give you
> > more specifics if you want, I just have to dig up the e-mails (it's
> > been a while).
>
> That's pretty much the explanation. With heavy reader load, buffered
> writes were stalling for 80 ms and sometimes longer. I suspected it
> was contention on the ilock and the tracing with that patch
> demonstrated a delay there. Since we were chasing a similar issue at
> another site, it seemed worthwhile to just keep it around. We're still
> tracking down the cause. I'm not sure if more recent kernels have the
> same issue as there's been quite a lot of churn.

I'm not surprised - there's nothing really guaranteeing bounded
shared vs exclusive access to the ilock. It's all down to the
reader/writer bias of the rwsem - readers will hold off the writer
for some time. Still, it would be nice to see a trace from such a
holdoff to confirm this is actually the case...

FWIW, if you have an app that requires concurrent, low latency reads
and writes to the same file, that's what XFS direct IO was designed
for - in most cases the iolock is taken in shared mode for both reads
and writes, and so such hold-offs don't generally happen...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 9+ messages
2011-12-14 2:40 xfs-trace-ilock-more Christoph Hellwig
2011-12-14 18:27 ` xfs-trace-ilock-more Mark Fasheh
2011-12-14 19:24 ` xfs-trace-ilock-more Jeff Mahoney
2011-12-14 20:32 ` Dave Chinner [this message]
2011-12-14 22:42 ` xfs-trace-ilock-more Jeff Mahoney
2011-12-18 20:26 ` xfs-trace-ilock-more Christoph Hellwig
2011-12-18 20:27 ` xfs-trace-ilock-more Christoph Hellwig
2012-01-05 22:38 ` xfs-trace-ilock-more Mark Fasheh
2012-01-05 23:54 ` xfs-trace-ilock-more Dave Chinner