public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Brian Foster <bfoster@redhat.com>
Cc: xfs@oss.sgi.com
Subject: Re: [PATCH 2/5] repair: per AG locks contend for cachelines
Date: Fri, 13 Dec 2013 07:46:23 +1100	[thread overview]
Message-ID: <20131212204623.GY10988@dastard> (raw)
In-Reply-To: <52AA0773.1000506@redhat.com>

On Thu, Dec 12, 2013 at 01:58:59PM -0500, Brian Foster wrote:
> On 12/12/2013 02:22 AM, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@redhat.com>
> > 
> > The per-ag locks used to protect per-ag block lists are located in a tightly
> > packed array. That means that they share cachelines, so separate them out into
> > separate 64 byte regions in the array.
> > 
> > pahole confirms the padding is correctly applied:
> > 
> > struct aglock {
> >         pthread_mutex_t            lock;                 /*     0    40 */
> > 
> >         /* size: 64, cachelines: 1, members: 1 */
> >         /* padding: 24 */
> > };
> > 
> > Signed-off-by: Dave Chinner <dchinner@redhat.com>
> > ---
> 
> Out of curiosity, any data on this one?

There's a small improvement, but it's within the margin of error. At
higher levels of concurrency it will make a bigger difference, but
right now we can't get to a thread per AG (where the contention
would really show) because of other scalability limitations.

e.g. mmap_sem contention becomes a limiting factor at 20-25 threads
because page faults on hash cache lookups (read lock) occur in
parallel with memory allocation (libc mprotect calls take a write
lock). So more scalability will come from a more CPU-cache-friendly
cache index - I'm tempted to drop in per-AG rb-trees and see where
that leads (now where would I have got that idea?)....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 21+ messages
2013-12-12  7:22 [PATCH 0/5] xfs_repair: scalability improvements Dave Chinner
2013-12-12  7:22 ` [PATCH 1/5] repair: translation lookups limit scalability Dave Chinner
2013-12-12 18:26   ` Christoph Hellwig
2013-12-12 18:58   ` Brian Foster
2013-12-12  7:22 ` [PATCH 2/5] repair: per AG locks contend for cachelines Dave Chinner
2013-12-12 18:27   ` Christoph Hellwig
2013-12-12 18:58   ` Brian Foster
2013-12-12 20:46     ` Dave Chinner [this message]
2013-12-12  7:22 ` [PATCH 3/5] repair: phase 6 is trivially parallelisable Dave Chinner
2013-12-12 18:43   ` Christoph Hellwig
2013-12-12 20:53     ` Dave Chinner
2013-12-12 18:59   ` Brian Foster
2013-12-12  7:22 ` [PATCH 4/5] libxfs: buffer cache hashing is suboptimal Dave Chinner
2013-12-12 18:28   ` Christoph Hellwig
2013-12-12 18:59   ` Brian Foster
2013-12-12 20:56     ` Dave Chinner
2013-12-13 14:23       ` Brian Foster
2013-12-12  7:22 ` [PATCH 5/5] repair: limit auto-striding concurrency appropriately Dave Chinner
2013-12-12 18:29   ` Christoph Hellwig
2013-12-12 21:00     ` Dave Chinner
2013-12-12 18:59   ` Brian Foster
