From: Dave Chinner <david@fromorbit.com>
To: Davidlohr Bueso <davidlohr@hp.com>
Cc: linux-kernel@vger.kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	Tim Chen <tim.c.chen@linux.intel.com>,
	Ingo Molnar <mingo@kernel.org>
Subject: Re: [regression, 3.16-rc] rwsem: optimistic spinning causing performance degradation
Date: Thu, 3 Jul 2014 14:59:33 +1000	[thread overview]
Message-ID: <20140703045933.GP4453@dastard> (raw)
In-Reply-To: <1404358268.23839.13.camel@buesod1.americas.hpqcorp.net>

On Wed, Jul 02, 2014 at 08:31:08PM -0700, Davidlohr Bueso wrote:
> On Thu, 2014-07-03 at 12:32 +1000, Dave Chinner wrote:
> > Hi folks,
> > 
> > I've got a workload that hammers the mmap_sem via multi-threaded
> > memory allocation and page faults: it's called xfs_repair.
> 
> Another reason for concurrent address space operations :/

*nod*

> >         XFS_REPAIR Summary    Thu Jul  3 11:41:28 2014
> > 
> > Phase           Start           End             Duration
> > Phase 1:        07/03 11:40:27  07/03 11:40:27  
> > Phase 2:        07/03 11:40:27  07/03 11:40:36  9 seconds
> > Phase 3:        07/03 11:40:36  07/03 11:41:12  36 seconds
> 
> The *real* degradation is here then.

Yes, as I said, it's in phases 2 and 3 where all the IO and memory
allocation is done.

> > This is what the kernel profile looks like on the strided run:
> > 
> > -  83.06%  [kernel]  [k] osq_lock
> >    - osq_lock
> >       - 100.00% rwsem_down_write_failed
> >          - call_rwsem_down_write_failed
> >             - 99.55% sys_mprotect
> >                  tracesys
> >                  __GI___mprotect
> > -  12.02%  [kernel]  [k] rwsem_down_write_failed
> >      rwsem_down_write_failed
> >      call_rwsem_down_write_failed
> > +   1.09%  [kernel]  [k] _raw_spin_unlock_irqrestore
> > +   0.92%  [kernel]  [k] _raw_spin_unlock_irq
> > +   0.68%  [kernel]  [k] __do_softirq
> > +   0.33%  [kernel]  [k] default_send_IPI_mask_sequence_phys
> > +   0.10%  [kernel]  [k] __do_page_fault
> > 
> > Basically, all the kernel time is spent processing lock contention
> > rather than doing real work.
> 
> While before it just blocked.

Yup, pretty much - there was contention on the rwsem internal
spinlock, but nothing else burnt CPU time.

> > I haven't tested back on 3.15 yet, but historically the strided AG
> > repair for such filesystems (which I test all the time on 100+500TB
> > filesystem images) is about 20-25% faster on this storage subsystem.
> > Yes, the old code also burnt a lot of CPU due to lock contention,
> > but it didn't go 5x slower than having no threading at all.
> > 
> > So this looks like a significant performance regression due to:
> > 
> > 4fc828e locking/rwsem: Support optimistic spinning
> > 
> > which changed the rwsem behaviour in 3.16-rc1.
> 
> So the mmap_sem is held long enough in this workload that the cost of
> blocking is actually significantly smaller than just spinning --

The issue is that the memory allocation pattern alternates between
read and write locks. i.e. a write lock on mprotect at allocation,
a read lock on page fault when processing the contents. Hence we
have a pretty consistent pattern of allocation (and hence mprotect)
in the prefetch threads, while the page faults are in the
processing threads.
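
To make that concrete, here's a minimal userspace sketch of the
pattern - hypothetical, not xfs_repair's actual code - where one
thread calls mprotect() (which takes the mmap_sem for write in the
kernel) while other threads fault the pages in (which takes the
mmap_sem for read):

/*
 * Minimal sketch (hypothetical, not xfs_repair's actual code) of
 * the lock pattern: each mprotect() call takes mmap_sem for write,
 * while each page fault takes mmap_sem for read.  Error handling
 * elided for brevity.
 */
#include <pthread.h>
#include <stddef.h>
#include <sys/mman.h>

#define LEN (16UL * 1024 * 1024)

static char *buf;

/* "prefetch" side: allocation via mprotect -> mmap_sem write lock */
static void *protect_loop(void *arg)
{
    for (int i = 0; i < 1000; i++) {
        mprotect(buf, LEN, PROT_READ | PROT_WRITE);
        mprotect(buf, LEN, PROT_READ);
    }
    return NULL;
}

/* "processing" side: faulting pages in -> mmap_sem read lock */
static void *fault_loop(void *arg)
{
    volatile char sum = 0;

    for (int i = 0; i < 1000; i++)
        for (size_t off = 0; off < LEN; off += 4096)
            sum += buf[off];
    return NULL;
}

int main(void)
{
    pthread_t writer, readers[4];

    buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    pthread_create(&writer, NULL, protect_loop, NULL);
    for (int i = 0; i < 4; i++)
        pthread_create(&readers[i], NULL, fault_loop, NULL);

    pthread_join(writer, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(readers[i], NULL);
    return 0;
}

That write/read alternation on the same rwsem is what the osq_lock
time in the profile above is hammering on.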

> particularly MCS. How many threads are you running when you see the
> issue?

Lots. In phase 3:

$ ps -eLF |grep [x]fs_repair | wc -l
140
$

We use 6 threads per AG being processed:

	- 4 metadata prefetch threads (do allocation and IO),
	- 1 prefetch control thread
	- 1 metadata processing thread (do page faults)

That works out about right - the default is to create a new
processing group every 15 AGs, so with 336 AGs we should have
roughly 22 AGs being processed concurrently...
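
(Quick arithmetic: 336 AGs / 15 AGs per group ~= 22 concurrent AGs,
and 22 * 6 threads per AG = 132 threads, which is right in the
ballpark of the 140 that ps reports above.)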

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


Thread overview: 21+ messages
2014-07-03  2:32 [regression, 3.16-rc] rwsem: optimistic spinning causing performance degradation Dave Chinner
2014-07-03  3:31 ` Davidlohr Bueso
2014-07-03  4:59   ` Dave Chinner [this message]
2014-07-03  5:39     ` Dave Chinner
2014-07-03  7:38       ` Peter Zijlstra
2014-07-03  7:56         ` Dave Chinner
     [not found] <1404413420.8764.42.camel@j-VirtualBox>
     [not found] ` <1404416236.3179.18.camel@buesod1.americas.hpqcorp.net>
2014-07-03 20:08   ` Davidlohr Bueso
2014-07-04  1:01 ` Dave Chinner
2014-07-04  1:46   ` Jason Low
2014-07-04  1:54     ` Jason Low
2014-07-04  6:13       ` Dave Chinner
2014-07-04  7:06         ` Jason Low
2014-07-04  8:21           ` Dave Chinner
2014-07-04  8:53           ` Davidlohr Bueso
2014-07-05  3:14             ` Jason Low
2014-07-04  7:52       ` Peter Zijlstra
2014-07-04  8:40         ` Davidlohr Bueso
2014-07-05  3:49           ` Jason Low
     [not found]             ` <CAAW_DMjgd5+EOvZX7_iZe-jHp=00Nf7MX3z6hBCRPgOfqnMtEA@mail.gmail.com>
2014-07-14  9:55               ` Peter Zijlstra
2014-07-14 17:10                 ` Jason Low
2014-07-15  2:17                 ` Dave Chinner
