From: Dave Chinner <david@fromorbit.com>
To: linux-kernel@vger.kernel.org
Cc: Davidlohr Bueso <davidlohr@hp.com>,
Peter Zijlstra <peterz@infradead.org>,
Tim Chen <tim.c.chen@linux.intel.com>,
Ingo Molnar <mingo@kernel.org>
Subject: [regression, 3.16-rc] rwsem: optimistic spinning causing performance degradation
Date: Thu, 3 Jul 2014 12:32:04 +1000
Message-ID: <20140703023204.GN4453@dastard>
Hi folks,
I've got a workload that hammers the mmap_sem via multi-threaded
memory allocation and page faults: it's called xfs_repair. I've been
debugging problems with the latest release, and in the process of
tracking down some recent regressions I noticed that turning off all
the cross-ag IO concurrency resulted in the repair running about
4-5x faster.
i.e.:
# time xfs_repair -f -vv fs.img
.....
        XFS_REPAIR Summary    Thu Jul  3 11:51:41 2014

Phase           Start           End             Duration
Phase 1:        07/03 11:47:15  07/03 11:47:15
Phase 2:        07/03 11:47:15  07/03 11:47:35  20 seconds
Phase 3:        07/03 11:47:35  07/03 11:51:26  3 minutes, 51 seconds
Phase 4:        07/03 11:51:26  07/03 11:51:31  5 seconds
Phase 5:        07/03 11:51:31  07/03 11:51:31
Phase 6:        07/03 11:51:31  07/03 11:51:39  8 seconds
Phase 7:        07/03 11:51:39  07/03 11:51:39

Total run time: 4 minutes, 24 seconds
done
real    4m26.399s
user    1m6.023s
sys     27m26.707s
$
And turning off AG striding:
# time xfs_repair -f -vv -o ag_stride=-1 fs.img
.....
        XFS_REPAIR Summary    Thu Jul  3 11:41:28 2014

Phase           Start           End             Duration
Phase 1:        07/03 11:40:27  07/03 11:40:27
Phase 2:        07/03 11:40:27  07/03 11:40:36  9 seconds
Phase 3:        07/03 11:40:36  07/03 11:41:12  36 seconds
Phase 4:        07/03 11:41:12  07/03 11:41:17  5 seconds
Phase 5:        07/03 11:41:17  07/03 11:41:18  1 second
Phase 6:        07/03 11:41:18  07/03 11:41:25  7 seconds
Phase 7:        07/03 11:41:25  07/03 11:41:25

Total run time: 58 seconds
done
real    0m59.893s
user    0m41.935s
sys     4m55.867s
$
The difference is in phase 2 and 3, which is where all the memory
allocation and IO that populates the userspace buffer cache takes
place. The remainder of the phases run from the cache. All IO is
direct IO, so there is no kernel caching at all. The filesystem
image has a lot of metadata in it - it has 336 AGs and the buffer
cache grows to about 6GB in size during phase 3.
The difference in performance is in the system CPU time, and it
results directly in lower IO dispatch rates - about 2,000 IOPS
instead of ~12,000.
This is what the kernel profile looks like on the strided run:
-  83.06%  [kernel]  [k] osq_lock
   - osq_lock
      - 100.00% rwsem_down_write_failed
         - call_rwsem_down_write_failed
            - 99.55% sys_mprotect
                 tracesys
                 __GI___mprotect
-  12.02%  [kernel]  [k] rwsem_down_write_failed
     rwsem_down_write_failed
     call_rwsem_down_write_failed
+   1.09%  [kernel]  [k] _raw_spin_unlock_irqrestore
+   0.92%  [kernel]  [k] _raw_spin_unlock_irq
+   0.68%  [kernel]  [k] __do_softirq
+   0.33%  [kernel]  [k] default_send_IPI_mask_sequence_phys
+   0.10%  [kernel]  [k] __do_page_fault
Basically, all the kernel time is spent processing lock contention
rather than doing real work.
I haven't tested back on 3.15 yet, but historically the strided AG
repair for such filesystems (which I test all the time on 100+500TB
filesystem images) is about 20-25% faster on this storage subsystem.
Yes, the old code also burnt a lot of CPU due to lock contention,
but it didn't go 5x slower than having no threading at all.
So this looks like a significant performance regression due to commit:

4fc828e ("locking/rwsem: Support optimistic spinning")

which changed the rwsem behaviour in 3.16-rc1.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 21+ messages
2014-07-03 2:32 Dave Chinner [this message]
2014-07-03 3:31 ` [regression, 3.16-rc] rwsem: optimistic spinning causing performance degradation Davidlohr Bueso
2014-07-03 4:59 ` Dave Chinner
2014-07-03 5:39 ` Dave Chinner
2014-07-03 7:38 ` Peter Zijlstra
2014-07-03 7:56 ` Dave Chinner
[not found] <1404413420.8764.42.camel@j-VirtualBox>
[not found] ` <1404416236.3179.18.camel@buesod1.americas.hpqcorp.net>
2014-07-03 20:08 ` Davidlohr Bueso
2014-07-04 1:01 ` Dave Chinner
2014-07-04 1:46 ` Jason Low
2014-07-04 1:54 ` Jason Low
2014-07-04 6:13 ` Dave Chinner
2014-07-04 7:06 ` Jason Low
2014-07-04 8:21 ` Dave Chinner
2014-07-04 8:53 ` Davidlohr Bueso
2014-07-05 3:14 ` Jason Low
2014-07-04 7:52 ` Peter Zijlstra
2014-07-04 8:40 ` Davidlohr Bueso
2014-07-05 3:49 ` Jason Low
[not found] ` <CAAW_DMjgd5+EOvZX7_iZe-jHp=00Nf7MX3z6hBCRPgOfqnMtEA@mail.gmail.com>
2014-07-14 9:55 ` Peter Zijlstra
2014-07-14 17:10 ` Jason Low
2014-07-15 2:17 ` Dave Chinner