From: Waiman Long <waiman.long@hpe.com>
To: Ingo Molnar <mingo@kernel.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>,
	Jan Kara <jack@suse.com>, Jeff Layton <jlayton@poochiereds.net>,
	"J. Bruce Fields" <bfields@fieldses.org>,
	Tejun Heo <tj@kernel.org>,
	Christoph Lameter <cl@linux-foundation.org>,
	<linux-fsdevel@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Andi Kleen <andi@firstfloor.org>,
	Dave Chinner <dchinner@redhat.com>,
	Scott J Norton <scott.norton@hp.com>,
	Douglas Hatch <doug.hatch@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [RRC PATCH 2/2] vfs: Use per-cpu list for superblock's inode list
Date: Wed, 17 Feb 2016 10:40:27 -0500	[thread overview]
Message-ID: <56C4946B.10102@hpe.com> (raw)
In-Reply-To: <20160217071632.GA18403@gmail.com>

On 02/17/2016 02:16 AM, Ingo Molnar wrote:
> * Waiman Long <Waiman.Long@hpe.com> wrote:
>
>> When many threads are trying to add inodes to or delete inodes from
>> a superblock's s_inodes list, spinlock contention on the list can
>> become a performance bottleneck.
>>
>> This patch changes the s_inodes field to become a per-cpu list with
>> per-cpu spinlocks.
>>
>> An exit microbenchmark creates a large number of threads, attaches
>> many inodes to them and then exits. The runtimes of that
>> microbenchmark with 1000 threads before and after the patch on a
>> 4-socket Intel E7-4820 v3 system (40 cores, 80 threads) were as
>> follows:
>>
>>    Kernel            Elapsed Time    System Time
>>    ------            ------------    -----------
>>    Vanilla 4.5-rc4      65.29s         82m14s
>>    Patched 4.5-rc4      22.81s         23m03s
>>
>> Before the patch, spinlock contention in the inode_sb_list_add()
>> function during the startup phase and in the inode_sb_list_del()
>> function during the exit phase accounted for about 79% and 93% of
>> total CPU time respectively (as measured by perf). After the patch,
>> the percpu_list_add() function consumed only about 0.04% of CPU time
>> in the startup phase, and the percpu_list_del() function consumed
>> about 0.4% of CPU time in the exit phase. There was still some
>> spinlock contention, but it happened elsewhere.
> Pretty impressive IMHO!
>
> Just for the record, here's your former 'batched list' number inserted into the
> above table:
>
>     Kernel                       Elapsed Time    System Time
>     ------                       ------------    -----------
>     Vanilla      [v4.5-rc4]      65.29s          82m14s
>     batched list [v4.4]          45.69s          49m44s
>     percpu list  [v4.5-rc4]      22.81s          23m03s
>
> i.e. the proper per CPU data structure and the resulting improvement in cache
> locality gave another doubling in performance.
>
> Just out of curiosity, could you post the profile of the latest patches - is there
> any (bigger) SMP overhead left, or is the profile pretty flat now?
>
> Thanks,
>
> 	Ingo

Yes, there was still spinlock contention elsewhere in the exit path.
Now the bulk of the CPU time was spent in:

-   79.23%    79.23%         a.out  [kernel.kallsyms]    [k] native_queued_spin_lock_slowpath
    - native_queued_spin_lock_slowpath
       - 99.99% queued_spin_lock_slowpath
          - 100.00% _raw_spin_lock
             - 99.98% list_lru_del
                - d_lru_del
                   - 100.00% select_collect
                        detach_and_collect
                        d_walk
                        d_invalidate
                        proc_flush_task
                        release_task
                        do_exit
                        do_group_exit
                        get_signal
                        do_signal
                        exit_to_usermode_loop
                        syscall_return_slowpath
                        int_ret_from_sys_call

The locks being contended were the nlru->lock spinlocks. On the 4-node
system that I used, there are four of those.
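
For reference, list_lru keeps one sublist, protected by its own spinlock,
per NUMA node. A simplified sketch of that layout, omitting the memcg-aware
parts of the real <linux/list_lru.h> definitions, looks roughly like this:

struct list_lru_node {
	spinlock_t		lock;	/* the contended nlru->lock */
	struct list_head	list;	/* this node's portion of the LRU */
	long			nr_items;
} ____cacheline_aligned_in_smp;

struct list_lru {
	struct list_lru_node	*node;	/* array, one entry per NUMA node */
};

So on a 4-node machine every d_lru_del() from every exiting task funnels
into one of just four spinlocks, which is why list_lru_del() dominates the
profile above.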

Cheers,
Longman
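
For readers skimming the thread, the approach being discussed replaces the
single s_inodes list and its global lock with one sublist and one spinlock
per CPU: an inode is added to the current CPU's sublist and remembers which
lock protects it, so it can later be deleted from any CPU. A minimal,
hypothetical sketch of such a structure (the names below are illustrative
only, not the actual lib/percpu-list API from the patch) might look like:

#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

/* One sublist, with its own lock, per CPU. */
struct pcpu_list_head {
	struct list_head	list;
	spinlock_t		lock;
};

/* A node remembers which per-CPU lock protects it. */
struct pcpu_list_node {
	struct list_head	list;
	spinlock_t		*lock;
};

/* Allocate one sublist per possible CPU and initialize each lock. */
static struct pcpu_list_head __percpu *pcpu_list_alloc(void)
{
	struct pcpu_list_head __percpu *heads = alloc_percpu(struct pcpu_list_head);
	int cpu;

	if (!heads)
		return NULL;
	for_each_possible_cpu(cpu) {
		struct pcpu_list_head *h = per_cpu_ptr(heads, cpu);

		INIT_LIST_HEAD(&h->list);
		spin_lock_init(&h->lock);
	}
	return heads;
}

/* Add to the current CPU's sublist; adds on different CPUs do not contend. */
static void pcpu_list_add(struct pcpu_list_node *node,
			  struct pcpu_list_head __percpu *heads)
{
	struct pcpu_list_head *h = per_cpu_ptr(heads, get_cpu());

	spin_lock(&h->lock);
	node->lock = &h->lock;
	list_add(&node->list, &h->list);
	spin_unlock(&h->lock);
	put_cpu();
}

/* Delete from whichever sublist the node was added to. */
static void pcpu_list_del(struct pcpu_list_node *node)
{
	spinlock_t *lock = node->lock;

	spin_lock(lock);
	list_del_init(&node->list);
	node->lock = NULL;
	spin_unlock(lock);
}

Iterating over the whole set (e.g. for evict_inodes()) then walks each
per-CPU sublist in turn, taking one lock at a time, so a full traversal
remains possible while adds and deletes stay mostly uncontended.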


Thread overview: 27+ messages
2016-02-17  1:31 [RFC PATCH 0/2] vfs: Use per-cpu list for SB's s_inodes list Waiman Long
2016-02-17  1:31 ` [RFC PATCH 1/2] lib/percpu-list: Per-cpu list with associated per-cpu locks Waiman Long
2016-02-17  9:53   ` Dave Chinner
2016-02-17 11:00     ` Peter Zijlstra
2016-02-17 11:05       ` Peter Zijlstra
2016-02-17 16:16         ` Waiman Long
2016-02-17 16:22           ` Peter Zijlstra
2016-02-17 16:27           ` Christoph Lameter
2016-02-17 17:12             ` Waiman Long
2016-02-17 17:18               ` Peter Zijlstra
2016-02-17 17:41                 ` Waiman Long
2016-02-17 18:22                   ` Peter Zijlstra
2016-02-17 18:45                     ` Waiman Long
2016-02-17 19:39                       ` Peter Zijlstra
2016-02-17 11:10       ` Dave Chinner
2016-02-17 11:26         ` Peter Zijlstra
2016-02-17 11:36           ` Peter Zijlstra
2016-02-17 15:56     ` Waiman Long
2016-02-17 16:02       ` Peter Zijlstra
2016-02-17 15:13   ` Christoph Lameter
2016-02-17  1:31 ` [RRC PATCH 2/2] vfs: Use per-cpu list for superblock's inode list Waiman Long
2016-02-17  7:16   ` Ingo Molnar
2016-02-17 15:40     ` Waiman Long [this message]
2016-02-17 10:37   ` Dave Chinner
2016-02-17 16:08     ` Waiman Long
2016-02-18 23:58 ` [RFC PATCH 0/2] vfs: Use per-cpu list for SB's s_inodes list Dave Chinner
2016-02-19 21:04   ` Long, Wai Man
