From: Ingo Molnar <mingo@kernel.org>
To: Jan Kara <jack@suse.cz>
Cc: Waiman Long <Waiman.Long@hpe.com>,
Alexander Viro <viro@zeniv.linux.org.uk>,
Jan Kara <jack@suse.com>, Jeff Layton <jlayton@poochiereds.net>,
"J. Bruce Fields" <bfields@fieldses.org>,
Tejun Heo <tj@kernel.org>,
Christoph Lameter <cl@linux-foundation.org>,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Andi Kleen <andi@firstfloor.org>,
Dave Chinner <dchinner@redhat.com>,
Scott J Norton <scott.norton@hp.com>,
Douglas Hatch <doug.hatch@hp.com>
Subject: Re: [PATCH v3 3/3] vfs: Use per-cpu list for superblock's inode list
Date: Thu, 25 Feb 2016 09:06:35 +0100
Message-ID: <20160225080635.GB10611@gmail.com>
In-Reply-To: <20160224085858.GE10096@quack.suse.cz>
* Jan Kara <jack@suse.cz> wrote:
> > > > This was tested with an exit microbenchmark that creates a large
> > > > number of threads, attaches many inodes to them and then exits. The
> > > > runtimes of that microbenchmark with 1000 threads before and after
> > > > the patch on a 4-socket Intel E7-4820 v3 system (40 cores, 80
> > > > threads) were as follows:
> > > >
> > > > Kernel            Elapsed Time    System Time
> > > > ------            ------------    -----------
> > > > Vanilla 4.5-rc4       65.29s         82m14s
> > > > Patched 4.5-rc4       22.81s         23m03s
> > > >
> > > > Before the patch, spinlock contention in the inode_sb_list_add()
> > > > function during the startup phase and in the inode_sb_list_del()
> > > > function during the exit phase accounted for about 79% and 93% of
> > > > total CPU time respectively (as measured by perf). After the patch,
> > > > the percpu_list_add() function consumed only about 0.04% of CPU time
> > > > in the startup phase, and the percpu_list_del() function about 0.4%
> > > > in the exit phase. There was still some spinlock contention, but it
> > > > happened elsewhere.
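
To make the numbers above concrete: the core idea is one lock and one
list head per CPU, so concurrent adds on different CPUs never contend.
A minimal userspace sketch follows; only the percpu_list_add() /
percpu_list_del() names come from the text above, the types and
internals are invented for illustration and are not the patch code:

#define _GNU_SOURCE		/* for sched_getcpu() */
#include <pthread.h>
#include <sched.h>
#include <stdlib.h>

struct pcpu_bucket {
	pthread_mutex_t lock;	/* stands in for the kernel spinlock */
	struct node *head;
};

struct node {
	struct node *next, **pprev;
	struct pcpu_bucket *bucket;	/* the bucket we were added to */
};

static struct pcpu_bucket *buckets;	/* one per CPU */

void pcpu_init(int ncpus)
{
	buckets = calloc(ncpus, sizeof(*buckets));
	for (int i = 0; i < ncpus; i++)
		pthread_mutex_init(&buckets[i].lock, NULL);
}

void percpu_list_add(struct node *n)
{
	/* pick the local CPU's bucket; a kernel version would use
	 * this_cpu_ptr() with preemption disabled.  Migrating after
	 * sched_getcpu() only costs locality, not correctness. */
	struct pcpu_bucket *b = &buckets[sched_getcpu()];

	pthread_mutex_lock(&b->lock);
	n->next = b->head;
	n->pprev = &b->head;
	if (b->head)
		b->head->pprev = &n->next;
	b->head = n;
	n->bucket = b;		/* remember which lock protects us */
	pthread_mutex_unlock(&b->lock);
}

void percpu_list_del(struct node *n)
{
	/* deletion may run on any CPU, so use the recorded bucket */
	struct pcpu_bucket *b = n->bucket;

	pthread_mutex_lock(&b->lock);
	*n->pprev = n->next;
	if (n->next)
		n->next->pprev = n->pprev;
	pthread_mutex_unlock(&b->lock);
}

The bucket back-pointer in each node is what lets the delete side find
the right lock; it is also the one-pointer growth of struct inode that
Jan notes below.
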
> > >
> > > While looking through this patch, I have noticed that the
> > > list_for_each_entry_safe() iterations in evict_inodes() and
> > > invalidate_inodes() are actually unnecessary. So if you first apply the
> > > attached patch, you don't have to implement safe iteration variants at all.
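
For context, the difference between the two iteration flavors, as a
kernel-context fragment (illustration only, not Jan's attached patch):

	struct inode *inode, *next;
	LIST_HEAD(dispose);

	/* the _safe variant caches the next node up front, so the body
	 * may unlink the current one, e.g. move it to a dispose list: */
	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list)
		list_move(&inode->i_sb_list, &dispose);

	/* the plain variant is valid only if the body never removes
	 * 'inode' from the list being walked: */
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list)
		; /* inspect only; inode->i_sb_list must stay linked */
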
> > >
> > > As a second comment, I'd note that this patch grows struct inode by one
> > > pointer. That is probably acceptable for large machines given the
> > > speedup, but it should be noted in the changelog. Furthermore, for UP or
> > > even small SMP systems this is IMHO undesired bloat, since the speedup
> > > won't be noticeable.
> > >
> > > So for these small systems it would be good if the per-cpu list magic
> > > just fell back to a singly linked list with a spinlock. Do you think
> > > that is reasonably doable?
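
One hypothetical shape for such a fallback (the CONFIG_PERCPU_LIST
option and every name here is invented for illustration, not taken from
the patch):

#ifdef CONFIG_PERCPU_LIST	/* hypothetical option name */
struct percpu_list_node {
	struct list_head list;
	spinlock_t *lock;	/* back-pointer to the per-cpu head's lock */
};
/* one percpu_list_head instance per CPU; add() uses the local one */
#else
/* UP / small SMP: a single global list and lock; the node loses the
 * back-pointer, so struct inode would not grow at all here */
struct percpu_list_node {
	struct list_head list;
};

struct percpu_list_head {
	struct list_head list;
	spinlock_t lock;
};

static inline void percpu_list_add(struct percpu_list_node *n,
				   struct percpu_list_head *h)
{
	spin_lock(&h->lock);
	list_add(&n->list, &h->list);
	spin_unlock(&h->lock);
}

static inline void percpu_list_del(struct percpu_list_node *n,
				   struct percpu_list_head *h)
{
	spin_lock(&h->lock);
	list_del_init(&n->list);
	spin_unlock(&h->lock);
}
#endif

The iteration side would likewise collapse to a plain
list_for_each_entry() under the single lock.
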
> >
> > Even many 'small' systems tend to be SMP these days.
>
> Yes, I know. But my tablet with 4 ARM cores is unlikely to benefit from this
> change either. [...]
I'm not sure about that at all: the above numbers show a 3x-4x speedup in
system time, which ought to be noticeable on smaller SMP systems as well.
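
The microbenchmark itself was not posted; a rough userspace
approximation of what the changelog describes might look like this (a
hypothetical reconstruction, not Waiman's actual test):

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NR_THREADS	 1000
#define FILES_PER_THREAD 100	/* needs `ulimit -n` above 100000 */

static void *worker(void *arg)
{
	long id = (long)arg;
	char path[64];

	/* startup phase: every open file pins an inode on the
	 * superblock's s_inodes list (inode_sb_list_add()) */
	for (int i = 0; i < FILES_PER_THREAD; i++) {
		snprintf(path, sizeof(path), "/tmp/pcl-%ld-%d", id, i);
		if (open(path, O_CREAT | O_RDWR, 0600) < 0)
			perror("open");
		unlink(path);	/* inode stays live while the fd is open */
	}
	pause();		/* hold everything until exit() */
	return NULL;
}

int main(void)
{
	pthread_t tid;

	for (long i = 0; i < NR_THREADS; i++)
		pthread_create(&tid, NULL, worker, (void *)i);
	sleep(5);	/* crude barrier: let the opens finish */

	/* exit phase: tearing down ~100k inodes at once hammers
	 * inode_sb_list_del(); time this with time(1) or perf */
	exit(0);
}
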
Waiman, could you please post the microbenchmark?
Thanks,
Ingo
Thread overview: 15+ messages
2016-02-23 19:04 [PATCH v3 0/3] vfs: Use per-cpu list for SB's s_inodes list Waiman Long
2016-02-23 19:04 ` [PATCH v3 1/3] lib/percpu-list: Per-cpu list with associated per-cpu locks Waiman Long
2016-02-24 2:00 ` Boqun Feng
2016-02-24 4:01 ` Waiman Long
2016-02-24 7:56 ` Jan Kara
2016-02-24 19:51 ` Waiman Long
2016-02-23 19:04 ` [PATCH v3 2/3] fsnotify: Simplify inode iteration on umount Waiman Long
2016-02-23 19:04 ` [PATCH v3 3/3] vfs: Use per-cpu list for superblock's inode list Waiman Long
2016-02-24 8:28 ` Jan Kara
2016-02-24 8:36 ` Ingo Molnar
2016-02-24 8:58 ` Jan Kara
2016-02-25 8:06 ` Ingo Molnar [this message]
2016-02-25 14:43 ` Waiman Long
2016-02-24 20:23 ` Waiman Long
2016-02-25 14:50 ` Waiman Long