linux-fsdevel.vger.kernel.org archive mirror
From: Waiman Long <waiman.long@hpe.com>
To: Tejun Heo <tj@kernel.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>,
	Jan Kara <jack@suse.com>, Jeff Layton <jlayton@poochiereds.net>,
	"J. Bruce Fields" <bfields@fieldses.org>,
	Christoph Lameter <cl@linux-foundation.org>,
	<linux-fsdevel@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Andi Kleen <andi@firstfloor.org>,
	Dave Chinner <dchinner@redhat.com>,
	Boqun Feng <boqun.feng@gmail.com>,
	Scott J Norton <scott.norton@hpe.com>,
	Douglas Hatch <doug.hatch@hpe.com>
Subject: Re: [RFC PATCH v2 6/7] lib/persubnode: Introducing a simple per-subnode APIs
Date: Tue, 12 Jul 2016 14:51:31 -0400	[thread overview]
Message-ID: <57853C33.8000705@hpe.com> (raw)
In-Reply-To: <20160712142727.GA3190@htj.duckdns.org>

On 07/12/2016 10:27 AM, Tejun Heo wrote:
> Hello,
>
> On Mon, Jul 11, 2016 at 01:32:11PM -0400, Waiman Long wrote:
>> The percpu APIs are extensively used in the Linux kernel to reduce
>> cacheline contention and improve performance. For some use cases, the
>> percpu APIs may be too fine-grained for distributed resources, whereas
>> a per-node allocation may be too coarse, as some high-end systems can
>> have dozens of CPUs in a single NUMA node.
>>
>> This patch introduces simple per-subnode APIs where each of the
>> distributed resources is shared by only a handful of CPUs within
>> a NUMA node. The per-subnode APIs are built on top of the percpu APIs
>> and hence require the same amount of memory as if the percpu APIs
>> were used. However, they help to reduce the total number of separate
>> resources that need to be managed. As a result, they can speed up code
>> that needs to iterate over all the resources compared with using the
>> percpu APIs. Cacheline contention, however, will increase slightly as
>> each resource is shared by more than one CPU. As long as the number
>> of CPUs in each subnode is small, the performance impact won't be
>> significant.
>>
>> In this patch, at most 2 sibling groups can be put into a subnode. For
>> an x86-64 CPU, at most 4 CPUs will be in a subnode when HT is enabled
>> and 2 when it is not.
> I understand that there's a trade-off between local access and global
> traversing and you're trying to find a sweet spot between the two, but
> this seems pretty arbitrary.  What's the use case?  What are the
> numbers?  Why are global traversals often enough to matter so much?

The last 2 RFC patches were created in response to Andi's suggestion to 
use coarser granularity than per-cpu. In this particular use case, I 
don't think global list traversals are frequent enough to have any 
noticeable performance impact, so I don't have any benchmark numbers to 
support this change. However, that may not be true for other future use 
cases.

These 2 patches were created to gauge whether using a per-subnode API 
for this use case is a good idea or not. I am perfectly happy to keep it 
as per-cpu and scrap the last 2 RFC patches. My main goal is to make 
this patchset acceptable enough to move forward instead of leaving it in 
limbo.

Cheers,
Longman


Thread overview: 24+ messages
2016-07-11 17:32 [PATCH v2 0/7] vfs: Use dlock list for SB's s_inodes list Waiman Long
2016-07-11 17:32 ` [PATCH v2 1/7] lib/dlock-list: Distributed and lock-protected lists Waiman Long
2016-07-13 16:08   ` Tejun Heo
2016-07-14  2:54     ` Waiman Long
2016-07-14 11:50       ` Tejun Heo
2016-07-14 14:35         ` Jan Kara
2016-07-14 14:51           ` Tejun Heo
2016-07-14 16:16             ` Waiman Long
2016-07-14 17:13         ` Waiman Long
2016-07-14 17:41           ` Tejun Heo
2016-07-11 17:32 ` [PATCH v2 2/7] lib/dlock-list: Add __percpu modifier for parameters Waiman Long
2016-07-13 16:08   ` Tejun Heo
2016-07-14  2:54     ` Waiman Long
2016-07-11 17:32 ` [PATCH v2 3/7] fsnotify: Simplify inode iteration on umount Waiman Long
2016-07-11 17:32 ` [PATCH v2 4/7] vfs: Remove unnecessary list_for_each_entry_safe() variants Waiman Long
2016-07-11 17:32 ` [PATCH v2 5/7] vfs: Use dlock list for superblock's inode list Waiman Long
2016-07-11 17:32 ` [RFC PATCH v2 6/7] lib/persubnode: Introducing a simple per-subnode APIs Waiman Long
2016-07-12  3:14   ` Boqun Feng
2016-07-12 18:37     ` Waiman Long
2016-07-12 14:27   ` Tejun Heo
2016-07-12 18:51     ` Waiman Long [this message]
2016-07-12 18:57       ` Tejun Heo
2016-07-12 19:42         ` Waiman Long
2016-07-11 17:32 ` [RFC PATCH v2 7/7] lib/dlock-list: Use the per-subnode APIs for managing lists Waiman Long
