From: Waiman Long <waiman.long@hpe.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
linux-kernel@vger.kernel.org,
Scott J Norton <scott.norton@hpe.com>,
Douglas Hatch <doug.hatch@hpe.com>, Paul Turner <pjt@google.com>,
Ben Segall <bsegall@google.com>,
Morten Rasmussen <morten.rasmussen@arm.com>,
Yuyang Du <yuyang.du@intel.com>
Subject: Re: [RFC PATCH 3/3] sched/fair: Use different cachelines for readers and writers of load_avg
Date: Wed, 02 Dec 2015 13:44:50 -0500
Message-ID: <565F3C22.8060405@hpe.com>
In-Reply-To: <20151201084748.GJ3816@twins.programming.kicks-ass.net>
On 12/01/2015 03:47 AM, Peter Zijlstra wrote:
> On Mon, Nov 30, 2015 at 11:00:35PM -0500, Waiman Long wrote:
>
>> I think the current kernel uses power-of-2 kmem caches to satisfy kmalloc()
>> requests, except when the size is less than or equal to 192, where some
>> non-power-of-2 kmem caches are available. Given that the task_group
>> structure is large enough with FAIR_GROUP_SCHED enabled, we shouldn't hit
>> the case where the allocated buffer is not cacheline aligned.
> Using out-of-object storage is allowed (none of the existing sl*b
> allocators do so iirc).
>
> That is, it's perfectly valid for a sl*b allocator for 64 byte objects to
> allocate 72 bytes for each object and use the 'spare' 8 bytes for object
> tracking or whatnot.
>
> That would respect the minimum alignment guarantee of 8 bytes but not
> provide the 'expected' object size alignment you're assuming.
>
> Also, we have the proper interfaces to request the explicit alignment
> for a reason. So if you need the alignment for correctness, use those.
Thanks for the tip. I have just sent out an updated patch set which
creates a cacheline-aligned kmem cache for the task_group structure. That
should work under all kernel config settings.
Cheers,
Longman
Thread overview: 18+ messages
2015-11-25 19:09 [PATCH 0/3] sched/fair: Reduce contention on tg's load_avg Waiman Long
2015-11-25 19:09 ` [PATCH 1/3] sched/fair: Avoid redundant idle_cpu() call in update_sg_lb_stats() Waiman Long
2015-12-04 11:57 ` [tip:sched/core] " tip-bot for Waiman Long
2015-11-25 19:09 ` [PATCH 2/3] sched/fair: Move hot load_avg into its own cacheline Waiman Long
2015-11-30 10:23 ` Peter Zijlstra
2015-11-25 19:09 ` [RFC PATCH 3/3] sched/fair: Use different cachelines for readers and writers of load_avg Waiman Long
2015-11-30 10:22 ` Peter Zijlstra
2015-11-30 19:13 ` Waiman Long
2015-11-30 22:09 ` Peter Zijlstra
2015-12-01 3:55 ` Waiman Long
2015-12-01 8:49 ` Peter Zijlstra
2015-12-01 10:44 ` Mike Galbraith
2015-12-02 18:48 ` Waiman Long
2015-11-30 22:29 ` Peter Zijlstra
2015-12-01 4:00 ` Waiman Long
2015-12-01 8:47 ` Peter Zijlstra
2015-12-02 18:44 ` Waiman Long [this message]
2015-11-30 22:32 ` Peter Zijlstra