From: tip-bot for Peter Zijlstra <tipbot@zytor.com>
To: linux-tip-commits@vger.kernel.org
Cc: mgorman@techsingularity.net, acme@redhat.com,
torvalds@linux-foundation.org, tglx@linutronix.de,
vincent.guittot@linaro.org, hpa@zytor.com, peterz@infradead.org,
frederic@kernel.org, patrick.bellasi@arm.com, mingo@kernel.org,
tony.luck@intel.com, nmanthey@amazon.de,
linux-kernel@vger.kernel.org
Subject: [tip:sched/urgent] sched/core: Force proper alignment of 'struct util_est'
Date: Thu, 5 Apr 2018 02:18:57 -0700 [thread overview]
Message-ID: <tip-317d359df95dd0cb7653d09b7fc513770590cf85@git.kernel.org> (raw)
In-Reply-To: <20180405080521.GG4129@hirez.programming.kicks-ass.net>
Commit-ID: 317d359df95dd0cb7653d09b7fc513770590cf85
Gitweb: https://git.kernel.org/tip/317d359df95dd0cb7653d09b7fc513770590cf85
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Thu, 5 Apr 2018 10:05:21 +0200
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 5 Apr 2018 10:56:16 +0200
sched/core: Force proper alignment of 'struct util_est'
For some as yet not understood reason, Tony gets unaligned access
traps on IA64 because of:
struct util_est ue = READ_ONCE(p->se.avg.util_est);
and:
WRITE_ONCE(p->se.avg.util_est, ue);
introduced by commit:
d519329f72a6 ("sched/fair: Update util_est only on util_avg updates")
Normally those two fields should end up on an 8-byte aligned location,
but UP and RANDSTRUCT can mess that up so enforce the alignment
explicitly.
Also make the alignment on sched_avg unconditional, as it is really
about data locality, not false-sharing.
With or without this patch the layout for sched_avg on an
ia64-defconfig build looks like:
$ pahole -EC sched_avg ia64-defconfig/kernel/sched/core.o
die__process_function: tag not supported (INVALID)!
struct sched_avg {
	/* typedef u64 */ long long unsigned int last_update_time;  /*  0  8 */
	/* typedef u64 */ long long unsigned int load_sum;          /*  8  8 */
	/* typedef u64 */ long long unsigned int runnable_load_sum; /* 16  8 */
	/* typedef u32 */ unsigned int           util_sum;          /* 24  4 */
	/* typedef u32 */ unsigned int           period_contrib;    /* 28  4 */
	long unsigned int                        load_avg;          /* 32  8 */
	long unsigned int                        runnable_load_avg; /* 40  8 */
	long unsigned int                        util_avg;          /* 48  8 */
	struct util_est {
		unsigned int                     enqueued;          /* 56  4 */
		unsigned int                     ewma;              /* 60  4 */
	} util_est;                                                 /* 56  8 */
	/* --- cacheline 1 boundary (64 bytes) --- */

	/* size: 64, cachelines: 1, members: 9 */
};
Reported-and-Tested-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Norbert Manthey <nmanthey@amazon.de>
Cc: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony <tony.luck@intel.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Fixes: d519329f72a6 ("sched/fair: Update util_est only on util_avg updates")
Link: http://lkml.kernel.org/r/20180405080521.GG4129@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
include/linux/sched.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index f228c6033832..b3d697f3b573 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -300,7 +300,7 @@ struct util_est {
 	unsigned int			enqueued;
 	unsigned int			ewma;
 #define UTIL_EST_WEIGHT_SHIFT		2
-};
+} __attribute__((__aligned__(sizeof(u64))));

 /*
  * The load_avg/util_avg accumulates an infinite geometric series
@@ -364,7 +364,7 @@ struct sched_avg {
 	unsigned long			runnable_load_avg;
 	unsigned long			util_avg;
 	struct util_est			util_est;
-};
+} ____cacheline_aligned;

 struct sched_statistics {
 #ifdef CONFIG_SCHEDSTATS
@@ -435,7 +435,7 @@ struct sched_entity {
 	 * Put into separate cache line so it does not
 	 * collide with read-mostly values above.
 	 */
-	struct sched_avg		avg ____cacheline_aligned_in_smp;
+	struct sched_avg		avg;
 #endif
 };
Thread overview: 11+ messages
2018-04-02 23:24 v4.16+ seeing many unaligned access in dequeue_task_fair() on IA64 Luck, Tony
2018-04-02 23:39 ` Luck, Tony
2018-04-03 7:37 ` Peter Zijlstra
2018-04-03 18:58 ` Luck, Tony
2018-04-04 0:04 ` Luck, Tony
2018-04-04 7:25 ` Peter Zijlstra
2018-04-04 16:38 ` Luck, Tony
2018-04-04 16:53 ` Peter Zijlstra
2018-04-05 8:05 ` Peter Zijlstra
2018-04-05 8:56 ` Ingo Molnar
2018-04-05 9:18 ` tip-bot for Peter Zijlstra [this message]