From mboxrd@z Thu Jan  1 00:00:00 1970
From: Peter Zijlstra
Subject: Re: [PATCH v5 1/4] sched/fair: add util_est on top of PELT
Date: Tue, 6 Mar 2018 20:02:41 +0100
Message-ID: <20180306190241.GH25201@hirez.programming.kicks-ass.net>
References: <20180222170153.673-1-patrick.bellasi@arm.com> <20180222170153.673-2-patrick.bellasi@arm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Return-path:
Content-Disposition: inline
In-Reply-To: <20180222170153.673-2-patrick.bellasi@arm.com>
Sender: linux-kernel-owner@vger.kernel.org
To: Patrick Bellasi
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Ingo Molnar, "Rafael J . Wysocki", Viresh Kumar, Vincent Guittot, Paul Turner, Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle
List-Id: linux-pm@vger.kernel.org

On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote:

> +struct util_est {
> +	unsigned int			enqueued;
> +	unsigned int			ewma;
> +#define UTIL_EST_WEIGHT_SHIFT		2
> +};

> +	ue = READ_ONCE(p->se.avg.util_est);

> +	WRITE_ONCE(p->se.avg.util_est, ue);

That is actually quite dodgy... it relies on the fact that we have the
8 byte case in __write_once_size() and __read_once_size()
unconditionally. It then further relies on the compiler DTRT for 32bit
platforms, which is generating two 32bit loads/stores.

The advantage is of course that it will use single u64 loads/stores
where available.