From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <251d4cd4-7a28-af7b-942e-4e9f762fc05f@arm.com>
Date: Thu, 5 May 2022 20:25:56 +0200
Subject: Re: [PATCH v8 1/7] sched/fair: Provide u64 read for 32-bits arch helper
From: Dietmar Eggemann
To: Vincent Donnefort, peterz@infradead.org, mingo@redhat.com,
 vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, morten.rasmussen@arm.com,
 chris.redpath@arm.com, qperret@google.com, tao.zhou@linux.dev
References: <20220429141148.181816-1-vincent.donnefort@arm.com>
 <20220429141148.181816-2-vincent.donnefort@arm.com>
In-Reply-To: <20220429141148.181816-2-vincent.donnefort@arm.com>
Content-Type: text/plain; charset=UTF-8
X-Mailing-List: linux-kernel@vger.kernel.org

On 29/04/2022 16:11, Vincent Donnefort wrote:
> Introducing macro helpers u64_u32_{store,load}() to factorize lockless
> accesses to u64 variables for 32-bits architectures.
>
> Users are for now cfs_rq.min_vruntime and sched_avg.last_update_time. To
> accommodate the latter, where the copy lies outside of the structure
> (cfs_rq.last_update_time_copy instead of sched_avg.last_update_time_copy),
> use the _copy() version of those helpers.
>
> Those new helpers encapsulate smp_rmb() and smp_wmb() synchronization and
> therefore, have a small penalty in set_task_rq_fair() and init_cfs_rq().

... but obviously only on 32bit machines. And for set_task_rq_fair() we
now do one smp_rmb() per cfs_rq (prev and next), like we do one smp_wmb()
per cfs_rq in update_cfs_rq_load_avg().

[...]

> @@ -3786,8 +3770,9 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
>  	decayed |= __update_load_avg_cfs_rq(now, cfs_rq);
>
>  #ifndef CONFIG_64BIT

Can we not get rid of this last CONFIG_64BIT here?

> -	smp_wmb();
> -	cfs_rq->load_last_update_time_copy = sa->last_update_time;
> +	u64_u32_store_copy(sa->last_update_time,
> +			   cfs_rq->last_update_time_copy,
> +			   sa->last_update_time);

(sa->last_update_time = sa->last_update_time); should dissolve on 64bit.

>  #endif

[...]

Reviewed-by: Dietmar Eggemann