From: Ionela Voinescu
To: Viresh Kumar
Cc: Rafael Wysocki, Sudeep Holla, Greg Kroah-Hartman,
	"Rafael J. Wysocki", Will Deacon, linux-pm@vger.kernel.org,
	Vincent Guittot, Qian Cai, "Paul E. McKenney",
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH V3 3/4] arch_topology: Avoid use-after-free for scale_freq_data
Date: Wed, 23 Jun 2021 14:50:29 +0100
Message-ID: <20210623135029.GC12411@arm.com>

Hey,

On Monday 21 Jun 2021 at 14:49:36 (+0530), Viresh Kumar wrote:
> Currently topology_scale_freq_tick() (which gets called from
> scheduler_tick()) may end up using a pointer to "struct
> scale_freq_data" that was previously cleared by
> topology_clear_scale_freq_source(), as there is no protection in
> place here. The users of topology_clear_scale_freq_source(), though,
> need a guarantee that the previously cleared scale_freq_data isn't
> used anymore, so they can free the related resources.
>
> Since topology_scale_freq_tick() is called from the scheduler tick, we
> don't want to add locking in there. Use the RCU update mechanism
> instead (which is already used by the scheduler's utilization update
> path) to guarantee race-free updates here.
>
> synchronize_rcu() makes sure that all RCU critical sections that
> started before it is called will finish before it returns. And so the
> callers of topology_clear_scale_freq_source() don't need to worry
> about their callback getting called anymore.
>
> Cc: Paul E. McKenney
> Fixes: 01e055c120a4 ("arch_topology: Allow multiple entities to provide sched_freq_tick() callback")
> Tested-by: Vincent Guittot
> Signed-off-by: Viresh Kumar
> ---
>  drivers/base/arch_topology.c | 27 +++++++++++++++++++++------
>  1 file changed, 21 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index c1179edc0f3b..921312a8d957 100644
> --- a/drivers/base/arch_topology.c
> +++ b/drivers/base/arch_topology.c
> @@ -18,10 +18,11 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>
> -static DEFINE_PER_CPU(struct scale_freq_data *, sft_data);
> +static DEFINE_PER_CPU(struct scale_freq_data __rcu *, sft_data);
>  static struct cpumask scale_freq_counters_mask;
>  static bool scale_freq_invariant;
>
> @@ -66,16 +67,20 @@ void topology_set_scale_freq_source(struct scale_freq_data *data,
>  	if (cpumask_empty(&scale_freq_counters_mask))
>  		scale_freq_invariant = topology_scale_freq_invariant();
>
> +	rcu_read_lock();
> +
>  	for_each_cpu(cpu, cpus) {
> -		sfd = per_cpu(sft_data, cpu);
> +		sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu));
>
>  		/* Use ARCH provided counters whenever possible */
>  		if (!sfd || sfd->source != SCALE_FREQ_SOURCE_ARCH) {
> -			per_cpu(sft_data, cpu) = data;
> +			rcu_assign_pointer(per_cpu(sft_data, cpu), data);
>  			cpumask_set_cpu(cpu, &scale_freq_counters_mask);
>  		}
>  	}
>
> +	rcu_read_unlock();
> +
>  	update_scale_freq_invariant(true);
>  }
>  EXPORT_SYMBOL_GPL(topology_set_scale_freq_source);
>
> @@ -86,22 +91,32 @@ void topology_clear_scale_freq_source(enum scale_freq_source source,
>  	struct scale_freq_data *sfd;
>  	int cpu;
>
> +	rcu_read_lock();
> +
>  	for_each_cpu(cpu, cpus) {
> -		sfd = per_cpu(sft_data, cpu);
> +		sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu));
>
>  		if (sfd && sfd->source == source) {
> -			per_cpu(sft_data, cpu) = NULL;
> +			rcu_assign_pointer(per_cpu(sft_data, cpu), NULL);
>  			cpumask_clear_cpu(cpu, &scale_freq_counters_mask);
>  		}
>  	}
>
> +	rcu_read_unlock();
> +
> +	/*
> +	 * Make sure all references to previous sft_data are dropped to avoid
> +	 * use-after-free races.
> +	 */
> +	synchronize_rcu();
> +
>  	update_scale_freq_invariant(false);
>  }
>  EXPORT_SYMBOL_GPL(topology_clear_scale_freq_source);
>
>  void topology_scale_freq_tick(void)
>  {
> -	struct scale_freq_data *sfd = *this_cpu_ptr(&sft_data);
> +	struct scale_freq_data *sfd = rcu_dereference_sched(*this_cpu_ptr(&sft_data));
>
>  	if (sfd)
>  		sfd->set_freq_scale();
> --
> 2.31.1.272.g89b43f80a514

Reviewed-by: Ionela Voinescu