From: Thomas Gleixner
Subject: Re: [patch V2 08/10] timer: Implement the hierarchical pull model
Date: Wed, 19 Apr 2017 10:31:08 +0200 (CEST)
References: <20170418111102.490432548@linutronix.de>
 <20170418111401.016420305@linutronix.de>
 <20170419081116.GA3029@worktop.programming.kicks-ass.net>
In-Reply-To: <20170419081116.GA3029@worktop.programming.kicks-ass.net>
To: Peter Zijlstra
Cc: LKML, John Stultz, Eric Dumazet, Anna-Maria Gleixner,
 "Rafael J. Wysocki", linux-pm@vger.kernel.org, Arjan van de Ven,
 "Paul E. McKenney", Frederic Weisbecker, Rik van Riel

On Wed, 19 Apr 2017, Peter Zijlstra wrote:
> On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> > +static struct tmigr_group *tmigr_get_group(unsigned int node, unsigned int lvl)
> > +{
> > +	struct tmigr_group *group;
> > +
> > +	/* Try to attach to an existing group first */
> > +	list_for_each_entry(group, &tmigr_level_list[lvl], list) {
> > +		/*
> > +		 * If @lvl is below the cross numa node level, check
> > +		 * whether this group belongs to the same numa node.
> > +		 */
> > +		if (lvl < tmigr_crossnode_level && group->numa_node != node)
> > +			continue;
> > +		/* If the group has capacity, use it */
> > +		if (group->num_childs < tmigr_childs_per_group) {
> > +			group->num_childs++;
> > +			return group;
> > +		}
>
> This would result in SMT siblings not sharing groups on regular Intel
> systems, right? Since they get enumerated last.

Indeed. Will fix.
> > +	}
> > +	/* Allocate and set up a new group */
> > +	group = kzalloc_node(sizeof(*group), GFP_KERNEL, node);
> > +	if (!group)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	if (!zalloc_cpumask_var_node(&group->cpus, GFP_KERNEL, node)) {
> > +		kfree(group);
> > +		return ERR_PTR(-ENOMEM);
> > +	}
>
> So if you place that cpumask last, you can do:
>
>	group = kzalloc_node(sizeof(*group) + cpumask_size(),
>			     GFP_KERNEL, node);

Hrm, that would allocate extra space for CONFIG_CPUMASK_OFFSTACK=n. I'll
have a look.

Thanks,

	tglx