From: Dave Hansen
To: "Luck, Tony"
Cc: Fenghua Yu, Thomas Gleixner, "H. Peter Anvin", Ingo Molnar,
 Peter Zijlstra, Tejun Heo, Borislav Petkov, Stephane Eranian,
 Marcelo Tosatti, David Carrillo-Cisneros, Shaohua Li, Ravi V Shankar,
 Vikas Shivappa, Sai Prakhya, linux-kernel, x86
Subject: Re: [PATCH v2 26/33] Task fork and exit for rdtgroup
Date: Wed, 14 Sep 2016 07:28:13 -0700
Message-ID: <57D95E7D.2090608@intel.com>
In-Reply-To: <20160913233502.GA4444@intel.com>

On 09/13/2016 04:35 PM, Luck, Tony wrote:
> On Tue, Sep 13, 2016 at 04:13:04PM -0700, Dave Hansen wrote:
>> Yikes, is this a new global lock and possible atomic_inc() on a shared
>> variable in the fork() path?  Has there been any performance or
>> scalability testing done on this code?
>>
>> That mutex could be a disaster for fork() once the filesystem is
>> mounted.  Even if it goes away, if you have a large number of processes
>> in an rdtgroup and they are forking a lot, you're bound to see the
>> rdtgrp->refcount get bounced around a lot.
> The mutex is (almost certainly) going away.

Oh, cool.  That's good to know.

> The atomic_inc() is likely staying (but only applies to tasks that
> are in resource groups other than the default one.  But on a system
> where we partition the cache between containers/VMs, that may
> essentially be all processes.

Yeah, that's what worries me.  We had/have quite a few regressions from
when something runs inside vs. outside of certain cgroups.  We
definitely don't want to be adding more of those.

> We only really use the refcount to decide whether the group
> can be removed ... since that is the rare operation, perhaps
> we could put all the work there and have it count them with:
>
>	n = 0;
>	rcu_read_lock();
>	for_each_process(p)
>		if (p->rdtgroup == this_rdtgroup)
>			n++;
>	rcu_read_unlock();
>	if (n != 0)
>		return -EBUSY;

Yeah, that seems sane.  I'm sure it can be optimized even more than
that, but that at least gets it out of the fast path.