Date: Tue, 29 May 2018 15:12:38 +0200
From: Juri Lelli
To: Waiman Long
Cc: Tejun Heo, Li Zefan, Johannes Weiner, Peter Zijlstra, Ingo Molnar,
 cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, kernel-team@fb.com, pjt@google.com,
 luto@amacapital.net, Mike Galbraith, torvalds@linux-foundation.org,
 Roman Gushchin
Subject: Re: [PATCH v8 4/6] cpuset: Make generate_sched_domains() recognize isolated_cpus
Message-ID: <20180529131238.GE8985@localhost.localdomain>
In-Reply-To: <8164a41b-3218-c618-64a6-52747344c4db@redhat.com>
References: <1526590545-3350-1-git-send-email-longman@redhat.com>
 <1526590545-3350-5-git-send-email-longman@redhat.com>
 <20180524102837.GA3948@localhost.localdomain>
 <45d70c88-e9f5-716a-ee9a-33dc111159cc@redhat.com>
 <8e610b98-970c-a309-5821-fc8e6aca892f@redhat.com>
 <20180529062703.GA8985@localhost.localdomain>
 <8164a41b-3218-c618-64a6-52747344c4db@redhat.com>

On 29/05/18 08:40, Waiman Long wrote:
> On 05/29/2018 02:27 AM, Juri Lelli wrote:
> > On 28/05/18 21:24, Waiman Long wrote:
> >> On 05/28/2018 09:12 PM, Waiman Long wrote:
> >>> On 05/24/2018 06:28 AM, Juri Lelli wrote:
> >>>> On 17/05/18 16:55, Waiman Long wrote:
> >>>>
> >>>> [...]
> >>>>
> >>>>> @@ -849,7 +860,12 @@ static void rebuild_sched_domains_locked(void)
> >>>>>  	 * passing doms with offlined cpu to partition_sched_domains().
> >>>>>  	 * Anyways, hotplug work item will rebuild sched domains.
> >>>>>  	 */
> >>>>> -	if (!cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
> >>>>> +	if (!top_cpuset.isolation_count &&
> >>>>> +	    !cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
> >>>>> +		goto out;
> >>>>> +
> >>>>> +	if (top_cpuset.isolation_count &&
> >>>>> +	    !cpumask_subset(top_cpuset.effective_cpus, cpu_active_mask))
> >>>>>  		goto out;
> >>>> Do we cover the case in which hotplug removed one of the isolated cpus
> >>>> from cpu_active_mask?
> >>> Yes, you are right. That is a remnant of my original patch, which allowed
> >>> only one isolated_cpus at the root. Thanks for spotting that.
> >> I am sorry, I would like to take back my previous comment. The code
> >> above looks for an inconsistency in the state of the effective_cpus mask
> >> to find out if it is racing with a hotplug event. If it is, we can skip
> >> the domain generation as the hotplug event will do that too. The checks
> >> are still valid with the current patchset, so I don't think we need to
> >> make any change here.
> > Yes, these checks are valid, but don't we also need to check for hotplug
> > races w.r.t. isolated CPUs (of some other sub domain)?
>
> It is not actually a race. Both the hotplug event and any changes to cpu
> lists or flags are serialized by cpuset_mutex. It is just that we may be
> doing the same work twice and so wasting cpu cycles. So we are doing a
> quick check to avoid this. The check isn't exhaustive and we can
> certainly miss some cases. Doing a more thorough check may take as much
> time as the sched domain generation itself, so on average you would
> actually waste more CPU cycles, since the chance of a hotplug event is
> very low.

Fair enough. Thanks,

- Juri
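
As a rough illustration of the quick check being discussed, here is a
minimal user-space sketch (not kernel code): cpumask_t is reduced to a
single unsigned long, and the names cpuset_stub, mask_equal, mask_subset
and hotplug_race_suspected are made up for the example rather than taken
from the patch.

/*
 * User-space sketch of the "quick check" in the quoted hunk.  Without
 * isolated CPUs the top cpuset must cover exactly the active CPUs; with
 * isolated CPUs it only has to be a subset of them.  Returning true
 * means "skip the rebuild, the hotplug work item will redo it anyway".
 */
#include <stdbool.h>
#include <stdio.h>

struct cpuset_stub {
	unsigned long effective_cpus;	/* bit i set => CPU i in the top cpuset */
	int isolation_count;		/* number of isolated CPUs carved out */
};

static bool mask_equal(unsigned long a, unsigned long b)
{
	return a == b;
}

static bool mask_subset(unsigned long a, unsigned long b)
{
	return (a & ~b) == 0;		/* every CPU in a is also in b */
}

static bool hotplug_race_suspected(const struct cpuset_stub *top,
				   unsigned long cpu_active_mask)
{
	if (!top->isolation_count &&
	    !mask_equal(top->effective_cpus, cpu_active_mask))
		return true;

	if (top->isolation_count &&
	    !mask_subset(top->effective_cpus, cpu_active_mask))
		return true;

	return false;
}

int main(void)
{
	unsigned long cpu_active_mask = 0xfUL;	/* CPUs 0-3 online */
	struct cpuset_stub top = {
		.effective_cpus = 0x3UL,	/* CPUs 2-3 isolated, 0-1 remain */
		.isolation_count = 2,
	};

	/* Consistent state: subset check passes, rebuild goes ahead. */
	printf("skip rebuild: %s\n",
	       hotplug_race_suspected(&top, cpu_active_mask) ? "yes" : "no");

	/* Pretend hotplug removed CPU 1 while effective_cpus still has it. */
	cpu_active_mask = 0xdUL;
	printf("skip rebuild: %s\n",
	       hotplug_race_suspected(&top, cpu_active_mask) ? "yes" : "no");

	return 0;
}

The subset test matters because, once isolated CPUs are carved out of the
root cpuset, effective_cpus is no longer expected to equal
cpu_active_mask, so an equality test would spuriously skip the rebuild;
a stale, offlined CPU left in effective_cpus is still caught by the
subset test.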