From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 29 May 2018 08:27:03 +0200
From: Juri Lelli
To: Waiman Long
Cc: Tejun Heo, Li Zefan, Johannes Weiner, Peter Zijlstra, Ingo Molnar,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-doc@vger.kernel.org, kernel-team@fb.com, pjt@google.com,
	luto@amacapital.net, Mike Galbraith, torvalds@linux-foundation.org,
	Roman Gushchin
Subject: Re: [PATCH v8 4/6] cpuset: Make generate_sched_domains() recognize isolated_cpus
Message-ID: <20180529062703.GA8985@localhost.localdomain>
References: <1526590545-3350-1-git-send-email-longman@redhat.com>
	<1526590545-3350-5-git-send-email-longman@redhat.com>
	<20180524102837.GA3948@localhost.localdomain>
	<45d70c88-e9f5-716a-ee9a-33dc111159cc@redhat.com>
	<8e610b98-970c-a309-5821-fc8e6aca892f@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <8e610b98-970c-a309-5821-fc8e6aca892f@redhat.com>
User-Agent: Mutt/1.9.2 (2017-12-15)
X-Mailing-List: linux-kernel@vger.kernel.org

On 28/05/18 21:24, Waiman Long wrote:
> On 05/28/2018 09:12 PM, Waiman Long wrote:
> > On 05/24/2018 06:28 AM, Juri Lelli wrote:
> >> On 17/05/18 16:55, Waiman Long wrote:
> >>
> >> [...]
> >>
> >>> @@ -849,7 +860,12 @@ static void rebuild_sched_domains_locked(void)
> >>> 	 * passing doms with offlined cpu to partition_sched_domains().
> >>> 	 * Anyways, hotplug work item will rebuild sched domains.
> >>> 	 */
> >>> -	if (!cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
> >>> +	if (!top_cpuset.isolation_count &&
> >>> +	    !cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
> >>> +		goto out;
> >>> +
> >>> +	if (top_cpuset.isolation_count &&
> >>> +	    !cpumask_subset(top_cpuset.effective_cpus, cpu_active_mask))
> >>> 		goto out;
> >> Do we cover the case in which hotplug removed one of the isolated cpus
> >> from cpu_active_mask?
> > Yes, you are right. That is the remnant of my original patch that allows
> > only one isolated_cpus at root. Thanks for spotting that.
> 
> I am sorry, I would like to take back my previous comment. The code
> above looks for an inconsistency in the state of the effective_cpus mask
> to find out whether it is racing with a hotplug event. If it is, we can
> skip the domain generation, as the hotplug event will do that too. The
> checks are still valid with the current patchset, so I don't think we
> need to make any change here.

Yes, these checks are valid, but don't we also need to check for hotplug
races w.r.t. the isolated CPUs (of some other sub-domain)?