Subject: Re: [PATCH v6 2/2] cpuset: Add cpuset.sched_load_balance to v2
To: Juri Lelli
Cc: Tejun Heo, Li Zefan, Johannes Weiner, Peter Zijlstra, Ingo Molnar,
    cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-doc@vger.kernel.org, kernel-team@fb.com, pjt@google.com,
    luto@amacapital.net, efault@gmx.de, torvalds@linux-foundation.org,
    Roman Gushchin
References: <1521649309-26690-1-git-send-email-longman@redhat.com>
 <1521649309-26690-3-git-send-email-longman@redhat.com>
 <20180322084120.GE7231@localhost.localdomain>
From: Waiman Long <longman@redhat.com>
Organization: Red Hat
Date: Thu, 22 Mar 2018 17:50:48 -0400
In-Reply-To: <20180322084120.GE7231@localhost.localdomain>

On 03/22/2018 04:41 AM, Juri Lelli wrote:
> Hi Waiman,
>
> On 21/03/18 12:21, Waiman Long wrote:
>> The sched_load_balance flag is needed to enable CPU isolation similar
>> to what can be done with the "isolcpus" kernel boot parameter.
>>
>> The sched_load_balance flag implies an implicit !cpu_exclusive, as
>> it doesn't make sense to have an isolated CPU being load-balanced in
>> another cpuset.
>>
>> For v2, this flag is hierarchical and is inherited by child cpusets.
>> It is not allowed to have this flag turned off in a parent cpuset but
>> on in a child cpuset.
>>
>> This flag is set by the parent and is not delegatable.
>>
>> Signed-off-by: Waiman Long <longman@redhat.com>
>> ---
>>  Documentation/cgroup-v2.txt | 22 ++++++++++++++++++
>>  kernel/cgroup/cpuset.c      | 56 +++++++++++++++++++++++++++++++++++++++------
>>  2 files changed, 71 insertions(+), 7 deletions(-)
>>
>> diff --git a/Documentation/cgroup-v2.txt b/Documentation/cgroup-v2.txt
>> index ed8ec66..c970bd7 100644
>> --- a/Documentation/cgroup-v2.txt
>> +++ b/Documentation/cgroup-v2.txt
>> @@ -1514,6 +1514,28 @@ Cpuset Interface Files
>>  	it is a subset of "cpuset.mems".  Its value will be affected
>>  	by memory nodes hotplug events.
>>
>> +  cpuset.sched_load_balance
>> +	A read-write single value file which exists on non-root cgroups.
>> +	The default is "1" (on), and the other possible value is "0"
>> +	(off).
>> +
>> +	When it is on, tasks within this cpuset will be load-balanced
>> +	by the kernel scheduler.  Tasks will periodically be moved from
>> +	CPUs with high load to less loaded CPUs within the same cpuset.
>> +
>> +	When it is off, there will be no load balancing among the CPUs
>> +	in this cgroup.  Tasks will stay on the CPUs they are running
>> +	on and will not be moved to other CPUs.
>> +
>> +	This flag is hierarchical and is inherited by child cpusets.  It
>> +	can be turned off only when the CPUs in this cpuset aren't
>> +	listed in the cpuset.cpus of other sibling cgroups, and all
>> +	the child cpusets, if present, have this flag turned off.
>> +
>> +	Once it is off, it cannot be turned back on as long as the
>> +	parent cgroup still has this flag in the off state.
>> +
> I'm afraid that this will not work for SCHED_DEADLINE (at least for how
> it is implemented today). As you can see in Documentation [1], the only
> way a user has to perform partitioned/clustered scheduling is to create
> subsets of exclusive cpusets and then assign deadline tasks to them. The
> other thing to take into account here is that a root_domain is created
> for each exclusive set, and we use such a root_domain to keep information
> about admitted bandwidth and to speed up load-balancing decisions (there
> is a max-heap tracking the deadlines of active tasks on each root_domain).
> Now, AFAIR distinct root_domain(s) are created when the parent group has
> sched_load_balance disabled and cpu_exclusive set (in cgroup v1, that
> is). So, what we normally do is create, say, cpu_exclusive groups for
> the different clusters and then disable sched_load_balance at root level
> (so that each cluster gets its own root_domain).
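For reference, the v1 workflow described above amounts to something like
the following sketch. It assumes a cgroup v1 cpuset hierarchy mounted at
/sys/fs/cgroup/cpuset; the group names "cluster0"/"cluster1", the CPU and
memory-node ranges, and $TASK_PID are all illustrative, not part of the
patch under discussion:

```shell
# Sketch of partitioned/clustered scheduling setup with cgroup v1 cpusets.
# Must be run as root on a kernel with CONFIG_CPUSETS and a v1 cpuset mount.
cd /sys/fs/cgroup/cpuset

# Turn off load balancing at the root so that each exclusive child
# cpuset below gets its own root_domain.
echo 0 > cpuset.sched_load_balance

# One exclusive cpuset per cluster; sched_load_balance stays at its
# default of 1 inside each child, so balancing still happens per cluster.
mkdir cluster0 cluster1

echo 0-3 > cluster0/cpuset.cpus
echo 0   > cluster0/cpuset.mems
echo 1   > cluster0/cpuset.cpu_exclusive

echo 4-7 > cluster1/cpuset.cpus
echo 0   > cluster1/cpuset.mems
echo 1   > cluster1/cpuset.cpu_exclusive

# A SCHED_DEADLINE task can then be admitted into one cluster, and its
# bandwidth is accounted against that cluster's root_domain.
echo "$TASK_PID" > cluster0/tasks
```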
> Also, sched_load_balance is enabled in the children groups (as load
> balancing inside the clusters is what we actually need :).

That looks like an undocumented side effect to me. I would rather see an
explicit control file that enables a root_domain and breaks it free from
"cpu_exclusive && !sched_load_balance", e.g. sched_root_domain(?).

> IIUC your proposal, this will not be permitted with cgroup v2 because
> sched_load_balance won't be present at root level and children groups
> won't be able to set sched_load_balance back to 1 if it was set to 0
> in some parent. Is that true?

Yes, that is the current plan.

> Look, the way things work today is most probably not perfect (just to
> name one thing, we need to disable load balancing for all classes at
> root level just because DEADLINE wants to set restricted affinities for
> its tasks :/), and we could probably think about how to change how this
> all works. So, let's first see if IIUC what you are proposing (and its
> implications). :)

Cgroup v2 is supposed to give us a fresh start to rethink what a saner
way of partitioning resources looks like, without worrying about
backward compatibility. So I think it is time to design a new way for
deadline tasks to work with cpuset v2.

Cheers,
Longman

--
To unsubscribe from this list: send the line "unsubscribe linux-doc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html