From: Kevin Hilman
To: Lai Jiangshan
Cc: linux-kernel, Frederic Weisbecker, Christoph Lameter, Mike Galbraith, "Paul E. McKenney", Tejun Heo, Viresh Kumar
Subject: Re: [PATCH 3/4] workqueue: Create low-level unbound workqueues cpumask
Date: Fri, 13 Mar 2015 16:49:53 -0700
Message-ID: <7h4mpopie6.fsf@deeprootsystems.com>
In-Reply-To: <1426136412-7594-4-git-send-email-laijs@cn.fujitsu.com> (Lai Jiangshan's message of "Thu, 12 Mar 2015 13:00:11 +0800")
References: <1426136412-7594-1-git-send-email-laijs@cn.fujitsu.com> <1426136412-7594-4-git-send-email-laijs@cn.fujitsu.com>

Lai Jiangshan writes:

> From: Frederic Weisbecker
>
> Create a cpumask that limits the affinity of all unbound workqueues.
> This cpumask is controlled through a file at the root of the workqueue
> sysfs directory.
>
> It works at a lower level than the per-WQ_SYSFS workqueue cpumask files,
> such that the effective cpumask applied to a given unbound workqueue is
> the intersection of /sys/devices/virtual/workqueue/$WORKQUEUE/cpumask and
> the new /sys/devices/virtual/workqueue/cpumask_unbounds file.
>
> This patch implements the basic infrastructure and the read interface.
> cpumask_unbounds is initially set to cpu_possible_mask.
>
> Cc: Christoph Lameter
> Cc: Kevin Hilman
> Cc: Lai Jiangshan
> Cc: Mike Galbraith
> Cc: Paul E. McKenney
> Cc: Tejun Heo
> Cc: Viresh Kumar
> Signed-off-by: Frederic Weisbecker
> Signed-off-by: Lai Jiangshan

[...]

> @@ -5094,6 +5116,9 @@ static int __init init_workqueues(void)
>
>  	WARN_ON(__alignof__(struct pool_workqueue) < __alignof__(long long));
>
> +	BUG_ON(!alloc_cpumask_var(&wq_unbound_cpumask, GFP_KERNEL));
> +	cpumask_copy(wq_unbound_cpumask, cpu_possible_mask);
> +

As I mentioned in an earlier discussion[1], I still think this could
default to the housekeeping CPUs in the NO_HZ_FULL case:

#ifdef CONFIG_NO_HZ_FULL
	cpumask_complement(wq_unbound_cpumask, tick_nohz_full_mask);
#else
	cpumask_copy(wq_unbound_cpumask, cpu_possible_mask);
#endif

But that could also be left to a future optimization.

Kevin

[1] https://lkml.org/lkml/2014/2/14/666