From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <1517207502.7290.53.camel@gmx.de>
Subject: Re: Reply: Re: [RFC PATCH V5 5/5] workqueue: introduce a way to set workqueue's scheduler
From: Mike Galbraith <efault@gmx.de>
To: wen.yang99@zte.com.cn
Cc: tj@kernel.org, zhong.weidong@zte.com.cn, jiang.biao2@zte.com.cn,
	tan.hu@zte.com.cn, jiangshanlai@gmail.com, xiaolong.ye@intel.com,
	linux-kernel@vger.kernel.org
Date: Mon, 29 Jan 2018 07:31:42 +0100
In-Reply-To: <201801291350022080675@zte.com.cn>
References: <1517030127-21391-1-git-send-email-wen.yang99@zte.com.cn>
	<1517045504.15811.18.camel@gmx.de>
	<201801291350022080675@zte.com.cn>
Content-Type: text/plain; charset="UTF-8"
Mime-Version: 1.0
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 2018-01-29 at 13:50 +0800, wen.yang99@zte.com.cn wrote:
> > What happens when a new kworker needs to be spawned?
>
> create_worker -> worker_attach_to_pool. In worker_attach_to_pool,
> we add this chunk:
>
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1699,6 +1699,7 @@ static void worker_attach_to_pool(struct worker *worker,
>          * online CPUs.  It'll be re-applied when any of the CPUs come up.
>          */
>         set_cpus_allowed_ptr(worker->task, pool->attrs->cpumask);
> +       sched_setattr(worker->task, &pool->attrs->sched_attr);
>
>         /*
>          * The pool->attach_mutex ensures %POOL_DISASSOCIATED remains
>
> pool->attach_mutex may guarantee it, and adding sched_setattr may
> apply the wq's sched_attr to the spawned kworker.

That doesn't help kthreadd get to a CPU in a box being saturated by RT.
As long as you are careful it's not a problem; I just mentioned it
because it's a hole.

	-Mike