Date: Fri, 29 Jan 2016 13:28:56 -0500
From: Tejun Heo
To: Peter Zijlstra
Cc: Thierry Reding, Ulrich Obergfell, Ingo Molnar, Andrew Morton,
    linux-kernel@vger.kernel.org, kernel-team@fb.com, Jon Hunter,
    linux-tegra@vger.kernel.org, rmk+kernel@arm.linux.org.uk,
    Johannes Weiner, linux-mm@kvack.org
Subject: Re: [PATCH] workqueue: warn if memory reclaim tries to flush !WQ_MEM_RECLAIM workqueue
Message-ID: <20160129182856.GP3628@mtj.duckdns.org>
References: <20151203093350.GP17308@twins.programming.kicks-ass.net>
 <20151203100018.GO11639@twins.programming.kicks-ass.net>
 <20151203144811.GA27463@mtj.duckdns.org>
 <20151203150442.GR17308@twins.programming.kicks-ass.net>
 <20151203150604.GC27463@mtj.duckdns.org>
 <20151203192616.GJ27463@mtj.duckdns.org>
 <20160126173843.GA11115@ulmo.nvidia.com>
 <20160128101210.GC6357@twins.programming.kicks-ass.net>
 <20160129110941.GK32380@htj.duckdns.org>
 <20160129151739.GA1087@worktop>
In-Reply-To: <20160129151739.GA1087@worktop>

Hey, Peter.

On Fri, Jan 29, 2016 at 04:17:39PM +0100, Peter Zijlstra wrote:
> On Fri, Jan 29, 2016 at 06:09:41AM -0500, Tejun Heo wrote:
> > I posted a patch to disable flush dependency checks on those
> > workqueues, and there's an Outreachy project to weed out the users
> > of the old interface, so hopefully this won't be an issue soon.
>
> Will that same project review all workqueue users for the strict
> per-cpu stuff, so we can finally kill that weird stuff you do on
> hotplug?

Unfortunately not.  We do want to distinguish work items which are
cpu-affine for correctness from those which are affine only as an
optimization; however, making that distinction is unlikely to make the
dynamic worker affinity binding go away.  We can't forcefully shut down
workers which are executing work items that are affine only as an
optimization when the CPU goes down.

Thanks.

-- 
tejun
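
[Editorial note: as background for the patch named in the Subject line, here is
a minimal sketch of the usage pattern the new check enforces. The driver, its
names, and the reclaim hook are hypothetical illustrations, not code from the
thread: a work item that may be flushed from a memory-reclaim path has to be
queued on a WQ_MEM_RECLAIM workqueue, otherwise the added check warns.]

    #include <linux/workqueue.h>
    #include <linux/errno.h>

    /* Hypothetical driver state, for illustration only. */
    static struct workqueue_struct *mydrv_wq;
    static struct work_struct mydrv_work;

    static void mydrv_work_fn(struct work_struct *work)
    {
            /* work that the reclaim path may need to wait on */
    }

    static int mydrv_init(void)
    {
            /*
             * WQ_MEM_RECLAIM gives the workqueue a rescuer thread, so queued
             * work can make forward progress even when worker creation is
             * stalled by memory pressure.  Without this flag, the flush below
             * would trip the warning added by the patch.
             */
            mydrv_wq = alloc_workqueue("mydrv", WQ_MEM_RECLAIM, 0);
            if (!mydrv_wq)
                    return -ENOMEM;

            INIT_WORK(&mydrv_work, mydrv_work_fn);
            return 0;
    }

    /* Normal path: queue the work on the reclaim-safe workqueue. */
    static void mydrv_kick(void)
    {
            queue_work(mydrv_wq, &mydrv_work);
    }

    /* Called from a memory-reclaim path, e.g. a shrinker callback. */
    static void mydrv_reclaim_flush(void)
    {
            flush_work(&mydrv_work);
    }

[Flushing from reclaim a work item queued on a !WQ_MEM_RECLAIM workqueue is
exactly the dependency the patch now flags.]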