From: Tejun Heo
To: Mike Galbraith
Cc: LKML
Date: Wed, 3 Feb 2016 13:55:23 -0500
Subject: Re: [PATCH wq/for-4.5-fixes] workqueue: handle NUMA_NO_NODE for unbound pool_workqueue lookup
Message-ID: <20160203185523.GL14091@mtj.duckdns.org>
In-Reply-To: <20160203185425.GK14091@mtj.duckdns.org>
References: <1454424264.11183.46.camel@gmail.com> <20160203185425.GK14091@mtj.duckdns.org>

On Wed, Feb 03, 2016 at 01:54:25PM -0500, Tejun Heo wrote:
> Fix it by mapping NUMA_NO_NODE to the default pool_workqueue from
> unbound_pwq_by_node().  This is a temporary workaround.  The long-term
> solution, which is in the works, is keeping the CPU -> NODE mapping
> stable across CPU off/online cycles.

Forgot to mention: can you please test this?  Once verified, I'll route
it through wq/for-4.5-fixes.

Thanks.

--
tejun