From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753117AbcHOWzO (ORCPT ); Mon, 15 Aug 2016 18:55:14 -0400
Received: from mail-qt0-f194.google.com ([209.85.216.194]:33009 "EHLO
	mail-qt0-f194.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1750796AbcHOWzM (ORCPT ); Mon, 15 Aug 2016 18:55:12 -0400
Date: Mon, 15 Aug 2016 18:55:10 -0400
From: Tejun Heo
To: Bhaktipriya Shridhar
Cc: Miguel Ojeda Sandonis, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] cfag12864b: Remove deprecated create_singlethread_workqueue
Message-ID: <20160815225510.GE3672@mtj.duckdns.org>
References: <20160813153807.GA3818@Karyakshetra>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20160813153807.GA3818@Karyakshetra>
User-Agent: Mutt/1.6.2 (2016-07-01)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, Aug 13, 2016 at 09:08:07PM +0530, Bhaktipriya Shridhar wrote:
> The workqueue has a single work item (&cfag12864b_work) and hence
> doesn't require ordering. It is also not used on a memory-reclaim path.
> Hence, the single-threaded workqueue has been replaced with system_wq.
>
> System workqueues have been able to handle high levels of concurrency
> for a long time now, so a single-threaded workqueue is not required
> just to gain concurrency. Unlike a dedicated per-cpu workqueue created
> with create_singlethread_workqueue(), system_wq allows multiple work
> items to overlap executions even on the same CPU; however, a per-cpu
> workqueue doesn't provide any CPU locality or global ordering guarantee
> unless the target CPU is explicitly specified, so the increase in local
> concurrency shouldn't make any difference.
>
> The work item is cancelled synchronously in cfag12864b_disable() to
> ensure that there are no pending tasks while the driver is being
> disconnected.
>
> Signed-off-by: Bhaktipriya Shridhar

Acked-by: Tejun Heo

Thanks.

-- 
tejun
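P.S. For readers following along, the conversion described above boils down to the following sketch. This is not the actual cfag12864b code; the identifiers (example_work, example_update, and friends) are hypothetical, but the workqueue calls (DECLARE_WORK(), schedule_work(), cancel_work_sync()) are the real kernel API the patch relies on:

```c
/* Illustrative kernel fragment; names other than the workqueue API
 * calls are made up, not taken from drivers/auxdisplay/cfag12864b.c. */
#include <linux/workqueue.h>

static void example_update(struct work_struct *work);
static DECLARE_WORK(example_work, example_update);

static void example_update(struct work_struct *work)
{
	/* refresh the device state; runs at most once at a time for
	 * this work item, even on system_wq */
}

static void example_schedule(void)
{
	/* Before the patch: queue_work(example_wq, &example_work) on a
	 * workqueue made with create_singlethread_workqueue().
	 * After: queue on the shared system workqueue instead. */
	schedule_work(&example_work);
}

static void example_disable(void)
{
	/* Wait for any queued or running instance to finish before the
	 * driver is disconnected, as the patch does in
	 * cfag12864b_disable(). */
	cancel_work_sync(&example_work);
}
```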