Date: Mon, 20 Mar 2023 13:24:20 +0100
From: Christoph Hellwig
To: Qu Wenruo
Cc: Christoph Hellwig, Chris Mason, Josef Bacik, David Sterba, Johannes Thumshirn, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH 01/10] btrfs: use a plain workqueue for ordered_extent processing
Message-ID: <20230320122420.GA9008@lst.de>
In-Reply-To: <65e3dc23-6e86-dc4d-0a1b-2ec69060dd44@gmx.com>
References: <20230314165910.373347-1-hch@lst.de> <20230314165910.373347-2-hch@lst.de> <65e3dc23-6e86-dc4d-0a1b-2ec69060dd44@gmx.com>
List-ID: linux-btrfs@vger.kernel.org

On Mon, Mar 20, 2023 at 07:35:45PM +0800, Qu Wenruo wrote:
> In fact, I believe we not only need to add workqueue_set_max_active() for
> the thread_pool= mount option, but also add a test case for thread_pool=1
> to shake out all the possible hidden bugs...
>
> Mind me to send a patch adding the max_active setting for all plain
> workqueues?
I don't think blindly doing that is a good idea.  As explained in my
reply to Dave, going all the way back to 2014, all workqueues that had a
less-than-default threshold never used it to start with.

I'm also really curious (and I might have to do some digging) what the
intended use case is, and whether it actually works as-is.  I know one
of your mails mentioned higher concurrency for some HDD workloads; do
you still remember what those workloads were?  Because I'm pretty sure
it won't be needed for all workqueues, and the fact that btrfs is the
only caller of workqueue_set_max_active() in the entire kernel makes me
very sceptical that we need it everywhere.

So I'd be much happier to first figure out where we actually needed it.
But even if we can't, and we simply want to restore the historic
behavior from some point in the past, we'd still only need to apply it
selectively.
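For reference, a minimal sketch of what wiring thread_pool= up to a
plain workqueue could look like (not the actual patch; the ordered_wq
field and helper names here are hypothetical, though fs_info->thread_pool_size,
alloc_workqueue(), and workqueue_set_max_active() are the real interfaces):

```c
/*
 * Hypothetical sketch, not the actual btrfs change.  Assumes
 * fs_info->ordered_wq is a plain workqueue_struct pointer and that
 * fs_info->thread_pool_size carries the thread_pool= mount option.
 */
#include <linux/workqueue.h>

static int btrfs_init_ordered_wq(struct btrfs_fs_info *fs_info)
{
	/* Create the plain workqueue with the requested concurrency. */
	fs_info->ordered_wq = alloc_workqueue("btrfs-ordered", WQ_UNBOUND,
					      fs_info->thread_pool_size);
	if (!fs_info->ordered_wq)
		return -ENOMEM;
	return 0;
}

/* On remount with a new thread_pool= value, propagate the limit. */
static void btrfs_resize_ordered_wq(struct btrfs_fs_info *fs_info,
				    unsigned int new_size)
{
	workqueue_set_max_active(fs_info->ordered_wq, new_size);
}
```

The open question in the thread is exactly whether such a limit is
wanted on every plain workqueue, or only on the few where a workload
demonstrably benefits.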