From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 21 Mar 2023 13:48:44 +0100
From: Christoph Hellwig
To: Qu Wenruo
Cc: Christoph Hellwig, Chris Mason, Josef Bacik, David Sterba,
	Johannes Thumshirn, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH 01/10] btrfs: use a plain workqueue for ordered_extent processing
Message-ID: <20230321124844.GA10470@lst.de>
References: <20230314165910.373347-1-hch@lst.de>
	<20230314165910.373347-2-hch@lst.de>
	<65e3dc23-6e86-dc4d-0a1b-2ec69060dd44@gmx.com>
	<20230320122420.GA9008@lst.de>
	<675712c0-ac72-f923-247c-31f0b22a8657@gmx.com>
In-Reply-To: <675712c0-ac72-f923-247c-31f0b22a8657@gmx.com>
List-ID: linux-btrfs@vger.kernel.org

On Tue, Mar 21, 2023 at 07:19:30AM +0800, Qu Wenruo wrote:
>> I'm also really curious (and I might have to do some digging) what
>> the intended use case is, and if it actually works as-is.
>> I know
>> one of your workloads mentioned a higher concurrency for some HDD
>> workloads, do you still remember what the workloads are?
>
> In fact, we recently had a patchset trying to add a new mount option to
> pin workqueue loads to certain CPUs.
>
> The idea of that patchset is to limit the CPU usage for compression/csum
> generation.
>
> This can also apply to the scrub workload.

... and is totally unrelated to using either the core workqueue
max_active value or reimplementing that in btrfs.

> The other thing is, I also want to test if we could really hit a
> deadlock related to workqueues.
> Currently the thread_pool= mount option only changes the btrfs
> workqueues, not the new plain ones.

What kind of deadlock testing do you want, and why does it apply to the
btrfs workqueues but not the other workqueues in btrfs?  Note that you'd
also need to change the btrfs_workqueue code for thread_pool= to
actually apply literally everywhere.

Maybe we can start by going back to my request and actually come up with
a definition of what thread_pool= is supposed to affect, and how users
should pick values for it?  There is a definition in the btrfs(5) man
page, but that one has been wrong at least since 2014 and the switch to
workqueues.