From mboxrd@z Thu Jan 1 00:00:00 1970 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6F2EE1FF5F9; Tue, 9 Sep 2025 20:53:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal:i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757451193; cv=none; b=bgpEPivyn7al6cCzkcUfclWxzlOms7vir+2KTVM4TSfzQGdKnfwWg6dCOTxzKsnPf6/s1M8rSLzEfY21aZfipZuXJnokPOfZQItzoYTrhy8+rO7clsBlAa37TqoElTrB89kZ2qTJIOKNkUWUzJqGQZ5oZdDKn9DnrE9HhLgGUbo= ARC-Message-Signature:i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757451193; c=relaxed/simple; bh=lRNvw5Vej2PNjUqrctTC6POIwS0tW4NvDAn8P9RwduA=; h=Date:From:To:Cc:Subject:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=fapfX/k6V91EeXl3a7kEQNgVunn83Pw3RPYxMiYs73bsau5SZrh0ypE9WW7rftFWUrb2Z2E7MfEleoEsV00zsZn4PZOLNk9PQJE4S19fDuCcHmAlLR+GIXMrnWNDaCMm7mzHy+lvwkd5ISvqo4EZFymKYadnSOPKRs90li6Iy9U= ARC-Authentication-Results:i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=HCgi2h/o; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="HCgi2h/o" Received: by smtp.kernel.org (Postfix) with ESMTPSA id D628DC4CEF4; Tue, 9 Sep 2025 20:53:10 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757451193; bh=lRNvw5Vej2PNjUqrctTC6POIwS0tW4NvDAn8P9RwduA=; h=Date:From:To:Cc:Subject:In-Reply-To:References:From; b=HCgi2h/oQxBkyv+fOnr8t5TFe2z+Zv5AnIySO30rnXXrcif2Z2sLz6aR01iGQGeuI dZBqoYvtKNZZWUZq4gvUKQvHs/q5pp0nRCSZjlCVWy0HgfSu6R4wUWsCU2wEttsIc1 jEpxu+T8CL2JuMA/a+2gdTh68sm3ozEM5+s4sQARkQct8uykEGEn1HlAkgoFDwWLQZ OY66NiLuCqvt6lqitD5Vkb86PQsVKe4MtFDop2g87p4dW4bUfRx+Ik4GNg+RqES6Hj bQlqo5j4UWTwKa/G43M0oNoa2jTBn0NVhgOGtSKk0xGYOBUpPB4uZGtew6K4ThgU/H s7nARvIYuMpcw== Date: Tue, 9 Sep 2025 22:53:08 +0200 From: Mauro Carvalho Chehab To: Jonathan Corbet Cc: Linux Doc Mailing List , =?UTF-8?B?QmrDtnJu?= Roy Baron , Alex Gaynor , Alice Ryhl , Boqun Feng , Gary Guo , Trevor Gross , linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org Subject: Re: [PATCH v4 08/19] tools/docs: sphinx-build-wrapper: add a wrapper for sphinx-build Message-ID: <20250909225308.30a42062@foz.lan> In-Reply-To: <87y0qnv4j2.fsf@trenco.lwn.net> References: <87plbzwubl.fsf@trenco.lwn.net> <7tk2mkydbcblodhipoddued5smsc3ifnmeqen5wv7eu3mbmvgi@nwxqo5366umj> <87y0qnv4j2.fsf@trenco.lwn.net> X-Mailer: Claws Mail 4.3.1 (GTK 3.24.49; x86_64-redhat-linux-gnu) Precedence: bulk X-Mailing-List: rust-for-linux@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit Em Tue, 09 Sep 2025 12:56:17 -0600 Jonathan Corbet escreveu: > Mauro Carvalho Chehab writes: > > > Basically, what happens is that the number of jobs can be on > > different places: > > There is a lot of complexity there, and spread out between __init__(), > run_sphinx(), and handle_pdf(). Is there any way to create a single > figure_out_how_many_damn_jobs() and coalesce that logic there? That > would help make that part of the system a bit more comprehensible. I'll try to better organize it, but run_sphinx() does something different than handle_pdf(): - run_sphinx: claims all tokens; - handle_pdf: use future.concurrent and handle parallelism inside it. Perhaps I can move the future.concurrent parallelism to jobserver library to simplify the code a little bit while offering an interface somewhat similar to run_sphinx logic. Let's see if I can find a way to do it while keeping the code generic (*). Will take a look on it probably on Thursday of Friday. (*) I did one similar attempt at devel time adding a subprocess call wrapper there, but didn't like much the solution, but this was before the need to use futures.concurrent. > That said, I've been unable to make this change break in my testing. I > guess I'm not seeing a lot of impediments to applying the next version > at this point. Great! I'll probably be respinning the next (hopefully final) version by the end of this week, if I don't get sidetracked with other things. Thanks, Mauro