Date: Wed, 11 Feb 2026 08:16:38 +0000
In-Reply-To: <20260210155025.1b9ad2f1@fedora>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260209155843.725dcfe1@fedora>
 <20260210101525.7fb85f25@fedora>
 <20260210134913.33cb674f@fedora>
 <20260210145156.108ab292@fedora>
 <20260210155025.1b9ad2f1@fedora>
Subject: Re: [RFC PATCH 2/4] rust: sync: Add dma_fence abstractions
From: Alice Ryhl
To: Boris Brezillon
Cc: Christian König, Philipp Stanner, phasta@kernel.org,
 Danilo Krummrich, David Airlie, Simona Vetter, Gary Guo, Benno Lossin,
 Daniel Almeida, Joel Fernandes, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, rust-for-linux@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

On Tue, Feb 10, 2026 at 03:50:25PM +0100, Boris Brezillon wrote:
> On Tue, 10 Feb 2026 14:11:12 +0000
> Alice Ryhl wrote:
>
> > On Tue, Feb 10, 2026 at 02:51:56PM +0100, Boris Brezillon wrote:
> > > On Tue, 10 Feb 2026 13:26:48 +0000
> > > Alice Ryhl wrote:
> > >
> > > > On Tue, Feb 10, 2026 at 01:49:13PM +0100, Boris Brezillon wrote:
> > > > > On Tue, 10 Feb 2026 10:15:04 +0000
> > > > > Alice Ryhl wrote:
> > > > >
> > > > > > /// The owner of this value must ensure that this fence is signalled.
> > > > > > struct MustBeSignalled<'fence> { ... }
> > > > > >
> > > > > > /// Proof value indicating that the fence has either already been
> > > > > > /// signalled, or it will be. The lifetime ensures that you cannot mix
> > > > > > /// up the proof value.
> > > > > > struct WillBeSignalled<'fence> { ... }
> > > > >
> > > > > Sorry, I have more questions, unfortunately. It seems that
> > > > > {Must,Will}BeSignalled target specific fences (at least that's
> > > > > what the doc and the 'fence lifetime say), but in practice, the
> > > > > WorkItem backing the scheduler can queue 0-N jobs (0 if no jobs
> > > > > have their deps met, and N > 1 if more than one job is ready).
> > > > > Similarly, an IRQ handler can signal 0-N fences (the IRQ may have
> > > > > nothing to do with job completion, or multiple jobs may have
> > > > > completed).
> > > > > How is this MustBeSignalled object going to be instantiated in
> > > > > practice if it's done before the DmaFenceWorkItem::run() function
> > > > > is called?
> > > >
> > > > The {Must,Will}BeSignalled closure pair needs to wrap the piece of
> > > > code that ensures a specific fence is signalled. If you have code
> > > > that manages a collection of fences and invokes code for specific
> > > > fences depending on outside conditions, then that's a different
> > > > matter.
> > > >
> > > > After all, transfer_to_wq() has two components:
> > > > 1. Logic to ensure any spawned workqueue job eventually gets to run.
> > > > 2. Once the individual job runs, logic specific to the one fence
> > > >    ensures that this one fence gets signalled.
> > >
> > > Okay, that's a change compared to how things are modeled in C (and in
> > > JobQueue) at the moment: the WorkItem is not embedded in a specific
> > > job, it's something that's attached to the JobQueue. The idea being
> > > that the WorkItem represents a task to be done on the queue itself
> > > (check whether the first element in the queue is ready for execution),
> > > not on a particular job. Now, we could change that and have a per-job
> > > WorkItem, but ultimately we'll have to make sure jobs are dequeued in
> > > order (deps on JobN can be met before deps on Job0, but we still want
> > > JobN to be submitted after Job0), and we'd pay the WorkItem overhead
> > > once per Job instead of once per JobQueue. Probably not the end of
> > > the world, but it's still worth considering.
> >
> > It sounds like the fix here is to have transfer_to_job_queue() instead
> > of trying to do it at the workqueue level.
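[For readers following along: the in-order dequeue constraint Boris
raises can be approximated in plain Rust with std. This is only a
sketch; JobQueue, Job, deps_met, and run_ready_jobs() are illustrative
names, not the proposed kernel API.]

```rust
use std::collections::VecDeque;

// Illustrative stand-ins only; not the proposed kernel types.
struct Job {
    id: u32,
    deps_met: bool,
}

struct JobQueue {
    jobs: VecDeque<Job>,
}

impl JobQueue {
    /// Component 1: queue-level logic. The single queue-level work item
    /// only pops jobs from the head, so JobN is never submitted before
    /// Job0 even if JobN's dependencies resolve first. Returns the ids
    /// of the jobs it submitted.
    fn run_ready_jobs(&mut self) -> Vec<u32> {
        let mut submitted = Vec::new();
        while let Some(head) = self.jobs.front() {
            if !head.deps_met {
                // Head not ready: stop, even if later jobs are ready,
                // to preserve submission order.
                break;
            }
            let job = self.jobs.pop_front().unwrap();
            // Component 2 would live here: per-job logic that takes the
            // scoped MustBeSignalled obligation for this job's fence and
            // turns it into a WillBeSignalled proof.
            submitted.push(job.id);
        }
        submitted
    }
}

fn main() {
    let mut q = JobQueue {
        jobs: VecDeque::from([
            Job { id: 0, deps_met: true },
            Job { id: 1, deps_met: true },
            Job { id: 2, deps_met: false }, // blocks job 3 despite its deps
            Job { id: 3, deps_met: true },
        ]),
    };
    assert_eq!(q.run_ready_jobs(), vec![0, 1]);
}
```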
>
> Hm, so Job would be something like this (naming/trait-def are just
> suggestions to get the discussion going):
>
> trait JobConsumer {
>     type FenceType;
>     type JobData;
>
>     fn run(self: MustBeSignalled<Self::FenceType>)
>         -> Result<WillBeSignalled<Self::FenceType>>;
> }
>
> struct Job<T: JobConsumer> {
>     fence: MustBeSignalled<T::FenceType>,
>     data: T::JobData,
> }

The fence field of Job would be PublishedFence or PrivateFence (or just
DriverDmaFence). The MustBeSignalled/WillBeSignalled types should only
exist temporarily in a function scope. Any time you transfer from one
function scope to another (like our transfer_to_job_queue() or
transfer_to_wq() examples), that results in finishing the
MustBeSignalled/WillBeSignalled scope on one thread and creating a new
MustBeSignalled/WillBeSignalled scope on another thread.

One could imagine a model where there is no lifetime and you can carry
the value around as you wish. That model works okay in most regards,
but it gives up the ability to ensure that dma_fence_lockdep_map is
properly configured to catch mistakes. The lifetime prohibits you from
using normal ownership semantics to e.g. transfer the MustBeSignalled
into a random workqueue, enforcing that you can only transfer it into a
workqueue via the provided methods, which set up the lockdep
dependencies correctly and ensure that dma_fence_lockdep_map is taken
in the workqueue job too.

> I guess that would do.
>
> And then we need to flag the WorkItem that's exposed by the JobQueue
> as a DmaFenceWorkItem so that bindings::dma_fence_begin_signalling()
> is called before entry and lockdep can do its job and check that
> nothing forbidden happens in this WorkItem.

In the case of JobQueue, it may make sense to just have the job queue
implementation do that manually. I do not think the workqueue-level API
can fully enforce that the job queue makes no mistakes here.

> > > > And {Must,Will}BeSignalled exists to help model part (2.). But what
> > > > you described with the IRQ callback falls into (1.) instead, which
> > > > is outside the scope of {Must,Will}BeSignalled (or at least requires
> > > > more complex APIs).
> > >
> > > For IRQ callbacks, it's not just about making sure they run, but also
> > > about making sure nothing in there can lead to deadlocks, which is
> > > basically #2, except it's not scoped to a particular fence. It's just
> > > a "fences can be signalled from here" marker. We could restrict it to
> > > "fences of this particular implementation can be signalled from
> > > here", but not "this particular fence instance will be signalled
> > > next, if any", because we don't know that until we've walked some HW
> > > state to figure out which job is complete and thus which fence we
> > > need to signal (the interrupt we get is most likely multiplexing
> > > completion on multiple GPU contexts, so before we can even get to our
> > > per-context in-flight-jobs FIFO, we need to demux this thing).
> >
> > All I can say is that this is a different use-case for the C API
> > dma_fence_begin_signalling(). This different usage also seems useful,
> > but it would be one that does not involve {Must,Will}BeSignalled
> > arguments at all.
> >
> > After all, dma_fence_begin_signalling() only requires those arguments
> > if you want to convert a PrivateFence into a PublishedFence. (I guess
> > a better name is PublishableFence.) If you're not trying to prove that
> > a specific fence will be signalled, then you don't need the
> > {Must,Will}BeSignalled arguments.
>
> Okay, so that would be another function returning some sort of guard
> then? What I find confusing is the fact that
> dma_fence::dma_fence_begin_signalling() matches the C function name,
> which is not per-fence but just this lock-guard model flagging a
> section from which any fence can be signalled, so maybe we should name
> your dma_fence_begin_signalling() proposal differently, dunno.

Yes, we would need multiple methods that call
dma_fence_begin_signalling(), depending on why you are calling it.

Alice
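[For readers skimming the thread: the scoped proof-value model Alice
describes can be approximated in plain Rust with std. This is only a
sketch; SignallingScope, obligation(), and signal() are illustrative
names, not the proposed kernel API, and the real thing would bracket
dma_fence_begin_signalling()/dma_fence_end_signalling().]

```rust
use std::cell::Cell;
use std::marker::PhantomData;

/// A signalling scope; stands in for the region bracketed by
/// dma_fence_begin_signalling() / dma_fence_end_signalling().
struct SignallingScope {
    signalled: Cell<bool>,
}

/// Obligation: the holder must signal the fence before the scope ends.
/// Borrowing the scope means the value cannot outlive it, so it cannot
/// be stashed in an arbitrary workqueue item via normal ownership moves.
struct MustBeSignalled<'scope> {
    scope: &'scope SignallingScope,
}

/// Proof that the fence has been (or is guaranteed to be) signalled,
/// tied to the same scope so proofs from two scopes cannot be mixed up.
struct WillBeSignalled<'scope> {
    _scope: PhantomData<&'scope SignallingScope>,
}

impl SignallingScope {
    fn begin() -> Self {
        SignallingScope { signalled: Cell::new(false) }
    }

    fn obligation(&self) -> MustBeSignalled<'_> {
        MustBeSignalled { scope: self }
    }
}

impl<'scope> MustBeSignalled<'scope> {
    /// Consuming the obligation is the only way to mint the proof.
    fn signal(self) -> WillBeSignalled<'scope> {
        self.scope.signalled.set(true);
        WillBeSignalled { _scope: PhantomData }
    }
}

fn main() {
    let scope = SignallingScope::begin();
    let must = scope.obligation();
    let _proof: WillBeSignalled<'_> = must.signal();
    assert!(scope.signalled.get());
    // Trying to return `_proof` or `must` out of `scope`'s region is a
    // borrow-check error, which is the property the lifetime buys here.
}
```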