X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Date: Wed, 28 Jan 2026 13:04:02 +0100
Message-Id:
Subject: Re: [PATCH RFC v6 05/26] nova-core: mm: Add support to use PRAMIN windows to write to VRAM
Cc: "Zhi Wang", "Maarten Lankhorst", "Maxime Ripard", "Thomas Zimmermann", "David Airlie", "Simona Vetter", "Jonathan Corbet", "Alex Deucher", "Christian Koenig", "Jani Nikula", "Joonas Lahtinen", "Rodrigo Vivi", "Tvrtko Ursulin", "Huang Rui", "Matthew Auld", "Matthew Brost", "Lucas De Marchi", "Thomas Hellstrom", "Helge Deller", "Alice Ryhl", "Miguel Ojeda", "Alex Gaynor", "Boqun Feng", "Gary Guo", "Bjorn Roy Baron", "Benno Lossin", "Andreas Hindborg", "Trevor Gross", "John Hubbard", "Alistair Popple", "Timur Tabi", "Edwin Peer", "Alexandre Courbot", "Andrea Righi", "Andy Ritger", "Alexey Ivanov", "Balbir Singh", "Philipp Stanner", "Elle Rhumsaa", "Daniel Almeida"
To: "Joel Fernandes"
From: "Danilo Krummrich"
References: <20260120204303.3229303-1-joelagnelf@nvidia.com> <20260120204303.3229303-6-joelagnelf@nvidia.com> <20260121100745.2b5a58e5.zhiw@nvidia.com>
In-Reply-To:

On Fri Jan 23, 2026 at 12:16 AM CET, Joel Fernandes wrote:
> My plan is to make TLB and PRAMIN use immutable references in their function
> calls and then implement internal locking. I've already done this for the GPU
> buddy functions, so it should be doable, and we'll keep it consistent. As a
> result, we will have finer-grained locking on the memory management objects
> instead of requiring a global lock on a common GpuMm object. I'll plan on
> doing this for v7.
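For reference, here is a rough sketch of that interior-locking pattern as I
read it (hypothetical types, not actual nova-core code, and I'm assuming a
1 MiB window for illustration): the methods take &self and serialize through
an object-internal lock, so callers never need a global GpuMm lock.

```rust
use std::sync::Mutex;

// Hypothetical stand-in for the PRAMIN window object.
struct Pramin {
    // The window state is protected by the object's own lock.
    window_base: Mutex<u64>,
}

impl Pramin {
    fn new() -> Self {
        Pramin { window_base: Mutex::new(0) }
    }

    // Takes &self, not &mut self: serialization happens via the internal
    // Mutex, so the object can be shared between threads freely.
    fn set_window(&self, vram_addr: u64) -> u64 {
        let mut base = self.window_base.lock().unwrap();
        *base = vram_addr & !0xf_ffff; // align down to the (assumed) 1 MiB window
        *base
    }
}

fn main() {
    let pramin = Pramin::new();
    println!("{:#x}", pramin.set_window(0x0123_4567)); // prints 0x1200000
}
```

With this shape, sharing the object (e.g. behind an Arc) works without any
caller-visible mutable borrow, which is what makes the finer-grained locking
per object possible.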
>
> Also, the PTE allocation race you mentioned is already handled by PRAMIN
> serialization. Since threads must hold the PRAMIN lock to write page table
> entries, concurrent writers are not possible:
>
> Thread A: acquire PRAMIN lock
> Thread A: read PDE (via PRAMIN) -> NULL
> Thread A: alloc PT page, write PDE
> Thread A: release PRAMIN lock
>
> Thread B: acquire PRAMIN lock
> Thread B: read PDE (via PRAMIN) -> sees A's pointer
> Thread B: uses existing PT page, no allocation needed

This won't work unfortunately.

We have to separate allocations and modifications of the page table. Or in
other words, we must not allocate new PDEs or PTEs while holding the lock
protecting the page table from modifications.

Once we have VM_BIND in nova-drm, we will have the situation that userspace
passes jobs to modify the GPU's virtual address space and hence the page
tables. Such a job has three main stages.

(1) The submit stage.

    This is where the job is initialized, dependencies are set up and the
    driver has to pre-allocate all kinds of structures that are required
    throughout the subsequent stages of the job.

(2) The run stage.

    This is the stage where the job is staged for execution and its DMA fence
    has been made public (i.e. it is accessible by userspace).

    This is the stage where we are in the DMA fence signalling critical
    section, hence we can't do any non-atomic allocations, since otherwise we
    could deadlock in MMU notifier callbacks, for instance.

    This is the stage where the page table is actually modified. Hence, we
    can't acquire any locks that might be held elsewhere while doing
    non-atomic allocations. Also note that this is transitive: e.g. if you
    take lock A and somewhere else a lock B is taken while A is already held,
    and we do non-atomic allocations while holding B, then A can't be held in
    the DMA fence signalling critical path either.
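    To make the allocate/modify separation concrete, a hypothetical sketch
    (again, not actual nova-core code): PT pages are allocated *outside* the
    page-table lock, and under the lock we only install a pre-allocated page,
    handing it back to the caller if another thread won the race so it can be
    freed later, outside of any locks.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical page table: maps a PDE index to its allocated PT page.
struct PageTable {
    pdes: Mutex<HashMap<usize, Box<[u64; 512]>>>,
}

impl PageTable {
    // Allocation happens outside the lock (e.g. in the submit stage).
    fn prealloc_pt() -> Box<[u64; 512]> {
        Box::new([0u64; 512])
    }

    // Under the lock we only link a page in; we never allocate here.
    // On a lost race the page is returned to the caller for deferred freeing.
    fn install_pde(&self, idx: usize, page: Box<[u64; 512]>) -> Option<Box<[u64; 512]>> {
        let mut pdes = self.pdes.lock().unwrap();
        if pdes.contains_key(&idx) {
            return Some(page); // someone else installed it first
        }
        pdes.insert(idx, page);
        None
    }
}

fn main() {
    let pt = PageTable { pdes: Mutex::new(HashMap::new()) };
    // First installer wins; the second gets its page handed back.
    assert!(pt.install_pde(3, PageTable::prealloc_pt()).is_none());
    assert!(pt.install_pde(3, PageTable::prealloc_pt()).is_some());
    println!("PDE 3 installed exactly once");
}
```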
    It is also worth noting that this is the stage where we know the exact
    operations we have to execute based on the VM_BIND request from userspace.

    For instance, in the submit stage we may only know that userspace wants
    us to map a BO with a certain offset in the GPU's virtual address space
    at [0x0, 0x1000000]. What we don't know is which exact operations this
    requires, i.e. "What do we have to unmap first?", "Are there any
    overlapping mappings that we have to truncate?", etc.

    So, we have to consider this when we pre-allocate in the submit stage.

(3) The cleanup stage.

    This is where the job has been signaled and hence has left the DMA fence
    signalling critical section.

    In this stage the job is cleaned up, which includes freeing data that is
    not required anymore, such as PTEs and PDEs.
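The three stages above could be sketched roughly like this (hypothetical
types and worst-case numbers purely for illustration; a real job obviously
carries much more state): submit pre-allocates for the worst case, run only
moves pre-allocated pages around (no allocation, no freeing inside the DMA
fence signalling critical section), and cleanup frees leftovers and unlinked
PTEs/PDEs once the fence has signalled.

```rust
// Stand-in for one page-table page (PTE or PDE level).
struct PtPage([u64; 512]);

struct VmBindJob {
    prealloc: Vec<Box<PtPage>>,      // filled during submit
    linked: Vec<Box<PtPage>>,        // pages installed in the page table
    deferred_free: Vec<Box<PtPage>>, // unlinked pages, freed in cleanup
    signalled: bool,
}

impl VmBindJob {
    // (1) Submit stage: allocating is still allowed here.
    fn submit(worst_case: usize) -> Self {
        VmBindJob {
            prealloc: (0..worst_case).map(|_| Box::new(PtPage([0; 512]))).collect(),
            linked: Vec::new(),
            deferred_free: Vec::new(),
            signalled: false,
        }
    }

    // (2) Run stage: consume pre-allocations only; unmaps just move pages
    // to the deferred-free list, nothing is allocated or freed here.
    fn run(&mut self, map: usize, unmap: usize) -> Result<(), &'static str> {
        if map > self.prealloc.len() {
            return Err("submit did not pre-allocate for the worst case");
        }
        for _ in 0..map {
            let page = self.prealloc.pop().unwrap();
            self.linked.push(page); // link a new PT page into the tree
        }
        for _ in 0..unmap {
            if let Some(old) = self.linked.pop() {
                self.deferred_free.push(old); // unlink only; free later
            }
        }
        self.signalled = true; // pretend the fence signalled after the run
        Ok(())
    }

    // (3) Cleanup stage: out of the critical section, freeing is safe.
    // Returns how many pages were released.
    fn cleanup(&mut self) -> usize {
        assert!(self.signalled);
        let n = self.deferred_free.len() + self.prealloc.len();
        self.deferred_free.clear(); // free unlinked PTEs/PDEs
        self.prealloc.clear();      // free unused worst-case reserves
        n
    }
}

fn main() {
    let mut job = VmBindJob::submit(8); // worst case: 8 PT pages
    job.run(3, 1).unwrap();             // actual ops needed only 3 maps, 1 unmap
    println!("freed {} pages in cleanup", job.cleanup());
}
```

The point of the worst-case pre-allocation is exactly the uncertainty
described above: at submit time we don't yet know which unmaps or truncations
the request implies, so the reserve has to cover all of them.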