Date: Thu, 21 Aug 2025 13:54:37 +0200
From: Boris Brezillon
To: Adrian Larumbe
Cc: Caterina Shablia, Maarten Lankhorst, Maxime Ripard,
 Thomas Zimmermann, David Airlie, Simona Vetter, Frank Binns,
 Matt Coster, Karol Herbst, Lyude Paul, Danilo Krummrich,
 Steven Price, Liviu Dudau, Lucas De Marchi, Thomas Hellström,
 Rodrigo Vivi, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, nouveau@lists.freedesktop.org,
 intel-xe@lists.freedesktop.org, asahi@lists.linux.dev, Asahi Lina
Subject: Re: [PATCH v4 4/7] drm/gpuvm: Add a helper to check if two VA can be merged
Message-ID: <20250821135437.499d9a6a@fedora>
References: <20250707170442.1437009-1-caterina.shablia@collabora.com>
 <20250707170442.1437009-5-caterina.shablia@collabora.com>
Organization: Collabora

On Tue, 22 Jul 2025 20:17:14 +0100
Adrian Larumbe wrote:

> On 07.07.2025 17:04, Caterina Shablia wrote:
> > From: Boris Brezillon
> >
> > We are going to add flags/properties that will impact the VA merging
> > ability. Instead of sprinkling tests all over the place in
> > __drm_gpuvm_sm_map(), let's add a helper aggregating all these checks
> > and call it for every existing VA we walk through in the
> > __drm_gpuvm_sm_map() loop.
> >
> > Signed-off-by: Boris Brezillon
> > Signed-off-by: Caterina Shablia
> > ---
> >  drivers/gpu/drm/drm_gpuvm.c | 47 +++++++++++++++++++++++++++++--------
> >  1 file changed, 37 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> > index 05978c5c38b1..dc3c2f906400 100644
> > --- a/drivers/gpu/drm/drm_gpuvm.c
> > +++ b/drivers/gpu/drm/drm_gpuvm.c
> > @@ -2098,12 +2098,48 @@ op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
> >  	return fn->sm_step_unmap(&op, priv);
> >  }
> >
> > +static bool can_merge(struct drm_gpuvm *gpuvm, const struct drm_gpuva *a,
> > +		      const struct drm_gpuva *b)
> > +{
> > +	/* Only GEM-based mappings can be merged, and they must point to
> > +	 * the same GEM object.
> > +	 */
> > +	if (a->gem.obj != b->gem.obj || !a->gem.obj)
> > +		return false;
> > +
> > +	/* Let's keep things simple for now and force all flags to match. */
> > +	if (a->flags != b->flags)
> > +		return false;
> > +
> > +	/* Order VAs for the rest of the checks. */
> > +	if (a->va.addr > b->va.addr)
> > +		swap(a, b);
> > +
> > +	/* We assume the caller already checked that VAs overlap or are
> > +	 * contiguous.
> > +	 */
> > +	if (drm_WARN_ON(gpuvm->drm, b->va.addr > a->va.addr + a->va.range))
> > +		return false;
> > +
> > +	/* We intentionally ignore u64 underflows because all we care about
> > +	 * here is whether the VA diff matches the GEM offset diff.
> > +	 */
> > +	return b->va.addr - a->va.addr == b->gem.offset - a->gem.offset;
>
> If we're reordering the VAs for the rest of the checks, when could
> underflow happen?

I think this comment predates the re-ordering (I originally tried not
to order VAs).
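
That said, the comment's claim holds either way: both subtractions wrap
modulo 2^64, so the equality test gives the same verdict regardless of
operand order. A minimal userspace sketch (hypothetical addresses and
offsets, not taken from the patch) illustrating the point:

/* Minimal sketch, not kernel code: the u64 equality test used by
 * can_merge() is unaffected by wrapping, because both sides wrap by
 * the same modulus.
 */
#include <assert.h>
#include <stdint.h>

int main(void)
{
	/* Hypothetical VAs: a at 0x1000 (offset 0x2000), b at 0x1800. */
	uint64_t a_addr = 0x1000, a_off = 0x2000;
	uint64_t b_addr = 0x1800, b_off = 0x2800;

	/* Matching diffs (0x800 each): mergeable. */
	assert(b_addr - a_addr == b_off - a_off);

	/* Operands swapped: both subtractions wrap to the same huge
	 * value, so the verdict is unchanged. */
	assert(a_addr - b_addr == a_off - b_off);

	/* Offset moving backwards while the VA moves forward: only the
	 * offset diff wraps, so the two diffs cannot match. */
	b_off = 0x1800;
	assert(b_addr - a_addr != b_off - a_off);

	return 0;
}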
> > +}
> > +
> >  static int
> >  __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> >  		   const struct drm_gpuvm_ops *ops, void *priv,
> >  		   const struct drm_gpuvm_map_req *req)
> >  {
> >  	struct drm_gpuva *va, *next;
> > +	struct drm_gpuva reqva = {
> > +		.va.addr = req->va.addr,
> > +		.va.range = req->va.range,
> > +		.gem.offset = req->gem.offset,
> > +		.gem.obj = req->gem.obj,
> > +		.flags = req->flags,
> > +	};
> >  	u64 req_end = req->va.addr + req->va.range;
> >  	int ret;
> >
> > @@ -2116,12 +2152,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> >  		u64 addr = va->va.addr;
> >  		u64 range = va->va.range;
> >  		u64 end = addr + range;
> > -		bool merge = !!va->gem.obj;
> > +		bool merge = can_merge(gpuvm, va, &reqva);
> >
> >  		if (addr == req->va.addr) {
> > -			merge &= obj == req->gem.obj &&
> > -				 offset == req->gem.offset;
> > -
> >  			if (end == req_end) {
> >  				ret = op_unmap_cb(ops, priv, va, merge);
> >  				if (ret)
> > @@ -2163,8 +2196,6 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> >  			};
> >  			struct drm_gpuva_op_unmap u = { .va = va };
> >
> > -			merge &= obj == req->gem.obj &&
> > -				 offset + ls_range == req->gem.offset;
> >  			u.keep = merge;
> >
> >  			if (end == req_end) {
> > @@ -2196,10 +2227,6 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> >  				break;
> >  			}
> >  		} else if (addr > req->va.addr) {
> > -			merge &= obj == req->gem.obj &&
> > -				 offset == req->gem.offset +
> > -					   (addr - req->va.addr);
> > -
> >  			if (end == req_end) {
> >  				ret = op_unmap_cb(ops, priv, va, merge);
> >  				if (ret)
> > --
> > 2.47.2
>
> Adrian Larumbe
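
For anyone who wants to poke at the merge rules outside the kernel
tree, here's a self-contained userspace approximation (stand-in types
and names; the real ones are struct drm_gpuva and struct
drm_gpuvm_map_req in drm_gpuvm.c). It also mirrors the reqva trick
above: the map request is adapted into the comparison type once, so a
single pairwise helper covers both the VA-vs-VA and VA-vs-request
cases.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the fields of struct drm_gpuva that the merge check
 * looks at. */
struct fake_va {
	uint64_t addr, range, offset;
	void *obj;
	unsigned int flags;
};

static bool fake_can_merge(const struct fake_va *a, const struct fake_va *b)
{
	/* Same (non-NULL) backing object and identical flags. */
	if (a->obj != b->obj || !a->obj)
		return false;
	if (a->flags != b->flags)
		return false;

	/* Order the two VAs, then require overlap or contiguity. */
	if (a->addr > b->addr) {
		const struct fake_va *tmp = a;
		a = b;
		b = tmp;
	}
	if (b->addr > a->addr + a->range)
		return false;

	/* VA diff must match backing-store offset diff. */
	return b->addr - a->addr == b->offset - a->offset;
}

int main(void)
{
	int obj; /* stands in for a GEM object */
	struct fake_va a = { .addr = 0x1000, .range = 0x800,
			     .offset = 0, .obj = &obj };
	/* The reqva trick: describe the incoming map request with the
	 * same type, so one helper handles both comparisons. */
	struct fake_va reqva = { .addr = 0x1800, .range = 0x1000,
				 .offset = 0x800, .obj = &obj };

	/* Contiguous VA, contiguous offset: prints "mergeable: 1". */
	printf("mergeable: %d\n", fake_can_merge(&a, &reqva));
	return 0;
}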