From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rik van Riel
To: linux-kernel@vger.kernel.org
Cc: robin.murphy@arm.com, joro@8bytes.org, will@kernel.org,
	iommu@lists.linux.dev, kyle@mcmartin.ca, kernel-team@meta.com,
	Rik van Riel, Rik van Riel
Subject: [PATCH 2/5] iova: drop dead cached_node / cached32_node infrastructure
Date: Tue, 12 May 2026 22:00:19 -0400
Message-ID: <20260513020304.1528751-3-riel@surriel.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260513020304.1528751-1-riel@surriel.com>
References: <20260513020304.1528751-1-riel@surriel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Rik van Riel

After the augmented-rbtree port, cached_node and cached32_node are still
maintained on every insert and delete but are never consulted on the
allocation path. Drop the fields and their helpers.

The one piece of useful work in __cached_rbnode_delete_update() was
resetting iovad->max32_alloc_size when an iova in the 32-bit range was
freed (so the next 32-bit alloc can retry). That logic is preserved by
moving it inline into remove_iova().

No external consumers reference the cached_node fields.
Assisted-by: Claude:claude-opus-4.7
Signed-off-by: Rik van Riel
---
 drivers/iommu/iova.c | 35 +++--------------------------------
 include/linux/iova.h |  2 --
 2 files changed, 3 insertions(+), 34 deletions(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 953188e296f0..c358ce981cae 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -57,8 +57,6 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 
 	spin_lock_init(&iovad->iova_rbtree_lock);
 	iovad->rbroot = RB_ROOT;
-	iovad->cached_node = &iovad->anchor.node;
-	iovad->cached32_node = &iovad->anchor.node;
 	iovad->granule = granule;
 	iovad->start_pfn = start_pfn;
 	iovad->dma_32bit_pfn = 1UL << (32 - iova_shift(iovad));
@@ -71,34 +69,6 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 }
 EXPORT_SYMBOL_GPL(init_iova_domain);
 
-static void
-__cached_rbnode_insert_update(struct iova_domain *iovad, struct iova *new)
-{
-	if (new->pfn_hi < iovad->dma_32bit_pfn)
-		iovad->cached32_node = &new->node;
-	else
-		iovad->cached_node = &new->node;
-}
-
-static void
-__cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
-{
-	struct iova *cached_iova;
-
-	cached_iova = to_iova(iovad->cached32_node);
-	if (free == cached_iova ||
-	    (free->pfn_hi < iovad->dma_32bit_pfn &&
-	     free->pfn_lo >= cached_iova->pfn_lo))
-		iovad->cached32_node = rb_next(&free->node);
-
-	if (free->pfn_lo < iovad->dma_32bit_pfn)
-		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
-
-	cached_iova = to_iova(iovad->cached_node);
-	if (free->pfn_lo >= cached_iova->pfn_lo)
-		iovad->cached_node = rb_next(&free->node);
-}
-
 /* Insert the iova into domain rbtree by holding writer lock */
 static void
 iova_insert_rbtree(struct rb_root *root, struct iova *iova,
@@ -221,7 +191,6 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 	new->pfn_hi = new_pfn + size - 1;
 
 	iova_insert_rbtree(&iovad->rbroot, new, gap_node);
-	__cached_rbnode_insert_update(iovad, new);
 
 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
 	return 0;
@@ -308,7 +277,9 @@ static void remove_iova(struct iova_domain *iovad, struct iova *iova)
 	struct iova *next_iova = NULL;
 
 	assert_spin_locked(&iovad->iova_rbtree_lock);
-	__cached_rbnode_delete_update(iovad, iova);
+
+	if (iova->pfn_lo < iovad->dma_32bit_pfn)
+		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
 
 	next_node = rb_next(&iova->node);
 	if (next_node) {
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 52635a72c5c5..3c4cc81e5182 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -30,8 +30,6 @@ struct iova_rcache;
 struct iova_domain {
 	spinlock_t	iova_rbtree_lock; /* Lock to protect update of rbtree */
 	struct rb_root	rbroot;		/* iova domain rbtree root */
-	struct rb_node	*cached_node;	/* Save last alloced node */
-	struct rb_node	*cached32_node; /* Save last 32-bit alloced node */
 	unsigned long	granule;	/* pfn granularity for this domain */
 	unsigned long	start_pfn;	/* Lower limit for this domain */
 	unsigned long	dma_32bit_pfn;
-- 
2.52.0