Date: Wed, 6 May 2026 17:53:36 +0000
From: Samiullah Khawaja
To: Leon Romanovsky
Cc: Marek Szyprowski, Robin Murphy, Jon Mason, Dave Jiang, Allen Hubbe,
	iommu@lists.linux.dev, linux-kernel@vger.kernel.org, ntb@lists.linux.dev
Subject: Re: [PATCH v2 4/6] dma-debug: Record DMA attributes in debug entry
References: <20260501-dma-attrs-debug-v2-0-8dbac75cd501@nvidia.com>
 <20260501-dma-attrs-debug-v2-4-8dbac75cd501@nvidia.com>
In-Reply-To: <20260501-dma-attrs-debug-v2-4-8dbac75cd501@nvidia.com>

On Fri, May 01, 2026 at 09:35:08AM +0300, Leon Romanovsky wrote:
>From: Leon Romanovsky
>
>To enable reliable comparison of DMA attributes between map and
>unmap operations, store the attribute value in dma_debug_entry.
>
>Signed-off-by: Leon Romanovsky
>---
> kernel/dma/debug.c | 48 +++++++++++++++++++++++++++++-------------------
> 1 file changed, 29 insertions(+), 19 deletions(-)
>
>diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
>index 3b53495337f5c..f07e6a1e9fbab 100644
>--- a/kernel/dma/debug.c
>+++ b/kernel/dma/debug.c
>@@ -63,7 +63,7 @@ enum map_err_types {
>  * @sg_mapped_ents: 'mapped_ents' from dma_map_sg
>  * @paddr: physical start address of the mapping
>  * @map_err_type: track whether dma_mapping_error() was checked
>- * @is_cache_clean: driver promises not to write to buffer while mapped
>+ * @attrs: dma attributes
>  * @stack_len: number of backtrace entries in @stack_entries
>  * @stack_entries: stack of backtrace history
>  */
>@@ -78,7 +78,7 @@ struct dma_debug_entry {
> 	int sg_mapped_ents;
> 	phys_addr_t paddr;
> 	enum map_err_types map_err_type;
>-	bool is_cache_clean;
>+	unsigned long attrs;
> #ifdef CONFIG_STACKTRACE
> 	unsigned int stack_len;
> 	unsigned long stack_entries[DMA_DEBUG_STACKTRACE_ENTRIES];
>@@ -478,6 +478,9 @@ static int active_cacheline_insert(struct dma_debug_entry *entry,
> 				   bool *overlap_cache_clean)
> {
> 	phys_addr_t cln = to_cacheline_number(entry);
>+	bool is_cache_clean = entry->attrs &
>+			      (DMA_ATTR_DEBUGGING_IGNORE_CACHELINES |
>+			       DMA_ATTR_REQUIRE_COHERENT);
> 	unsigned long flags;
> 	int rc;
>
>@@ -495,12 +498,15 @@ static int active_cacheline_insert(struct dma_debug_entry *entry,
> 	if (rc == -EEXIST) {
> 		struct dma_debug_entry *existing;
>
>-		active_cacheline_inc_overlap(cln, entry->is_cache_clean);
>+		active_cacheline_inc_overlap(cln, is_cache_clean);
> 		existing = radix_tree_lookup(&dma_active_cacheline, cln);
> 		/* A lookup failure here after we got -EEXIST is unexpected. */
> 		WARN_ON(!existing);
> 		if (existing)
>-			*overlap_cache_clean = existing->is_cache_clean;
>+			*overlap_cache_clean =
>+				existing->attrs &
>+				(DMA_ATTR_DEBUGGING_IGNORE_CACHELINES |
>+				 DMA_ATTR_REQUIRE_COHERENT);
> 	}
> 	spin_unlock_irqrestore(&radix_lock, flags);
>
>@@ -544,12 +550,13 @@ void debug_dma_dump_mappings(struct device *dev)
> 		if (!dev || dev == entry->dev) {
> 			cln = to_cacheline_number(entry);
> 			dev_info(entry->dev,
>-				 "%s idx %d P=%pa D=%llx L=%llx cln=%pa %s %s\n",
>+				 "%s idx %d P=%pa D=%llx L=%llx cln=%pa %s %s attrs=0x%lx\n",
> 				 type2name[entry->type], idx,
> 				 &entry->paddr, entry->dev_addr,
> 				 entry->size, &cln,
> 				 dir2name[entry->direction],
>-				 maperr2str[entry->map_err_type]);
>+				 maperr2str[entry->map_err_type],
>+				 entry->attrs);
> 		}
> 	}
> 	spin_unlock_irqrestore(&bucket->lock, flags);
>@@ -575,14 +582,15 @@ static int dump_show(struct seq_file *seq, void *v)
> 		list_for_each_entry(entry, &bucket->list, list) {
> 			cln = to_cacheline_number(entry);
> 			seq_printf(seq,
>-				   "%s %s %s idx %d P=%pa D=%llx L=%llx cln=%pa %s %s\n",
>+				   "%s %s %s idx %d P=%pa D=%llx L=%llx cln=%pa %s %s attrs=0x%lx\n",
> 				   dev_driver_string(entry->dev),
> 				   dev_name(entry->dev),
> 				   type2name[entry->type], idx,
> 				   &entry->paddr, entry->dev_addr,
> 				   entry->size, &cln,
> 				   dir2name[entry->direction],
>-				   maperr2str[entry->map_err_type]);
>+				   maperr2str[entry->map_err_type],
>+				   entry->attrs);
> 		}
> 		spin_unlock_irqrestore(&bucket->lock, flags);
> 	}
>@@ -594,16 +602,14 @@ DEFINE_SHOW_ATTRIBUTE(dump);
>  * Wrapper function for adding an entry to the hash.
>  * This function takes care of locking itself.
>  */
>-static void add_dma_entry(struct dma_debug_entry *entry, unsigned long attrs)
>+static void add_dma_entry(struct dma_debug_entry *entry)
> {
>+	unsigned long attrs = entry->attrs;
> 	bool overlap_cache_clean;
> 	struct hash_bucket *bucket;
> 	unsigned long flags;
> 	int rc;
>
>-	entry->is_cache_clean = attrs & (DMA_ATTR_DEBUGGING_IGNORE_CACHELINES |
>-					 DMA_ATTR_REQUIRE_COHERENT);
>-
> 	bucket = get_hash_bucket(entry, &flags);
> 	hash_bucket_add(bucket, entry);
> 	put_hash_bucket(bucket, flags);
>@@ -612,9 +618,10 @@ static void add_dma_entry(struct dma_debug_entry *entry, unsigned long attrs)
> 	if (rc == -ENOMEM) {
> 		pr_err_once("cacheline tracking ENOMEM, dma-debug disabled\n");
> 		global_disable = true;
>-	} else if (rc == -EEXIST &&
>-		   !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
>-		   !(entry->is_cache_clean && overlap_cache_clean) &&
>+	} else if (rc == -EEXIST && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
>+		   !(attrs & (DMA_ATTR_DEBUGGING_IGNORE_CACHELINES |
>+			      DMA_ATTR_REQUIRE_COHERENT) &&
>+		     overlap_cache_clean) &&
> 		   dma_get_cache_alignment() >= L1_CACHE_BYTES &&
> 		   !(IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) &&
> 		     is_swiotlb_active(entry->dev))) {
>@@ -1250,6 +1257,7 @@ void debug_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
> 	entry->size = size;
> 	entry->direction = direction;
> 	entry->map_err_type = MAP_ERR_NOT_CHECKED;
>+	entry->attrs = attrs;
>
> 	if (!(attrs & DMA_ATTR_MMIO)) {
> 		check_for_stack(dev, phys);
>@@ -1258,7 +1266,7 @@ void debug_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
> 		check_for_illegal_area(dev, phys_to_virt(phys), size);
> 	}
>
>-	add_dma_entry(entry, attrs);
>+	add_dma_entry(entry);
> }
>
> void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
>@@ -1345,10 +1353,11 @@ void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
> 		entry->direction = direction;
> 		entry->sg_call_ents = nents;
> 		entry->sg_mapped_ents = mapped_ents;
>+		entry->attrs = attrs;
>
> 		check_sg_segment(dev, s);
>
>-		add_dma_entry(entry, attrs);
>+		add_dma_entry(entry);
> 	}
> }
>
>@@ -1440,8 +1449,9 @@ void debug_dma_alloc_coherent(struct device *dev, size_t size,

Unrelated to this patch/series, but I am wondering whether we should
rename this function to debug_dma_alloc_attrs() as it is called from
dma_alloc_attrs().

> 	entry->size = size;
> 	entry->dev_addr = dma_addr;
> 	entry->direction = DMA_BIDIRECTIONAL;
>+	entry->attrs = attrs;
>
>-	add_dma_entry(entry, attrs);
>+	add_dma_entry(entry);
> }
>
> void debug_dma_free_coherent(struct device *dev, size_t size,
>@@ -1585,7 +1595,7 @@ void debug_dma_alloc_pages(struct device *dev, struct page *page,
> 	entry->dev_addr = dma_addr;
> 	entry->direction = direction;
>
>-	add_dma_entry(entry, 0);
>+	add_dma_entry(entry);
> }
>
> void debug_dma_free_pages(struct device *dev, struct page *page,
>
>--
>2.53.0
>

Reviewed-by: Samiullah Khawaja