public inbox for linux-kernel@vger.kernel.org
* [PATCH] iommu: Use dev_dbg for group handling code
@ 2026-04-07 13:06 Lin Ruifeng
  2026-04-07 15:31 ` Pranjal Shrivastava
  0 siblings, 1 reply; 2+ messages in thread
From: Lin Ruifeng @ 2026-04-07 13:06 UTC (permalink / raw)
  To: joro, will, robin.murphy, jroedel; +Cc: iommu, linux-kernel

When devices are frequently registered and unregistered, a large number of
iommu group add/remove messages is generated, which can flood the dmesg
buffer and push out other logs. Switch the iommu group handling messages to
dev_dbg so they are only emitted when needed.

Signed-off-by: Lin Ruifeng <linruifeng4@huawei.com>
---
 drivers/iommu/iommu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 50718ab810a4..18ceaba6cf0a 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1283,7 +1283,7 @@ static struct group_device *iommu_group_alloc_device(struct iommu_group *group,
 
 	trace_add_device_to_group(group->id, dev);
 
-	dev_info(dev, "Adding to iommu group %d\n", group->id);
+	dev_dbg(dev, "Adding to iommu group %d\n", group->id);
 
 	return device;
 
@@ -1337,7 +1337,7 @@ void iommu_group_remove_device(struct device *dev)
 	if (!group)
 		return;
 
-	dev_info(dev, "Removing from iommu group %d\n", group->id);
+	dev_dbg(dev, "Removing from iommu group %d\n", group->id);
 
 	__iommu_group_remove_device(dev);
 }
-- 
2.43.0

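With the patch applied the two messages become debug-class; assuming the
kernel is built with CONFIG_DYNAMIC_DEBUG=y and debugfs is mounted at
/sys/kernel/debug, they could be re-enabled at runtime through the dynamic
debug control file. A minimal sketch (run as root; paths are the usual
defaults, not something stated in this thread):

```shell
# Enable all pr_debug/dev_dbg callsites in drivers/iommu/iommu.c
echo 'file drivers/iommu/iommu.c +p' > /sys/kernel/debug/dynamic_debug/control

# Or match only these messages by their format string
echo 'format "iommu group" +p' > /sys/kernel/debug/dynamic_debug/control

# List what is currently enabled for that file
grep 'drivers/iommu/iommu.c' /sys/kernel/debug/dynamic_debug/control

# Turn the callsites back off
echo 'file drivers/iommu/iommu.c -p' > /sys/kernel/debug/dynamic_debug/control
```

Without CONFIG_DYNAMIC_DEBUG, dev_dbg output depends on DEBUG being defined
at compile time, so the messages would effectively disappear from production
kernels.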


* Re: [PATCH] iommu: Use dev_dbg for group handling code
  2026-04-07 13:06 [PATCH] iommu: Use dev_dbg for group handling code Lin Ruifeng
@ 2026-04-07 15:31 ` Pranjal Shrivastava
  0 siblings, 0 replies; 2+ messages in thread
From: Pranjal Shrivastava @ 2026-04-07 15:31 UTC (permalink / raw)
  To: Lin Ruifeng; +Cc: joro, will, robin.murphy, jroedel, iommu, linux-kernel

On Tue, Apr 07, 2026 at 09:06:38PM +0800, Lin Ruifeng wrote:
> When devices are frequently registered and unregistered, a large number of
> iommu group add/remove messages is generated, which can flood the dmesg
> buffer and push out other logs. Switch the iommu group handling messages to
> dev_dbg so they are only emitted when needed.
> 
> Signed-off-by: Lin Ruifeng <linruifeng4@huawei.com>
> ---
>  drivers/iommu/iommu.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 50718ab810a4..18ceaba6cf0a 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -1283,7 +1283,7 @@ static struct group_device *iommu_group_alloc_device(struct iommu_group *group,
>  
>  	trace_add_device_to_group(group->id, dev);
>  
> -	dev_info(dev, "Adding to iommu group %d\n", group->id);
> +	dev_dbg(dev, "Adding to iommu group %d\n", group->id);
>  
>  	return device;
>  
> @@ -1337,7 +1337,7 @@ void iommu_group_remove_device(struct device *dev)
>  	if (!group)
>  		return;
>  
> -	dev_info(dev, "Removing from iommu group %d\n", group->id);
> +	dev_dbg(dev, "Removing from iommu group %d\n", group->id);
>  
>  	__iommu_group_remove_device(dev);
>  }

I believe this was discussed roughly a year ago as well:

https://lore.kernel.org/linux-iommu/84cb9155-4793-45f9-bb67-6926e103dc84@arm.com/

And 5 years ago as well:

https://lore.kernel.org/linux-iommu/20200302154426.GC6540@8bytes.org/

The general consensus is that the maintainers and the community would
like to keep this log in place, as it helps us debug issues based solely
on dmesg logs shared over email.

I understand that losing dmesg logs in such situations could be
frustrating, but I guess you could try increasing the log buffer size.

Thanks,
Praan
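For reference, the buffer-size suggestion above is usually done with the
`log_buf_len` boot parameter or the build-time CONFIG_LOG_BUF_SHIFT option.
A hedged sketch, with illustrative values (the specific sizes are not from
this thread):

```shell
# Boot-time: append to the kernel command line (e.g. in the bootloader
# config); the value must be a power of two.
#   log_buf_len=4M

# Build-time: the static ring buffer is 2^CONFIG_LOG_BUF_SHIFT bytes,
# e.g. 17 -> 128 KiB, 21 -> 2 MiB.
#   CONFIG_LOG_BUF_SHIFT=21

# On a running system, confirm the parameter took effect:
cat /proc/cmdline              # should show log_buf_len=... if set
dmesg | grep -i log_buf        # the kernel logs the new size at boot
```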

