dri-devel.lists.freedesktop.org archive mirror
* [PATCH v3 1/3] drm/amdgpu: Add WARN_ON to the resource clear function
@ 2025-07-08  6:54 Arunpravin Paneer Selvam
  2025-07-08  6:54 ` [PATCH v3 2/3] drm/amdgpu: Reset the clear flag in buddy during resume Arunpravin Paneer Selvam
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Arunpravin Paneer Selvam @ 2025-07-08  6:54 UTC (permalink / raw)
  To: dri-devel, amd-gfx, christian.koenig, matthew.auld, matthew.brost
  Cc: alexander.deucher, Arunpravin Paneer Selvam, stable

Set the dirty bit when the memory resource is not cleared
during BO release.

v2(Christian):
  - Drop the cleared flag set to false.
  - Improve the amdgpu_vram_mgr_set_clear_state() function.

v3:
  - Add back the call to the resource clear flag set function after
    the buffer is wiped during eviction (Christian).
  - Modified the patch subject line.

Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Suggested-by: Christian König <christian.koenig@amd.com>
Cc: stable@vger.kernel.org
Fixes: a68c7eaa7a8f ("drm/amdgpu: Enable clear page functionality")
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h
index b256cbc2bc27..2c88d5fd87da 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.h
@@ -66,7 +66,10 @@ to_amdgpu_vram_mgr_resource(struct ttm_resource *res)
 
 static inline void amdgpu_vram_mgr_set_cleared(struct ttm_resource *res)
 {
-	to_amdgpu_vram_mgr_resource(res)->flags |= DRM_BUDDY_CLEARED;
+	struct amdgpu_vram_mgr_resource *ares = to_amdgpu_vram_mgr_resource(res);
+
+	WARN_ON(ares->flags & DRM_BUDDY_CLEARED);
+	ares->flags |= DRM_BUDDY_CLEARED;
 }
 
 #endif
-- 
2.43.0


* [PATCH v3 2/3] drm/amdgpu: Reset the clear flag in buddy during resume
  2025-07-08  6:54 [PATCH v3 1/3] drm/amdgpu: Add WARN_ON to the resource clear function Arunpravin Paneer Selvam
@ 2025-07-08  6:54 ` Arunpravin Paneer Selvam
  2025-07-08  7:23   ` Christian König
  2025-07-08  9:00   ` Matthew Auld
  2025-07-08  6:54 ` [PATCH v3 3/3] drm/buddy: Add a new unit test case for buffer clearance issue Arunpravin Paneer Selvam
  2025-07-08  7:20 ` [PATCH v3 1/3] drm/amdgpu: Add WARN_ON to the resource clear function Christian König
  2 siblings, 2 replies; 9+ messages in thread
From: Arunpravin Paneer Selvam @ 2025-07-08  6:54 UTC (permalink / raw)
  To: dri-devel, amd-gfx, christian.koenig, matthew.auld, matthew.brost
  Cc: alexander.deucher, Arunpravin Paneer Selvam, stable

- Added a handler in the DRM buddy manager to reset the cleared
  flag for the blocks in the freelist.

- This is necessary because, upon resuming, the VRAM becomes
  cluttered with BIOS data, yet the VRAM backend manager
  believes that everything has been cleared; a call-pattern
  sketch for the new helper follows below.
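
A minimal call-pattern sketch for the new drm_buddy_reset_clear()
helper (the example_vram_mgr type and example_vram_mgr_mark_all_dirty()
name are illustrative only and not part of this series; they assume
<drm/drm_buddy.h> and <linux/mutex.h>, and the concrete amdgpu wiring
is in the diff further down):

	/* Hypothetical driver-side wrapper around the buddy allocator. */
	struct example_vram_mgr {
		struct drm_buddy mm;	/* buddy allocator covering VRAM */
		struct mutex lock;	/* protects mm */
	};

	/*
	 * On resume nothing in VRAM can be assumed to still be zeroed, so
	 * flip every free buddy block back to the dirty state.
	 */
	static void example_vram_mgr_mark_all_dirty(struct example_vram_mgr *mgr)
	{
		mutex_lock(&mgr->lock);
		drm_buddy_reset_clear(&mgr->mm, false);
		mutex_unlock(&mgr->lock);
	}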

v2:
  - Add lock before accessing drm_buddy_clear_reset_blocks(). (Matthew Auld)
  - Force merge the two dirty blocks. (Matthew Auld)
  - Add a new unit test case for this issue. (Matthew Auld)
  - Having this function be able to flip the state either way would be
    good. (Matthew Brost)

v3(Matthew Auld):
  - Do the merge step first to avoid the use of an extra reset flag.

Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Suggested-by: Christian König <christian.koenig@amd.com>
Cc: stable@vger.kernel.org
Fixes: a68c7eaa7a8f ("drm/amdgpu: Enable clear page functionality")
Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3812
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c   |  2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h      |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 17 ++++++++
 drivers/gpu/drm/drm_buddy.c                  | 43 ++++++++++++++++++++
 include/drm/drm_buddy.h                      |  2 +
 5 files changed, 65 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index a59f194e3360..b89e46f29b51 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -5193,6 +5193,8 @@ int amdgpu_device_resume(struct drm_device *dev, bool notify_clients)
 		dev->dev->power.disable_depth--;
 #endif
 	}
+
+	amdgpu_vram_mgr_clear_reset_blocks(adev);
 	adev->in_suspend = false;
 
 	if (amdgpu_acpi_smart_shift_update(dev, AMDGPU_SS_DEV_D0))
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 208b7d1d8a27..450e4bf093b7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -154,6 +154,7 @@ int amdgpu_vram_mgr_reserve_range(struct amdgpu_vram_mgr *mgr,
 				  uint64_t start, uint64_t size);
 int amdgpu_vram_mgr_query_page_status(struct amdgpu_vram_mgr *mgr,
 				      uint64_t start);
+void amdgpu_vram_mgr_clear_reset_blocks(struct amdgpu_device *adev);
 
 bool amdgpu_res_cpu_visible(struct amdgpu_device *adev,
 			    struct ttm_resource *res);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
index abdc52b0895a..07c936e90d8e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
@@ -782,6 +782,23 @@ uint64_t amdgpu_vram_mgr_vis_usage(struct amdgpu_vram_mgr *mgr)
 	return atomic64_read(&mgr->vis_usage);
 }
 
+/**
+ * amdgpu_vram_mgr_clear_reset_blocks - reset clear blocks
+ *
+ * @adev: amdgpu device pointer
+ *
+ * Reset the cleared drm buddy blocks.
+ */
+void amdgpu_vram_mgr_clear_reset_blocks(struct amdgpu_device *adev)
+{
+	struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr;
+	struct drm_buddy *mm = &mgr->mm;
+
+	mutex_lock(&mgr->lock);
+	drm_buddy_reset_clear(mm, false);
+	mutex_unlock(&mgr->lock);
+}
+
 /**
  * amdgpu_vram_mgr_intersects - test each drm buddy block for intersection
  *
diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
index a1e652b7631d..a94061f373de 100644
--- a/drivers/gpu/drm/drm_buddy.c
+++ b/drivers/gpu/drm/drm_buddy.c
@@ -405,6 +405,49 @@ drm_get_buddy(struct drm_buddy_block *block)
 }
 EXPORT_SYMBOL(drm_get_buddy);
 
+/**
+ * drm_buddy_reset_clear - reset blocks clear state
+ *
+ * @mm: DRM buddy manager
+ * @is_clear: blocks clear state
+ *
+ * Reset the clear state based on @is_clear value for each block
+ * in the freelist.
+ */
+void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
+{
+	u64 root_size, size, start;
+	unsigned int order;
+	int i;
+
+	size = mm->size;
+	for (i = 0; i < mm->n_roots; ++i) {
+		order = ilog2(size) - ilog2(mm->chunk_size);
+		start = drm_buddy_block_offset(mm->roots[i]);
+		__force_merge(mm, start, start + size, order);
+
+		root_size = mm->chunk_size << order;
+		size -= root_size;
+	}
+
+	for (i = 0; i <= mm->max_order; ++i) {
+		struct drm_buddy_block *block;
+
+		list_for_each_entry_reverse(block, &mm->free_list[i], link) {
+			if (is_clear != drm_buddy_block_is_clear(block)) {
+				if (is_clear) {
+					mark_cleared(block);
+					mm->clear_avail += drm_buddy_block_size(mm, block);
+				} else {
+					clear_reset(block);
+					mm->clear_avail -= drm_buddy_block_size(mm, block);
+				}
+			}
+		}
+	}
+}
+EXPORT_SYMBOL(drm_buddy_reset_clear);
+
 /**
  * drm_buddy_free_block - free a block
  *
diff --git a/include/drm/drm_buddy.h b/include/drm/drm_buddy.h
index 9689a7c5dd36..513837632b7d 100644
--- a/include/drm/drm_buddy.h
+++ b/include/drm/drm_buddy.h
@@ -160,6 +160,8 @@ int drm_buddy_block_trim(struct drm_buddy *mm,
 			 u64 new_size,
 			 struct list_head *blocks);
 
+void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear);
+
 void drm_buddy_free_block(struct drm_buddy *mm, struct drm_buddy_block *block);
 
 void drm_buddy_free_list(struct drm_buddy *mm,
-- 
2.43.0


* [PATCH v3 3/3] drm/buddy: Add a new unit test case for buffer clearance issue
  2025-07-08  6:54 [PATCH v3 1/3] drm/amdgpu: Add WARN_ON to the resource clear function Arunpravin Paneer Selvam
  2025-07-08  6:54 ` [PATCH v3 2/3] drm/amdgpu: Reset the clear flag in buddy during resume Arunpravin Paneer Selvam
@ 2025-07-08  6:54 ` Arunpravin Paneer Selvam
  2025-07-08  7:20 ` [PATCH v3 1/3] drm/amdgpu: Add WARN_ON to the resource clear function Christian König
  2 siblings, 0 replies; 9+ messages in thread
From: Arunpravin Paneer Selvam @ 2025-07-08  6:54 UTC (permalink / raw)
  To: dri-devel, amd-gfx, christian.koenig, matthew.auld, matthew.brost
  Cc: alexander.deucher, Arunpravin Paneer Selvam

Add a new unit test case for the buffer clearance issue seen during
resume.

Using a non-power-of-two mm size, allocate alternating 4KiB blocks on
even iterations and free them as cleared. When the blocks clear-reset
function is called, all blocks should be marked as dirty and the split
blocks should be merged back to their original size.
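
As an illustration (numbers purely hypothetical): with mm_size = 12MiB
(non-power-of-two, so the buddy has 8MiB + 4MiB roots) and ps = 4KiB,
the even pass allocates and then frees 1536 of the 3072 pages as
cleared, leaving the freelist a mix of clear and dirty blocks;
drm_buddy_reset_clear(&mm, false) first force-merges each root back to
its full order and then drops the cleared bit, so mm.clear_avail must
read 0. The odd pass mirrors this with drm_buddy_reset_clear(&mm, true),
after which mm.clear_avail must equal mm_size.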

Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
---
 drivers/gpu/drm/tests/drm_buddy_test.c | 41 ++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/drivers/gpu/drm/tests/drm_buddy_test.c b/drivers/gpu/drm/tests/drm_buddy_test.c
index 7a0e523651f0..26f8be8ceecd 100644
--- a/drivers/gpu/drm/tests/drm_buddy_test.c
+++ b/drivers/gpu/drm/tests/drm_buddy_test.c
@@ -408,6 +408,47 @@ static void drm_test_buddy_alloc_clear(struct kunit *test)
 				"buddy_alloc hit an error size=%lu\n", ps);
 	drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
 	drm_buddy_fini(&mm);
+
+	/*
+	 * Using a non-power-of-two mm size, allocate alternating blocks of 4KiB in an
+	 * even sequence and free them as cleared. All blocks should be marked as
+	 * dirty and the split blocks should be merged back to their original
+	 * size when the blocks clear reset function is called.
+	 */
+	KUNIT_EXPECT_FALSE(test, drm_buddy_init(&mm, mm_size, ps));
+	KUNIT_EXPECT_EQ(test, mm.max_order, max_order);
+
+	i = 0;
+	n_pages = mm_size / ps;
+	do {
+		if (i % 2 == 0)
+			KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+									    ps, ps, &allocated, 0),
+						"buddy_alloc hit an error size=%lu\n", ps);
+	} while (++i < n_pages);
+
+	drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
+	drm_buddy_reset_clear(&mm, false);
+	KUNIT_EXPECT_EQ(test, mm.clear_avail, 0);
+
+	/*
+	 * Using a non-power-of-two mm size, allocate alternating blocks of 4KiB in an
+	 * odd sequence and free them as cleared. All blocks should be marked as
+	 * cleared and the split blocks should be merged back to their original
+	 * size when the blocks clear reset function is called.
+	 */
+	i = 0;
+	do {
+		if (i % 2 != 0)
+			KUNIT_ASSERT_FALSE_MSG(test, drm_buddy_alloc_blocks(&mm, 0, mm_size,
+									    ps, ps, &allocated, 0),
+						"buddy_alloc hit an error size=%lu\n", ps);
+	} while (++i < n_pages);
+
+	drm_buddy_free_list(&mm, &allocated, DRM_BUDDY_CLEARED);
+	drm_buddy_reset_clear(&mm, true);
+	KUNIT_EXPECT_EQ(test, mm.clear_avail, mm_size);
+	drm_buddy_fini(&mm);
 }
 
 static void drm_test_buddy_alloc_contiguous(struct kunit *test)
-- 
2.43.0


* Re: [PATCH v3 1/3] drm/amdgpu: Add WARN_ON to the resource clear function
  2025-07-08  6:54 [PATCH v3 1/3] drm/amdgpu: Add WARN_ON to the resource clear function Arunpravin Paneer Selvam
  2025-07-08  6:54 ` [PATCH v3 2/3] drm/amdgpu: Reset the clear flag in buddy during resume Arunpravin Paneer Selvam
  2025-07-08  6:54 ` [PATCH v3 3/3] drm/buddy: Add a new unit test case for buffer clearance issue Arunpravin Paneer Selvam
@ 2025-07-08  7:20 ` Christian König
  2 siblings, 0 replies; 9+ messages in thread
From: Christian König @ 2025-07-08  7:20 UTC (permalink / raw)
  To: Arunpravin Paneer Selvam, dri-devel, amd-gfx, matthew.auld,
	matthew.brost
  Cc: alexander.deucher, stable



On 08.07.25 08:54, Arunpravin Paneer Selvam wrote:
> Set the dirty bit when the memory resource is not cleared
> during BO release.
> 
> v2(Christian):
>   - Drop the cleared flag set to false.
>   - Improve the amdgpu_vram_mgr_set_clear_state() function.
> 
> v3:
>   - Add back the resource clear flag set function call after
>     being wiped during eviction (Christian).
>   - Modified the patch subject name.
> 
> Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
> Suggested-by: Christian König <christian.koenig@amd.com>

> Cc: stable@vger.kernel.org
> Fixes: a68c7eaa7a8f ("drm/amdgpu: Enable clear page functionality")

Those two lines should probably be dropped since this here is only adding a warning now.

With that done Reviewed-by: Christian König <christian.koenig@amd.com>

Regards,
Christian.


* Re: [PATCH v3 2/3] drm/amdgpu: Reset the clear flag in buddy during resume
  2025-07-08  6:54 ` [PATCH v3 2/3] drm/amdgpu: Reset the clear flag in buddy during resume Arunpravin Paneer Selvam
@ 2025-07-08  7:23   ` Christian König
  2025-07-08  9:00   ` Matthew Auld
  1 sibling, 0 replies; 9+ messages in thread
From: Christian König @ 2025-07-08  7:23 UTC (permalink / raw)
  To: Arunpravin Paneer Selvam, dri-devel, amd-gfx, matthew.auld,
	matthew.brost
  Cc: alexander.deucher, stable

On 08.07.25 08:54, Arunpravin Paneer Selvam wrote:
> - Added a handler in DRM buddy manager to reset the cleared
>   flag for the blocks in the freelist.
> 
> - This is necessary because, upon resuming, the VRAM becomes
>   cluttered with BIOS data, yet the VRAM backend manager
>   believes that everything has been cleared.
> 
> v2:
>   - Add lock before accessing drm_buddy_clear_reset_blocks()(Matthew Auld)
>   - Force merge the two dirty blocks.(Matthew Auld)
>   - Add a new unit test case for this issue.(Matthew Auld)
>   - Having this function being able to flip the state either way would be
>     good. (Matthew Brost)
> 
> v3(Matthew Auld):
>   - Do merge step first to avoid the use of extra reset flag.
> 
> Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
> Suggested-by: Christian König <christian.koenig@amd.com>
> Cc: stable@vger.kernel.org
> Fixes: a68c7eaa7a8f ("drm/amdgpu: Enable clear page functionality")

Acked-by: Christian König <christian.koenig@amd.com>

> Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3812

I'm not 100% sure if that really fully closes this issue. Keep an eye open if the warning we added in patch #1 ever triggers.

Regards,
Christian.


* Re: [PATCH v3 2/3] drm/amdgpu: Reset the clear flag in buddy during resume
  2025-07-08  6:54 ` [PATCH v3 2/3] drm/amdgpu: Reset the clear flag in buddy during resume Arunpravin Paneer Selvam
  2025-07-08  7:23   ` Christian König
@ 2025-07-08  9:00   ` Matthew Auld
  2025-07-10  7:14     ` Arunpravin Paneer Selvam
  1 sibling, 1 reply; 9+ messages in thread
From: Matthew Auld @ 2025-07-08  9:00 UTC (permalink / raw)
  To: Arunpravin Paneer Selvam, dri-devel, amd-gfx, christian.koenig,
	matthew.brost
  Cc: alexander.deucher, stable

On 08/07/2025 07:54, Arunpravin Paneer Selvam wrote:
> - Added a handler in DRM buddy manager to reset the cleared
>    flag for the blocks in the freelist.
> 
> - This is necessary because, upon resuming, the VRAM becomes
>    cluttered with BIOS data, yet the VRAM backend manager
>    believes that everything has been cleared.
> 
> v2:
>    - Add lock before accessing drm_buddy_clear_reset_blocks()(Matthew Auld)
>    - Force merge the two dirty blocks.(Matthew Auld)
>    - Add a new unit test case for this issue.(Matthew Auld)
>    - Having this function being able to flip the state either way would be
>      good. (Matthew Brost)
> 
> v3(Matthew Auld):
>    - Do merge step first to avoid the use of extra reset flag.
> 
> Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
> Suggested-by: Christian König <christian.koenig@amd.com>
> Cc: stable@vger.kernel.org
> Fixes: a68c7eaa7a8f ("drm/amdgpu: Enable clear page functionality")
> Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3812

Reviewed-by: Matthew Auld <matthew.auld@intel.com>


* Re: [PATCH v3 2/3] drm/amdgpu: Reset the clear flag in buddy during resume
  2025-07-08  9:00   ` Matthew Auld
@ 2025-07-10  7:14     ` Arunpravin Paneer Selvam
  2025-07-10 14:20       ` Matthew Auld
  0 siblings, 1 reply; 9+ messages in thread
From: Arunpravin Paneer Selvam @ 2025-07-10  7:14 UTC (permalink / raw)
  To: Matthew Auld, dri-devel, amd-gfx, christian.koenig, matthew.brost
  Cc: alexander.deucher, stable


On 7/8/2025 2:30 PM, Matthew Auld wrote:
> On 08/07/2025 07:54, Arunpravin Paneer Selvam wrote:
>> - Added a handler in DRM buddy manager to reset the cleared
>>    flag for the blocks in the freelist.
>>
>> - This is necessary because, upon resuming, the VRAM becomes
>>    cluttered with BIOS data, yet the VRAM backend manager
>>    believes that everything has been cleared.
>>
>> v2:
>>    - Add lock before accessing drm_buddy_clear_reset_blocks()(Matthew 
>> Auld)
>>    - Force merge the two dirty blocks.(Matthew Auld)
>>    - Add a new unit test case for this issue.(Matthew Auld)
>>    - Having this function being able to flip the state either way 
>> would be
>>      good. (Matthew Brost)
>>
>> v3(Matthew Auld):
>>    - Do merge step first to avoid the use of extra reset flag.
>>
>> Signed-off-by: Arunpravin Paneer Selvam 
>> <Arunpravin.PaneerSelvam@amd.com>
>> Suggested-by: Christian König <christian.koenig@amd.com>
>> Cc: stable@vger.kernel.org
>> Fixes: a68c7eaa7a8f ("drm/amdgpu: Enable clear page functionality")
>> Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3812
>
> Reviewed-by: Matthew Auld <matthew.auld@intel.com>

Is this RB also for the unit test case (patch 3)?

Thanks,

Arun.


* Re: [PATCH v3 2/3] drm/amdgpu: Reset the clear flag in buddy during resume
  2025-07-10  7:14     ` Arunpravin Paneer Selvam
@ 2025-07-10 14:20       ` Matthew Auld
  2025-07-10 14:54         ` Arunpravin Paneer Selvam
  0 siblings, 1 reply; 9+ messages in thread
From: Matthew Auld @ 2025-07-10 14:20 UTC (permalink / raw)
  To: Arunpravin Paneer Selvam, dri-devel, amd-gfx, christian.koenig,
	matthew.brost
  Cc: alexander.deucher, stable

On 10/07/2025 08:14, Arunpravin Paneer Selvam wrote:
> 
> On 7/8/2025 2:30 PM, Matthew Auld wrote:
>> On 08/07/2025 07:54, Arunpravin Paneer Selvam wrote:
>>> - Added a handler in DRM buddy manager to reset the cleared
>>>    flag for the blocks in the freelist.
>>>
>>> - This is necessary because, upon resuming, the VRAM becomes
>>>    cluttered with BIOS data, yet the VRAM backend manager
>>>    believes that everything has been cleared.
>>>
>>> v2:
>>>    - Add lock before accessing drm_buddy_clear_reset_blocks()(Matthew 
>>> Auld)
>>>    - Force merge the two dirty blocks.(Matthew Auld)
>>>    - Add a new unit test case for this issue.(Matthew Auld)
>>>    - Having this function being able to flip the state either way 
>>> would be
>>>      good. (Matthew Brost)
>>>
>>> v3(Matthew Auld):
>>>    - Do merge step first to avoid the use of extra reset flag.
>>>
>>> Signed-off-by: Arunpravin Paneer Selvam 
>>> <Arunpravin.PaneerSelvam@amd.com>
>>> Suggested-by: Christian König <christian.koenig@amd.com>
>>> Cc: stable@vger.kernel.org
>>> Fixes: a68c7eaa7a8f ("drm/amdgpu: Enable clear page functionality")
>>> Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3812
>>
>> Reviewed-by: Matthew Auld <matthew.auld@intel.com>
> 
> Is this RB also for the unit test case (patch 3)?

Feel free to apply my r-b there also.


* Re: [PATCH v3 2/3] drm/amdgpu: Reset the clear flag in buddy during resume
  2025-07-10 14:20       ` Matthew Auld
@ 2025-07-10 14:54         ` Arunpravin Paneer Selvam
  0 siblings, 0 replies; 9+ messages in thread
From: Arunpravin Paneer Selvam @ 2025-07-10 14:54 UTC (permalink / raw)
  To: Matthew Auld, dri-devel, amd-gfx, christian.koenig, matthew.brost
  Cc: alexander.deucher, stable


On 7/10/2025 7:50 PM, Matthew Auld wrote:
> On 10/07/2025 08:14, Arunpravin Paneer Selvam wrote:
>>
>> On 7/8/2025 2:30 PM, Matthew Auld wrote:
>>> On 08/07/2025 07:54, Arunpravin Paneer Selvam wrote:
>>>> - Added a handler in DRM buddy manager to reset the cleared
>>>>    flag for the blocks in the freelist.
>>>>
>>>> - This is necessary because, upon resuming, the VRAM becomes
>>>>    cluttered with BIOS data, yet the VRAM backend manager
>>>>    believes that everything has been cleared.
>>>>
>>>> v2:
>>>>    - Add lock before accessing 
>>>> drm_buddy_clear_reset_blocks()(Matthew Auld)
>>>>    - Force merge the two dirty blocks.(Matthew Auld)
>>>>    - Add a new unit test case for this issue.(Matthew Auld)
>>>>    - Having this function being able to flip the state either way 
>>>> would be
>>>>      good. (Matthew Brost)
>>>>
>>>> v3(Matthew Auld):
>>>>    - Do merge step first to avoid the use of extra reset flag.
>>>>
>>>> Signed-off-by: Arunpravin Paneer Selvam 
>>>> <Arunpravin.PaneerSelvam@amd.com>
>>>> Suggested-by: Christian König <christian.koenig@amd.com>
>>>> Cc: stable@vger.kernel.org
>>>> Fixes: a68c7eaa7a8f ("drm/amdgpu: Enable clear page functionality")
>>>> Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3812
>>>
>>> Reviewed-by: Matthew Auld <matthew.auld@intel.com>
>>
>> Is this RB also for the unit test case (patch 3).
>
> Feel free to apply my r-b there also.
Thanks!
