* [PATCH 0/2] Let userspace know about swapped out panthor GEM objects
@ 2026-04-20 15:46 Nicolas Frattaroli
2026-04-20 15:46 ` [PATCH 1/2] drm/fdinfo: Add "evicted" memory accounting Nicolas Frattaroli
2026-04-20 15:47 ` [PATCH 2/2] drm/panthor: Implement evicted status for GEM objects Nicolas Frattaroli
0 siblings, 2 replies; 5+ messages in thread
From: Nicolas Frattaroli @ 2026-04-20 15:46 UTC (permalink / raw)
To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Boris Brezillon, Steven Price, Liviu Dudau
Cc: dri-devel, linux-kernel, kernel, Nicolas Frattaroli
Panthor has recently gained a GEM shrinker. It allows evicting memory
that backs unused GEM objects to swap.
In this series, both fdinfo and Panthor's gems debugfs file are extended
so that users can gather information on evicted pages through either
interface.
Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
---
Nicolas Frattaroli (2):
drm/fdinfo: Add "evicted" memory accounting
drm/panthor: Implement evicted status for GEM objects
drivers/gpu/drm/drm_file.c | 8 ++++++++
drivers/gpu/drm/panthor/panthor_gem.c | 10 ++++++++++
drivers/gpu/drm/panthor/panthor_gem.h | 11 +++++++++++
include/drm/drm_file.h | 2 ++
include/drm/drm_gem.h | 2 ++
5 files changed, 33 insertions(+)
---
base-commit: 3f9357c30a44734d45e3093c521d52b2aefb09f5
change-id: 20260420-panthor-bo-reclaim-observability-970679c9533c
Best regards,
--
Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
* [PATCH 1/2] drm/fdinfo: Add "evicted" memory accounting
  2026-04-20 15:46 [PATCH 0/2] Let userspace know about swapped out panthor GEM objects Nicolas Frattaroli
@ 2026-04-20 15:46 ` Nicolas Frattaroli
  2026-04-20 15:47 ` [PATCH 2/2] drm/panthor: Implement evicted status for GEM objects Nicolas Frattaroli
  1 sibling, 0 replies; 5+ messages in thread
From: Nicolas Frattaroli @ 2026-04-20 15:46 UTC (permalink / raw)
To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Boris Brezillon, Steven Price, Liviu Dudau
Cc: dri-devel, linux-kernel, kernel, Nicolas Frattaroli

Currently, there's no way to know for certain how much GPU memory has
been swapped out. The difference between total and resident memory would
include newly allocated pages, which are not resident but also aren't
swapped out.

Add a new drm_gem_object_status flag so drivers can signal when an
object has been evicted to swap, and add a new "evicted" counter to
drm_memory_stats. Due to how the supported_flags bitmask is determined,
the "evicted" count won't be printed to fdinfo if there are no
swapped-out pages.
Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
---
 drivers/gpu/drm/drm_file.c | 8 ++++++++
 include/drm/drm_file.h     | 2 ++
 include/drm/drm_gem.h      | 2 ++
 3 files changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index ec820686b302..5078172976c0 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -868,6 +868,7 @@ int drm_memory_stats_is_zero(const struct drm_memory_stats *stats)
 		stats->private == 0 &&
 		stats->resident == 0 &&
 		stats->purgeable == 0 &&
+		stats->evicted == 0 &&
 		stats->active == 0);
 }
 EXPORT_SYMBOL(drm_memory_stats_is_zero);
@@ -901,6 +902,10 @@ void drm_print_memory_stats(struct drm_printer *p,
 	if (supported_status & DRM_GEM_OBJECT_PURGEABLE)
 		drm_fdinfo_print_size(p, prefix, "purgeable", region,
 				      stats->purgeable);
+
+	if (supported_status & DRM_GEM_OBJECT_EVICTED)
+		drm_fdinfo_print_size(p, prefix, "evicted", region,
+				      stats->evicted);
 }
 EXPORT_SYMBOL(drm_print_memory_stats);
@@ -954,6 +959,9 @@ void drm_show_memory_stats(struct drm_printer *p, struct drm_file *file)
 
 		if (s & DRM_GEM_OBJECT_PURGEABLE)
 			status.purgeable += add_size;
+
+		if (s & DRM_GEM_OBJECT_EVICTED)
+			status.evicted += add_size;
 	}
 	spin_unlock(&file->table_lock);
diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
index 6ee70ad65e1f..213dfecac342 100644
--- a/include/drm/drm_file.h
+++ b/include/drm/drm_file.h
@@ -500,6 +500,7 @@ void drm_send_event_timestamp_locked(struct drm_device *dev,
  * @resident: Total size of GEM objects backing pages
  * @purgeable: Total size of GEM objects that can be purged (resident and not active)
  * @active: Total size of GEM objects active on one or more engines
+ * @evicted: Total size of GEM objects that have been evicted to swap
  *
  * Used by drm_print_memory_stats()
  */
@@ -509,6 +510,7 @@ struct drm_memory_stats {
 	u64 resident;
 	u64 purgeable;
 	u64 active;
+	u64 evicted;
 };
 
 enum drm_gem_object_status;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 86f5846154f7..b42ea2e582cf 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -53,6 +53,7 @@ struct drm_gem_object;
  * @DRM_GEM_OBJECT_RESIDENT: object is resident in memory (ie. not unpinned)
  * @DRM_GEM_OBJECT_PURGEABLE: object marked as purgeable by userspace
  * @DRM_GEM_OBJECT_ACTIVE: object is currently used by an active submission
+ * @DRM_GEM_OBJECT_EVICTED: object is evicted to swap
  *
  * Bitmask of status used for fdinfo memory stats, see &drm_gem_object_funcs.status
  * and drm_show_fdinfo(). Note that an object can report DRM_GEM_OBJECT_PURGEABLE
@@ -67,6 +68,7 @@ enum drm_gem_object_status {
 	DRM_GEM_OBJECT_RESIDENT = BIT(0),
 	DRM_GEM_OBJECT_PURGEABLE = BIT(1),
 	DRM_GEM_OBJECT_ACTIVE = BIT(2),
+	DRM_GEM_OBJECT_EVICTED = BIT(3),
 };
 
 /**
-- 
2.53.0
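[Editorial sketch: the aggregation in drm_show_memory_stats() above — summing sizes per status bit and only printing counters whose bit was ever observed — can be modelled in userspace C. Everything here (struct names, the accumulate() helper) is a hypothetical stand-in, not the kernel code.]

```c
#include <assert.h>
#include <stdint.h>

/* Status bits mirroring enum drm_gem_object_status. */
#define OBJ_RESIDENT  (1u << 0)
#define OBJ_PURGEABLE (1u << 1)
#define OBJ_ACTIVE    (1u << 2)
#define OBJ_EVICTED   (1u << 3)

struct obj { uint64_t size; unsigned status; };

struct stats { uint64_t resident, purgeable, active, evicted; };

/* Model of the aggregation loop: sum sizes per status bit, and OR every
 * observed status into "supported" so that a counter whose bit never
 * appeared (e.g. "evicted" with no swapped-out pages) is not printed. */
static unsigned accumulate(const struct obj *objs, int n, struct stats *st)
{
	unsigned supported = 0;

	for (int i = 0; i < n; i++) {
		unsigned s = objs[i].status;

		supported |= s;
		if (s & OBJ_RESIDENT)
			st->resident += objs[i].size;
		if (s & OBJ_PURGEABLE)
			st->purgeable += objs[i].size;
		if (s & OBJ_ACTIVE)
			st->active += objs[i].size;
		if (s & OBJ_EVICTED)
			st->evicted += objs[i].size;
	}
	return supported;
}
```

With one resident and one evicted object, the returned mask contains OBJ_EVICTED and the "evicted" line would be emitted; with no evicted object the bit stays clear and the line is skipped, matching the commit message's note about supported_flags.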
* [PATCH 2/2] drm/panthor: Implement evicted status for GEM objects
  2026-04-20 15:46 [PATCH 0/2] Let userspace know about swapped out panthor GEM objects Nicolas Frattaroli
  2026-04-20 15:46 ` [PATCH 1/2] drm/fdinfo: Add "evicted" memory accounting Nicolas Frattaroli
@ 2026-04-20 15:47 ` Nicolas Frattaroli
  2026-04-20 16:17   ` Boris Brezillon
  1 sibling, 1 reply; 5+ messages in thread
From: Nicolas Frattaroli @ 2026-04-20 15:47 UTC (permalink / raw)
To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Boris Brezillon, Steven Price, Liviu Dudau
Cc: dri-devel, linux-kernel, kernel, Nicolas Frattaroli

For fdinfo to be able to fill its evicted counter with data, panthor
needs to keep track of whether a GEM object has ever been reclaimed.
Just checking whether the pages are resident isn't enough, as newly
allocated objects also won't be resident.

Do this with a new atomic_t member on panthor_gem_object. It's increased
when an object gets evicted by the shrinker. While it's allowed to wrap
around to below zero and assume a value less than a previous observed
value, the reclaim counter will never return to 0 for any particular
object once it's been reclaimed at least once.

Use this new member to then set the appropriate DRM_GEM_OBJECT_EVICTED
status flag for fdinfo, and use it in the gems debugfs. It's possible to
distinguish evicted non-resident pages from newly allocated non-resident
pages by checking whether reclaimed_count is != 0.
Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_gem.c | 10 ++++++++++
 drivers/gpu/drm/panthor/panthor_gem.h | 11 +++++++++++
 2 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
index 69cef05b6ef7..4b761b39565d 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.c
+++ b/drivers/gpu/drm/panthor/panthor_gem.c
@@ -687,6 +687,10 @@ static void panthor_gem_evict_locked(struct panthor_gem_object *bo)
 	if (drm_WARN_ON_ONCE(bo->base.dev, !bo->backing.pages))
 		return;
 
+	/* Don't ever wrap around as far as 0, jump from INT_MIN to 1 */
+	if (!atomic_inc_unless_negative(&bo->reclaimed_count))
+		atomic_set(&bo->reclaimed_count, 1);
+
 	panthor_gem_dev_map_cleanup_locked(bo);
 	panthor_gem_backing_cleanup_locked(bo);
 	panthor_gem_update_reclaim_state_locked(bo, NULL);
@@ -788,6 +792,8 @@ static enum drm_gem_object_status panthor_gem_status(struct drm_gem_object *obj)
 
 	if (drm_gem_is_imported(&bo->base) || bo->backing.pages)
 		res |= DRM_GEM_OBJECT_RESIDENT;
+	else if (atomic_read(&bo->reclaimed_count))
+		res |= DRM_GEM_OBJECT_EVICTED;
 
 	return res;
 }
@@ -1595,6 +1601,7 @@ static void panthor_gem_debugfs_print_flag_names(struct seq_file *m)
 	static const char * const gem_state_flags_names[] = {
 		[PANTHOR_DEBUGFS_GEM_STATE_IMPORTED_BIT] = "imported",
 		[PANTHOR_DEBUGFS_GEM_STATE_EXPORTED_BIT] = "exported",
+		[PANTHOR_DEBUGFS_GEM_STATE_EVICTED_BIT] = "evicted",
 	};
 
 	static const char * const gem_usage_flags_names[] = {
@@ -1648,6 +1655,9 @@ static void panthor_gem_debugfs_bo_print(struct panthor_gem_object *bo,
 
 	if (drm_gem_is_imported(&bo->base))
 		gem_state_flags |= PANTHOR_DEBUGFS_GEM_STATE_FLAG_IMPORTED;
+	else if (!resident_size && atomic_read(&bo->reclaimed_count))
+		gem_state_flags |= PANTHOR_DEBUGFS_GEM_STATE_FLAG_EVICTED;
+
 	if (bo->base.dma_buf)
 		gem_state_flags |= PANTHOR_DEBUGFS_GEM_STATE_FLAG_EXPORTED;
 
diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
index ae0491d0b121..1ab573f03330 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.h
+++ b/drivers/gpu/drm/panthor/panthor_gem.h
@@ -19,12 +19,16 @@ struct panthor_vm;
 enum panthor_debugfs_gem_state_flags {
 	PANTHOR_DEBUGFS_GEM_STATE_IMPORTED_BIT = 0,
 	PANTHOR_DEBUGFS_GEM_STATE_EXPORTED_BIT = 1,
+	PANTHOR_DEBUGFS_GEM_STATE_EVICTED_BIT = 2,
 
 	/** @PANTHOR_DEBUGFS_GEM_STATE_FLAG_IMPORTED: GEM BO is PRIME imported. */
 	PANTHOR_DEBUGFS_GEM_STATE_FLAG_IMPORTED = BIT(PANTHOR_DEBUGFS_GEM_STATE_IMPORTED_BIT),
 
 	/** @PANTHOR_DEBUGFS_GEM_STATE_FLAG_EXPORTED: GEM BO is PRIME exported. */
 	PANTHOR_DEBUGFS_GEM_STATE_FLAG_EXPORTED = BIT(PANTHOR_DEBUGFS_GEM_STATE_EXPORTED_BIT),
+
+	/** @PANTHOR_DEBUGFS_GEM_STATE_FLAG_EVICTED: GEM BO is evicted to swap. */
+	PANTHOR_DEBUGFS_GEM_STATE_FLAG_EVICTED = BIT(PANTHOR_DEBUGFS_GEM_STATE_EVICTED_BIT),
 };
 
 enum panthor_debugfs_gem_usage_flags {
@@ -172,6 +176,13 @@ struct panthor_gem_object {
 	/** @reclaim_state: Cached reclaim state */
 	enum panthor_gem_reclaim_state reclaim_state;
 
+	/**
+	 * @reclaimed_count: How many times object has been evicted to swap.
+	 * Never returns to 0 once incremented even on wrap-around, but may
+	 * become < 0 and < the previous value if wrap-around occurs.
+	 */
+	atomic_t reclaimed_count;
+
 	/**
 	 * @exclusive_vm_root_gem: Root GEM of the exclusive VM this GEM object
 	 * is attached to.
-- 
2.53.0
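[Editorial sketch: the wrap-around behaviour this patch describes — increment freely, but on wrap-around jump from INT_MIN back to 1 so the counter never reads 0 again — can be modelled as a plain function over int. This is a hypothetical userspace model, not the kernel's atomic_t helpers, and it omits the atomicity.]

```c
#include <assert.h>
#include <limits.h>

/* Model of the eviction bump in panthor_gem_evict_locked(): a
 * non-negative count is incremented (wrapping from INT_MAX to INT_MIN,
 * handled explicitly to avoid signed-overflow UB); a negative count,
 * i.e. one that has wrapped, jumps back to 1, skipping 0. A count of 0
 * therefore always means "never reclaimed". */
static int bump_reclaimed(int count)
{
	if (count >= 0)
		return count == INT_MAX ? INT_MIN : count + 1;
	return 1; /* wrapped: restart at 1, never at 0 */
}
```

A consumer can thus treat any non-zero value as "was evicted at least once", which is exactly what the DRM_GEM_OBJECT_EVICTED check relies on; the cost is that the count is briefly negative around wrap-around, so `<` comparisons between two reads are unreliable.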
* Re: [PATCH 2/2] drm/panthor: Implement evicted status for GEM objects
  2026-04-20 15:47 ` [PATCH 2/2] drm/panthor: Implement evicted status for GEM objects Nicolas Frattaroli
@ 2026-04-20 16:17   ` Boris Brezillon
  2026-04-20 17:46     ` Nicolas Frattaroli
  0 siblings, 1 reply; 5+ messages in thread
From: Boris Brezillon @ 2026-04-20 16:17 UTC (permalink / raw)
To: Nicolas Frattaroli
Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Steven Price, Liviu Dudau, dri-devel, linux-kernel,
	kernel

On Mon, 20 Apr 2026 17:47:00 +0200
Nicolas Frattaroli <nicolas.frattaroli@collabora.com> wrote:

> For fdinfo to be able to fill its evicted counter with data, panthor
> needs to keep track of whether a GEM object has ever been reclaimed.
> Just checking whether the pages are resident isn't enough, as newly
> allocated objects also won't be resident.
> 
> Do this with a new atomic_t member on panthor_gem_object. It's increased
> when an object gets evicted by the shrinker. While it's allowed to wrap
> around to below zero and assume a value less than a previous observed
> value, the reclaim counter will never return to 0 for any particular
> object once it's been reclaimed at least once.
> 
> Use this new member to then set the appropriate DRM_GEM_OBJECT_EVICTED
> status flag for fdinfo, and use it in the gems debugfs. It's possible to
> distinguish evicted non-resident pages from newly allocated non-resident
> pages by checking whether reclaimed_count is != 0.
> 
> Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
> ---
>  drivers/gpu/drm/panthor/panthor_gem.c | 10 ++++++++++
>  drivers/gpu/drm/panthor/panthor_gem.h | 11 +++++++++++
>  2 files changed, 21 insertions(+)
> 
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
> index 69cef05b6ef7..4b761b39565d 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.c
> +++ b/drivers/gpu/drm/panthor/panthor_gem.c
> @@ -687,6 +687,10 @@ static void panthor_gem_evict_locked(struct panthor_gem_object *bo)
>  	if (drm_WARN_ON_ONCE(bo->base.dev, !bo->backing.pages))
>  		return;
>  
> +	/* Don't ever wrap around as far as 0, jump from INT_MIN to 1 */
> +	if (!atomic_inc_unless_negative(&bo->reclaimed_count))
> +		atomic_set(&bo->reclaimed_count, 1);

Can't we just go

	atomic_add_unless(&bo->reclaimed_count, 1, INT_MAX);

here, to handle the INT_MAX saturation?

> +
>  	panthor_gem_dev_map_cleanup_locked(bo);
>  	panthor_gem_backing_cleanup_locked(bo);
>  	panthor_gem_update_reclaim_state_locked(bo, NULL);
> @@ -788,6 +792,8 @@ static enum drm_gem_object_status panthor_gem_status(struct drm_gem_object *obj)
>  
>  	if (drm_gem_is_imported(&bo->base) || bo->backing.pages)
>  		res |= DRM_GEM_OBJECT_RESIDENT;
> +	else if (atomic_read(&bo->reclaimed_count))
> +		res |= DRM_GEM_OBJECT_EVICTED;
>  
>  	return res;
>  }
> @@ -1595,6 +1601,7 @@ static void panthor_gem_debugfs_print_flag_names(struct seq_file *m)
>  	static const char * const gem_state_flags_names[] = {
>  		[PANTHOR_DEBUGFS_GEM_STATE_IMPORTED_BIT] = "imported",
>  		[PANTHOR_DEBUGFS_GEM_STATE_EXPORTED_BIT] = "exported",
> +		[PANTHOR_DEBUGFS_GEM_STATE_EVICTED_BIT] = "evicted",
>  	};
>  
>  	static const char * const gem_usage_flags_names[] = {
> @@ -1648,6 +1655,9 @@ static void panthor_gem_debugfs_bo_print(struct panthor_gem_object *bo,
>  
>  	if (drm_gem_is_imported(&bo->base))
>  		gem_state_flags |= PANTHOR_DEBUGFS_GEM_STATE_FLAG_IMPORTED;
> +	else if (!resident_size && atomic_read(&bo->reclaimed_count))
> +		gem_state_flags |= PANTHOR_DEBUGFS_GEM_STATE_FLAG_EVICTED;

I think it'd be interesting to know the number of times a BO got
evicted.

> +
>  	if (bo->base.dma_buf)
>  		gem_state_flags |= PANTHOR_DEBUGFS_GEM_STATE_FLAG_EXPORTED;
>  
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
> index ae0491d0b121..1ab573f03330 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.h
> +++ b/drivers/gpu/drm/panthor/panthor_gem.h
> @@ -19,12 +19,16 @@ struct panthor_vm;
>  enum panthor_debugfs_gem_state_flags {
>  	PANTHOR_DEBUGFS_GEM_STATE_IMPORTED_BIT = 0,
>  	PANTHOR_DEBUGFS_GEM_STATE_EXPORTED_BIT = 1,
> +	PANTHOR_DEBUGFS_GEM_STATE_EVICTED_BIT = 2,
>  
>  	/** @PANTHOR_DEBUGFS_GEM_STATE_FLAG_IMPORTED: GEM BO is PRIME imported. */
>  	PANTHOR_DEBUGFS_GEM_STATE_FLAG_IMPORTED = BIT(PANTHOR_DEBUGFS_GEM_STATE_IMPORTED_BIT),
>  
>  	/** @PANTHOR_DEBUGFS_GEM_STATE_FLAG_EXPORTED: GEM BO is PRIME exported. */
>  	PANTHOR_DEBUGFS_GEM_STATE_FLAG_EXPORTED = BIT(PANTHOR_DEBUGFS_GEM_STATE_EXPORTED_BIT),
> +
> +	/** @PANTHOR_DEBUGFS_GEM_STATE_FLAG_EVICTED: GEM BO is evicted to swap. */
> +	PANTHOR_DEBUGFS_GEM_STATE_FLAG_EVICTED = BIT(PANTHOR_DEBUGFS_GEM_STATE_EVICTED_BIT),
>  };
>  
>  enum panthor_debugfs_gem_usage_flags {
> @@ -172,6 +176,13 @@ struct panthor_gem_object {
>  	/** @reclaim_state: Cached reclaim state */
>  	enum panthor_gem_reclaim_state reclaim_state;
>  
> +	/**
> +	 * @reclaimed_count: How many times object has been evicted to swap.
> +	 * Never returns to 0 once incremented even on wrap-around, but may
> +	 * become < 0 and < the previous value if wrap-around occurs.

With the saturation I suggested, I'd just add that when INT_MAX is
reached, it will stay there.

> +	 */
> +	atomic_t reclaimed_count;
> +
>  	/**
>  	 * @exclusive_vm_root_gem: Root GEM of the exclusive VM this GEM object
>  	 * is attached to.
> 
* Re: [PATCH 2/2] drm/panthor: Implement evicted status for GEM objects
  2026-04-20 16:17   ` Boris Brezillon
@ 2026-04-20 17:46     ` Nicolas Frattaroli
  0 siblings, 0 replies; 5+ messages in thread
From: Nicolas Frattaroli @ 2026-04-20 17:46 UTC (permalink / raw)
To: Boris Brezillon
Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Steven Price, Liviu Dudau, dri-devel, linux-kernel,
	kernel

On Monday, 20 April 2026 18:17:35 Central European Summer Time Boris Brezillon wrote:
> On Mon, 20 Apr 2026 17:47:00 +0200
> Nicolas Frattaroli <nicolas.frattaroli@collabora.com> wrote:
> 
> > For fdinfo to be able to fill its evicted counter with data, panthor
> > needs to keep track of whether a GEM object has ever been reclaimed.
> > Just checking whether the pages are resident isn't enough, as newly
> > allocated objects also won't be resident.
> > 
> > Do this with a new atomic_t member on panthor_gem_object. It's increased
> > when an object gets evicted by the shrinker. While it's allowed to wrap
> > around to below zero and assume a value less than a previous observed
> > value, the reclaim counter will never return to 0 for any particular
> > object once it's been reclaimed at least once.
> > 
> > Use this new member to then set the appropriate DRM_GEM_OBJECT_EVICTED
> > status flag for fdinfo, and use it in the gems debugfs. It's possible to
> > distinguish evicted non-resident pages from newly allocated non-resident
> > pages by checking whether reclaimed_count is != 0.
> > 
> > Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
> > ---
> >  drivers/gpu/drm/panthor/panthor_gem.c | 10 ++++++++++
> >  drivers/gpu/drm/panthor/panthor_gem.h | 11 +++++++++++
> >  2 files changed, 21 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
> > index 69cef05b6ef7..4b761b39565d 100644
> > --- a/drivers/gpu/drm/panthor/panthor_gem.c
> > +++ b/drivers/gpu/drm/panthor/panthor_gem.c
> > @@ -687,6 +687,10 @@ static void panthor_gem_evict_locked(struct panthor_gem_object *bo)
> >  	if (drm_WARN_ON_ONCE(bo->base.dev, !bo->backing.pages))
> >  		return;
> >  
> > +	/* Don't ever wrap around as far as 0, jump from INT_MIN to 1 */
> > +	if (!atomic_inc_unless_negative(&bo->reclaimed_count))
> > +		atomic_set(&bo->reclaimed_count, 1);
> 
> Can't we just go
> 
> 	atomic_add_unless(&bo->reclaimed_count, 1, INT_MAX);
> 
> here, to handle the INT_MAX saturation?

Yeah, I was torn between the two. My way does keep a somewhat cyclical
nature, so once it wraps, things could still detect that it's increasing
from one observance to the next. But now that I think about it again,
it's not a property of this count that's worth keeping, I think. Nothing
relies on this behaviour currently, and at best it'll trip up any code
that uses it for a < comparison later down the road.
> > +
> >  	panthor_gem_dev_map_cleanup_locked(bo);
> >  	panthor_gem_backing_cleanup_locked(bo);
> >  	panthor_gem_update_reclaim_state_locked(bo, NULL);
> > @@ -788,6 +792,8 @@ static enum drm_gem_object_status panthor_gem_status(struct drm_gem_object *obj)
> >  
> >  	if (drm_gem_is_imported(&bo->base) || bo->backing.pages)
> >  		res |= DRM_GEM_OBJECT_RESIDENT;
> > +	else if (atomic_read(&bo->reclaimed_count))
> > +		res |= DRM_GEM_OBJECT_EVICTED;
> >  
> >  	return res;
> >  }
> > @@ -1595,6 +1601,7 @@ static void panthor_gem_debugfs_print_flag_names(struct seq_file *m)
> >  	static const char * const gem_state_flags_names[] = {
> >  		[PANTHOR_DEBUGFS_GEM_STATE_IMPORTED_BIT] = "imported",
> >  		[PANTHOR_DEBUGFS_GEM_STATE_EXPORTED_BIT] = "exported",
> > +		[PANTHOR_DEBUGFS_GEM_STATE_EVICTED_BIT] = "evicted",
> >  	};
> >  
> >  	static const char * const gem_usage_flags_names[] = {
> > @@ -1648,6 +1655,9 @@ static void panthor_gem_debugfs_bo_print(struct panthor_gem_object *bo,
> >  
> >  	if (drm_gem_is_imported(&bo->base))
> >  		gem_state_flags |= PANTHOR_DEBUGFS_GEM_STATE_FLAG_IMPORTED;
> > +	else if (!resident_size && atomic_read(&bo->reclaimed_count))
> > +		gem_state_flags |= PANTHOR_DEBUGFS_GEM_STATE_FLAG_EVICTED;
> 
> I think it'd be interesting to know the number of times a BO got
> evicted.

Agreed. Should I add that as a separate column? I think I'll keep the
state flag even if I add a column, because even if it's technically
redundant, it'll still be useful to see without having to compare three
different columns.

> > +
> >  	if (bo->base.dma_buf)
> >  		gem_state_flags |= PANTHOR_DEBUGFS_GEM_STATE_FLAG_EXPORTED;
> >  
> > diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
> > index ae0491d0b121..1ab573f03330 100644
> > --- a/drivers/gpu/drm/panthor/panthor_gem.h
> > +++ b/drivers/gpu/drm/panthor/panthor_gem.h
> > @@ -19,12 +19,16 @@ struct panthor_vm;
> >  enum panthor_debugfs_gem_state_flags {
> >  	PANTHOR_DEBUGFS_GEM_STATE_IMPORTED_BIT = 0,
> >  	PANTHOR_DEBUGFS_GEM_STATE_EXPORTED_BIT = 1,
> > +	PANTHOR_DEBUGFS_GEM_STATE_EVICTED_BIT = 2,
> >  
> >  	/** @PANTHOR_DEBUGFS_GEM_STATE_FLAG_IMPORTED: GEM BO is PRIME imported. */
> >  	PANTHOR_DEBUGFS_GEM_STATE_FLAG_IMPORTED = BIT(PANTHOR_DEBUGFS_GEM_STATE_IMPORTED_BIT),
> >  
> >  	/** @PANTHOR_DEBUGFS_GEM_STATE_FLAG_EXPORTED: GEM BO is PRIME exported. */
> >  	PANTHOR_DEBUGFS_GEM_STATE_FLAG_EXPORTED = BIT(PANTHOR_DEBUGFS_GEM_STATE_EXPORTED_BIT),
> > +
> > +	/** @PANTHOR_DEBUGFS_GEM_STATE_FLAG_EVICTED: GEM BO is evicted to swap. */
> > +	PANTHOR_DEBUGFS_GEM_STATE_FLAG_EVICTED = BIT(PANTHOR_DEBUGFS_GEM_STATE_EVICTED_BIT),
> >  };
> >  
> >  enum panthor_debugfs_gem_usage_flags {
> > @@ -172,6 +176,13 @@ struct panthor_gem_object {
> >  	/** @reclaim_state: Cached reclaim state */
> >  	enum panthor_gem_reclaim_state reclaim_state;
> >  
> > +	/**
> > +	 * @reclaimed_count: How many times object has been evicted to swap.
> > +	 * Never returns to 0 once incremented even on wrap-around, but may
> > +	 * become < 0 and < the previous value if wrap-around occurs.
> 
> With the saturation I suggested, I'd just add that when INT_MAX is
> reached, it will stay there.
> 
> > +	 */
> > +	atomic_t reclaimed_count;
> > +
> >  	/**
> >  	 * @exclusive_vm_root_gem: Root GEM of the exclusive VM this GEM object
> >  	 * is attached to.
> > 
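[Editorial sketch: the saturating alternative Boris suggests can be modelled in userspace with C11 atomics. The add_unless() helper below is a hypothetical stand-in for the kernel's atomic_add_unless(), not its actual implementation.]

```c
#include <assert.h>
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace model of atomic_add_unless(v, a, u): add a to *v unless it
 * currently equals u; returns true if the add happened. Called with
 * u == INT_MAX, the counter saturates at INT_MAX and can never wrap
 * back to 0 (or go negative). */
static bool add_unless(atomic_int *v, int a, int u)
{
	int old = atomic_load(v);

	while (old != u) {
		/* On failure, the CAS reloads the current value into
		 * old, and the loop re-checks it against u. */
		if (atomic_compare_exchange_weak(v, &old, old + a))
			return true;
	}
	return false;
}
```

Compared with the inc_unless_negative/atomic_set pair in the patch, this trades the "still increasing after wrap" property for monotonicity: the count never decreases, so `<` comparisons between two reads stay meaningful, at the cost of sticking at INT_MAX forever.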