public inbox for intel-gfx@lists.freedesktop.org
From: "Iddamsetty, Aravind" <aravind.iddamsetty@intel.com>
To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	<Intel-gfx@lists.freedesktop.org>,
	<dri-devel@lists.freedesktop.org>
Subject: Re: [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats printing
Date: Wed, 9 Aug 2023 10:03:23 +0530	[thread overview]
Message-ID: <90de2060-94f4-f985-e3fc-f4f01934119e@intel.com> (raw)
In-Reply-To: <b5904977-9693-34bc-55bf-28387b69e06a@linux.intel.com>



On 03-08-2023 14:19, Tvrtko Ursulin wrote:
> 
> On 03/08/2023 06:15, Iddamsetty, Aravind wrote:
>> On 27-07-2023 15:43, Tvrtko Ursulin wrote:
>>> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>>
>>> Use the newly added drm_print_memory_stats helper to show memory
>>> utilisation of our objects in drm/driver specific fdinfo output.
>>>
>>> To collect the stats we walk the per memory regions object lists
>>> and accumulate object size into the respective drm_memory_stats
>>> categories.
>>>
>>> Objects with multiple possible placements are reported in multiple
>>> regions for total and shared sizes, while other categories are
>>> counted only for the currently active region.
>>>
>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>>> Cc: Aravind Iddamsetty <aravind.iddamsetty@intel.com>
>>> Cc: Rob Clark <robdclark@gmail.com>
>>> ---
>>>   drivers/gpu/drm/i915/i915_drm_client.c | 85 ++++++++++++++++++++++++++
>>>   1 file changed, 85 insertions(+)
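
For context, the per-client fdinfo output would then gain per-region memory
keys roughly of the form sketched below. This is only an illustration,
assuming the standard drm-usage-stats key naming and i915 region names such
as "system" and "local0"; the exact key set and units come from
drm_print_memory_stats() and the supported-status flags passed to it.

  drm-total-system:      <size> KiB
  drm-shared-system:     <size> KiB
  drm-resident-system:   <size> KiB
  drm-purgeable-system:  <size> KiB
  drm-total-local0:      <size> KiB
  ...
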
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
>>> index a61356012df8..9e7a6075ee25 100644
>>> --- a/drivers/gpu/drm/i915/i915_drm_client.c
>>> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
>>> @@ -45,6 +45,89 @@ void __i915_drm_client_free(struct kref *kref)
>>>   }
>>>  
>>>   #ifdef CONFIG_PROC_FS
>>> +static void
>>> +obj_meminfo(struct drm_i915_gem_object *obj,
>>> +        struct drm_memory_stats stats[INTEL_REGION_UNKNOWN])
>>> +{
>>> +    struct intel_memory_region *mr;
>>> +    u64 sz = obj->base.size;
>>> +    enum intel_region_id id;
>>> +    unsigned int i;
>>> +
>>> +    /* Attribute size and shared to all possible memory regions. */
>>> +    for (i = 0; i < obj->mm.n_placements; i++) {
>>> +        mr = obj->mm.placements[i];
>>> +        id = mr->id;
>>> +
>>> +        if (obj->base.handle_count > 1)
>>> +            stats[id].shared += sz;
>>> +        else
>>> +            stats[id].private += sz;
>>> +    }
>>> +
>>> +    /* Attribute other categories to only the current region. */
>>> +    mr = obj->mm.region;
>>> +    if (mr)
>>> +        id = mr->id;
>>> +    else
>>> +        id = INTEL_REGION_SMEM;
>>> +
>>> +    if (!obj->mm.n_placements) {
>>
>> I guess we do not expect to have n_placements set for public objects, is
>> that right?
> 
> I think they are the only ones which can have placements. It is via
> I915_GEM_CREATE_EXT_MEMORY_REGIONS that userspace is able to create them.
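
As an aside, purely for illustration (not from this thread): a minimal
userspace sketch of creating such a two-placement object via
I915_GEM_CREATE_EXT_MEMORY_REGIONS. The drm_fd and the 1 MiB size are
assumptions, and error handling is minimal.

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  /* Create a 1 MiB object allowed to live in either SMEM or LMEM instance 0.
   * Returns the GEM handle, or 0 on failure.
   */
  static uint32_t create_two_placement_bo(int drm_fd)
  {
      struct drm_i915_gem_memory_class_instance placements[2] = {
          { .memory_class = I915_MEMORY_CLASS_SYSTEM, .memory_instance = 0 },
          { .memory_class = I915_MEMORY_CLASS_DEVICE, .memory_instance = 0 },
      };
      struct drm_i915_gem_create_ext_memory_regions regions = {
          .base.name = I915_GEM_CREATE_EXT_MEMORY_REGIONS,
          .num_regions = 2,
          .regions = (uintptr_t)placements,
      };
      struct drm_i915_gem_create_ext create = {
          .size = 1024 * 1024,
          .extensions = (uintptr_t)&regions,
      };

      if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create))
          return 0;

      return create.handle;
  }
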
> 
> My main conundrum in this patch is a few lines above, the loop which
> adds shared and private.
> 
> Question is, if an object can be in either smem or lmem, how do we want
> to report it? This patch adds the size to all possible regions, and
> resident and active only to the region the object currently resides in.
> But perhaps that is wrong. Maybe I should change it so the size is
> counted only against the current region and the other possible
> placements are just ignored. Then if the object is migrated due to
> access patterns or memory pressure, the total size would migrate too.
> 
> I think I was trying to achieve something here (more visibility into
> what kind of backing store clients are allocating) which maybe does not
> work too well with the current categories.
> 
> Namely, if userspace allocates say a 1 MiB object with placement in
> either smem or lmem, and it is currently resident in lmem, I wanted it
> to show as:
> 
>  total-smem: 1 MiB
>  resident-smem: 0
>  total-lmem: 1 MiB
>  resident-lmem: 1 MiB
> 
> To constantly show how, in theory, the client could be using memory
> from either region. Maybe that is misleading and it should instead be:
> 
>  total-smem: 0
>  resident-smem: 0
>  total-lmem: 1 MiB
>  resident-lmem: 1 MiB
> 
> ?

I think the current implementation will also not match the memory region
info reported by the query ioctl. While what you say is true, I'm not sure
there would be a client which tracks its allocations, say for an object
with two placements (LMEM and SMEM), and assumes that because it made a
reservation in SMEM a later migration there cannot fail.
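
For example, with the current approach the 1 MiB two-placement object you
describe, resident in lmem, would show up under both regions' totals at
once, roughly (again only a sketch, assuming region names "system" and
"local0"):

  drm-total-system:      1024 KiB
  drm-resident-system:   0
  drm-total-local0:      1024 KiB
  drm-resident-local0:   1024 KiB

so summing drm-total-* across regions over-counts what the client has
actually allocated, which ties into the concern above about it not lining
up with the region info from the query ioctl.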

Thanks,
Aravind.

> 
> And then if/when the same object gets migrated to smem it changes to
> (let's assume it is also not resident any more but got swapped out):
> 
>  total-smem: 1 MiB
>  resident-smem: 0
>  total-lmem: 0
>  resident-lmem: 0
> 
> Regards,
> 
> Tvrtko
> 
>>> +        if (obj->base.handle_count > 1)
>>> +            stats[id].shared += sz;
>>> +        else
>>> +            stats[id].private += sz;
>>> +    }
>>> +
>>> +    if (i915_gem_object_has_pages(obj)) {
>>> +        stats[id].resident += sz;
>>> +
>>> +        if (!dma_resv_test_signaled(obj->base.resv,
>>> +                        dma_resv_usage_rw(true)))
>>> +            stats[id].active += sz;
>>> +        else if (i915_gem_object_is_shrinkable(obj) &&
>>> +             obj->mm.madv == I915_MADV_DONTNEED)
>>> +            stats[id].purgeable += sz;
>>> +    }
>>> +}
>>> +
>>> +static void show_meminfo(struct drm_printer *p, struct drm_file *file)
>>> +{
>>> +    struct drm_memory_stats stats[INTEL_REGION_UNKNOWN] = {};
>>> +    struct drm_i915_file_private *fpriv = file->driver_priv;
>>> +    struct i915_drm_client *client = fpriv->client;
>>> +    struct drm_i915_private *i915 = fpriv->i915;
>>> +    struct drm_i915_gem_object *obj;
>>> +    struct intel_memory_region *mr;
>>> +    struct list_head *pos;
>>> +    unsigned int id;
>>> +
>>> +    /* Public objects. */
>>> +    spin_lock(&file->table_lock);
>>> +    idr_for_each_entry(&file->object_idr, obj, id)
>>> +        obj_meminfo(obj, stats);
>>> +    spin_unlock(&file->table_lock);
>>> +
>>> +    /* Internal objects. */
>>> +    rcu_read_lock();
>>> +    list_for_each_rcu(pos, &client->objects_list) {
>>> +        obj = i915_gem_object_get_rcu(list_entry(pos, typeof(*obj),
>>> +                             client_link));
>>> +        if (!obj)
>>> +            continue;
>>> +        obj_meminfo(obj, stats);
>>> +        i915_gem_object_put(obj);
>>> +    }
>>> +    rcu_read_unlock();
>>> +
>>> +    for_each_memory_region(mr, i915, id)
>>> +        drm_print_memory_stats(p,
>>> +                       &stats[id],
>>> +                       DRM_GEM_OBJECT_RESIDENT |
>>> +                       DRM_GEM_OBJECT_PURGEABLE,
>>> +                       mr->name);
>>> +}
>>> +
>>>   static const char * const uabi_class_names[] = {
>>>       [I915_ENGINE_CLASS_RENDER] = "render",
>>>       [I915_ENGINE_CLASS_COPY] = "copy",
>>> @@ -106,6 +189,8 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file)
>>>        *
>>> ******************************************************************
>>>        */
>>>  
>>> +    show_meminfo(p, file);
>>> +
>>>       if (GRAPHICS_VER(i915) < 8)
>>>           return;
>>>
>>
>>
>>


Thread overview: 26+ messages
2023-07-27 10:13 [Intel-gfx] [PATCH v6 0/5] fdinfo memory stats Tvrtko Ursulin
2023-07-27 10:13 ` [Intel-gfx] [PATCH 1/5] drm/i915: Add ability for tracking buffer objects per client Tvrtko Ursulin
2023-08-03  5:30   ` Iddamsetty, Aravind
2023-07-27 10:13 ` [Intel-gfx] [PATCH 2/5] drm/i915: Record which client owns a VM Tvrtko Ursulin
2023-07-27 10:13 ` [Intel-gfx] [PATCH 3/5] drm/i915: Track page table backing store usage Tvrtko Ursulin
2023-07-27 10:13 ` [Intel-gfx] [PATCH 4/5] drm/i915: Account ring buffer and context state storage Tvrtko Ursulin
2023-07-27 10:13 ` [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats printing Tvrtko Ursulin
2023-08-03  5:15   ` Iddamsetty, Aravind
2023-08-03  8:49     ` Tvrtko Ursulin
2023-08-09  4:33       ` Iddamsetty, Aravind [this message]
2023-07-27 11:17 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for fdinfo memory stats (rev5) Patchwork
2023-07-27 11:40 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
2023-07-27 12:58 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for fdinfo memory stats (rev6) Patchwork
2023-07-27 13:16 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2023-07-27 15:30 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2023-09-21 11:48 [Intel-gfx] [PATCH v7 0/5] fdinfo memory stats Tvrtko Ursulin
2023-09-21 11:48 ` [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats printing Tvrtko Ursulin
2023-09-22  8:48   ` Iddamsetty, Aravind
2023-09-22 10:57     ` Tvrtko Ursulin
2023-09-22 12:33       ` Iddamsetty, Aravind
2023-09-22 11:01   ` Andi Shyti
2023-07-07 13:02 [Intel-gfx] [PATCH v5 0/5] fdinfo memory stats Tvrtko Ursulin
2023-07-07 13:02 ` [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats printing Tvrtko Ursulin
2023-08-24 11:35   ` Upadhyay, Tejas
2023-09-20 14:22     ` Tvrtko Ursulin
2023-09-20 14:39       ` Rob Clark
2023-06-12 10:46 [Intel-gfx] [PATCH v4 0/5] fdinfo memory stats Tvrtko Ursulin
2023-06-12 10:46 ` [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats printing Tvrtko Ursulin
2023-06-08 14:51 [Intel-gfx] [PATCH v2 0/5] fdinfo memory stats Tvrtko Ursulin
2023-06-08 14:51 ` [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats printing Tvrtko Ursulin
