From: Matthew Auld <matthew.auld@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
intel-gfx@lists.freedesktop.org
Cc: "Vetter, Daniel" <daniel.vetter@intel.com>,
dri-devel@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH 2/2] drm/i915: add back the avail tracking
Date: Fri, 18 Jun 2021 14:57:26 +0100 [thread overview]
Message-ID: <a016ba03-c76c-c484-6591-b1b534a4d286@intel.com> (raw)
In-Reply-To: <fc9d656c-ec77-8522-8cd5-3ec492b8f236@linux.intel.com>
On 18/06/2021 14:44, Thomas Hellström wrote:
>
> On 6/18/21 3:31 PM, Matthew Auld wrote:
>> Looks like it got lost along the way, so add it back. This is needed for
>> the region query uAPI where we need to report an accurate available size
>> for lmem.
>
> Hmm. How is this uAPI intended to work in a multi-client environment
> where the returned value can be nothing but a snapshot of the current
> state, that can't be relied upon?
Ok, maybe I'm overselling it. s/accurate/current snapshot/. It does feel
more useful than just returning -1 or so?
Daniel, Jason, any thoughts? Or maybe mr->total is all that real
userspace really cares about?
>
>> This time around let's push it directly into the allocator, which
>> simplifies things, like not having to care about internal fragmentation,
>> or having to remember to track things for all possible interfaces that
>> might want to allocate or reserve pages.
>>
>> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> ---
>> drivers/gpu/drm/i915/i915_buddy.c | 6 ++++++
>> drivers/gpu/drm/i915/i915_buddy.h | 1 +
>> drivers/gpu/drm/i915/i915_debugfs.c | 5 +++--
>> drivers/gpu/drm/i915/i915_query.c | 2 +-
>> drivers/gpu/drm/i915/i915_ttm_buddy_manager.c | 13 +++++++++++++
>> drivers/gpu/drm/i915/i915_ttm_buddy_manager.h | 2 ++
>> drivers/gpu/drm/i915/intel_memory_region.c | 8 ++++++++
>> drivers/gpu/drm/i915/intel_memory_region.h | 4 ++++
>> 8 files changed, 38 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/i915_buddy.c
>> b/drivers/gpu/drm/i915/i915_buddy.c
>> index 29dd7d0310c1..27cd2487a18f 100644
>> --- a/drivers/gpu/drm/i915/i915_buddy.c
>> +++ b/drivers/gpu/drm/i915/i915_buddy.c
>> @@ -80,6 +80,7 @@ int i915_buddy_init(struct i915_buddy_mm *mm, u64
>> size, u64 chunk_size)
>> size = round_down(size, chunk_size);
>> mm->size = size;
>> + mm->avail = size;
>> mm->chunk_size = chunk_size;
>> mm->max_order = ilog2(size) - ilog2(chunk_size);
>> @@ -159,6 +160,8 @@ void i915_buddy_fini(struct i915_buddy_mm *mm)
>> i915_block_free(mm, mm->roots[i]);
>> }
>> + GEM_WARN_ON(mm->avail != mm->size);
>> +
>> kfree(mm->roots);
>> kfree(mm->free_list);
>> kmem_cache_destroy(mm->slab_blocks);
>> @@ -235,6 +238,7 @@ void i915_buddy_free(struct i915_buddy_mm *mm,
>> struct i915_buddy_block *block)
>> {
>> GEM_BUG_ON(!i915_buddy_block_is_allocated(block));
>> + mm->avail += i915_buddy_block_size(mm, block);
>> __i915_buddy_free(mm, block);
>> }
>> @@ -288,6 +292,7 @@ i915_buddy_alloc(struct i915_buddy_mm *mm,
>> unsigned int order)
>> }
>> mark_allocated(block);
>> + mm->avail -= i915_buddy_block_size(mm, block);
>> kmemleak_update_trace(block);
>> return block;
>> @@ -373,6 +378,7 @@ int i915_buddy_alloc_range(struct i915_buddy_mm *mm,
>> }
>> mark_allocated(block);
>> + mm->avail -= i915_buddy_block_size(mm, block);
>> list_add_tail(&block->link, &allocated);
>> continue;
>> }
>> diff --git a/drivers/gpu/drm/i915/i915_buddy.h
>> b/drivers/gpu/drm/i915/i915_buddy.h
>> index 37f8c42071d1..feb7c1bb6244 100644
>> --- a/drivers/gpu/drm/i915/i915_buddy.h
>> +++ b/drivers/gpu/drm/i915/i915_buddy.h
>> @@ -70,6 +70,7 @@ struct i915_buddy_mm {
>> /* Must be at least PAGE_SIZE */
>> u64 chunk_size;
>> u64 size;
>> + u64 avail;
>> };
>> static inline u64
>> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c
>> b/drivers/gpu/drm/i915/i915_debugfs.c
>> index cc745751ac53..4765f220469e 100644
>> --- a/drivers/gpu/drm/i915/i915_debugfs.c
>> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
>> @@ -246,8 +246,9 @@ static int i915_gem_object_info(struct seq_file
>> *m, void *data)
>> atomic_read(&i915->mm.free_count),
>> i915->mm.shrink_memory);
>> for_each_memory_region(mr, i915, id)
>> - seq_printf(m, "%s: total:%pa, available:%pa bytes\n",
>> - mr->name, &mr->total, &mr->avail);
>> + seq_printf(m, "%s: total:%pa, available:%llu bytes\n",
>> + mr->name, &mr->total,
>> + intel_memory_region_get_avail(mr));
>> return 0;
>> }
>> diff --git a/drivers/gpu/drm/i915/i915_query.c
>> b/drivers/gpu/drm/i915/i915_query.c
>> index e49da36c62fb..f10dcea94ac9 100644
>> --- a/drivers/gpu/drm/i915/i915_query.c
>> +++ b/drivers/gpu/drm/i915/i915_query.c
>> @@ -465,7 +465,7 @@ static int query_memregion_info(struct
>> drm_i915_private *i915,
>> info.region.memory_class = mr->type;
>> info.region.memory_instance = mr->instance;
>> info.probed_size = mr->total;
>> - info.unallocated_size = mr->avail;
>> + info.unallocated_size = intel_memory_region_get_avail(mr);
>> if (__copy_to_user(info_ptr, &info, sizeof(info)))
>> return -EFAULT;
>> diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
>> b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
>> index fc7ad5c035b8..562d11edc5e4 100644
>> --- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
>> +++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
>> @@ -246,3 +246,16 @@ int i915_ttm_buddy_man_reserve(struct
>> ttm_resource_manager *man,
>> return ret;
>> }
>> +/**
>> + * i915_ttm_buddy_man_get_avail - Get the currently available size
>> + * @man: The buddy allocator ttm manager
>> + *
>> + * Return: The available size in bytes
>> + */
>> +u64 i915_ttm_buddy_man_get_avail(struct ttm_resource_manager *man)
>> +{
>> + struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
>> + struct i915_buddy_mm *mm = &bman->mm;
>> +
>> + return mm->avail;
>> +}
>> diff --git a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
>> b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
>> index 26026213e20a..39f5b1a4c3e7 100644
>> --- a/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
>> +++ b/drivers/gpu/drm/i915/i915_ttm_buddy_manager.h
>> @@ -53,4 +53,6 @@ int i915_ttm_buddy_man_fini(struct ttm_device *bdev,
>> int i915_ttm_buddy_man_reserve(struct ttm_resource_manager *man,
>> u64 start, u64 size);
>> +u64 i915_ttm_buddy_man_get_avail(struct ttm_resource_manager *man);
>> +
>> #endif
>> diff --git a/drivers/gpu/drm/i915/intel_memory_region.c
>> b/drivers/gpu/drm/i915/intel_memory_region.c
>> index df59f884d37c..269cbb60e233 100644
>> --- a/drivers/gpu/drm/i915/intel_memory_region.c
>> +++ b/drivers/gpu/drm/i915/intel_memory_region.c
>> @@ -132,6 +132,14 @@ void intel_memory_region_set_name(struct
>> intel_memory_region *mem,
>> va_end(ap);
>> }
>> +u64 intel_memory_region_get_avail(struct intel_memory_region *mr)
>> +{
>> + if (mr->type == INTEL_MEMORY_LOCAL)
>> + return i915_ttm_buddy_man_get_avail(mr->region_private);
>> +
>> + return mr->avail;
>> +}
>
> Perhaps a kerneldoc comment here as well?
>
>
>> +
>> static void __intel_memory_region_destroy(struct kref *kref)
>> {
>> struct intel_memory_region *mem =
>> diff --git a/drivers/gpu/drm/i915/intel_memory_region.h
>> b/drivers/gpu/drm/i915/intel_memory_region.h
>> index 2be8433d373a..6f7a073d5a70 100644
>> --- a/drivers/gpu/drm/i915/intel_memory_region.h
>> +++ b/drivers/gpu/drm/i915/intel_memory_region.h
>> @@ -74,6 +74,7 @@ struct intel_memory_region {
>> resource_size_t io_start;
>> resource_size_t min_page_size;
>> resource_size_t total;
>> + /* Do not access directly. Use the accessor instead. */
>> resource_size_t avail;
>> u16 type;
>> @@ -125,4 +126,7 @@ intel_memory_region_set_name(struct
>> intel_memory_region *mem,
>> int intel_memory_region_reserve(struct intel_memory_region *mem,
>> resource_size_t offset,
>> resource_size_t size);
>> +
>> +u64 intel_memory_region_get_avail(struct intel_memory_region *mem);
>> +
>> #endif
>
> Otherwise code itself looks good to me.
>
> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>
>
>
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
Thread overview: 9+ messages
2021-06-18 13:31 [Intel-gfx] [PATCH 1/2] drm/i915/selftests: add back the selftest() hook for the buddy Matthew Auld
2021-06-18 13:31 ` [Intel-gfx] [PATCH 2/2] drm/i915: add back the avail tracking Matthew Auld
2021-06-18 13:44 ` Thomas Hellström
2021-06-18 13:57 ` Matthew Auld [this message]
2021-06-18 13:36 ` [Intel-gfx] [PATCH 1/2] drm/i915/selftests: add back the selftest() hook for the buddy Thomas Hellström
2021-06-18 13:43 ` Matthew Auld
2021-06-18 16:06 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for series starting with [1/2] " Patchwork
2021-06-18 16:34 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-06-18 18:35 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork