From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [Intel-gfx] [PATCH 4/5] drm/i915: Account ring buffer and context state storage
Date: Thu, 8 Jun 2023 15:51:32 +0100
Message-ID: <20230608145133.1059554-5-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20230608145133.1059554-1-tvrtko.ursulin@linux.intel.com>
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Account ring buffers and logical context state space against the owning
client's memory usage stats.
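The accounting rule is small enough to sketch in user space. The snippet
below is an illustrative model only -- all mock_* names are invented
stand-ins for the kernel types, not the real i915 structures or API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the i915 structures involved. */
struct mock_obj {
	size_t size;			/* backing store size in bytes */
};

struct mock_engine {
	struct mock_obj *legacy_ring;	/* ring shared by legacy contexts */
};

struct mock_context {
	struct mock_obj *state;		/* logical context state, may be NULL */
	struct mock_obj *ring;		/* ring buffer backing object */
	struct mock_engine *engine;
};

struct mock_client {
	size_t accounted;		/* bytes charged to this client */
};

/*
 * Mirrors the rule in i915_drm_client_add_context(): charge the context
 * state object when one exists, and charge the ring buffer only when it
 * is not the engine's shared legacy ring, which would otherwise be
 * charged once per context rather than once.
 */
static void mock_client_add_context(struct mock_client *client,
				    const struct mock_context *ce)
{
	if (ce->state)
		client->accounted += ce->state->size;

	if (ce->ring && ce->ring != ce->engine->legacy_ring)
		client->accounted += ce->ring->size;
}
```

The legacy-ring check matters because every legacy context on an engine
points at the same shared ring object; skipping it avoids inflating the
client's total with duplicate charges.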
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
drivers/gpu/drm/i915/gem/i915_gem_context.c | 6 ++++++
drivers/gpu/drm/i915/i915_drm_client.c | 10 ++++++++++
drivers/gpu/drm/i915/i915_drm_client.h | 9 +++++++++
3 files changed, 25 insertions(+)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 35cf6608180e..3f4c74aed3c5 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -1703,6 +1703,8 @@ static void gem_context_register(struct i915_gem_context *ctx,
u32 id)
{
struct drm_i915_private *i915 = ctx->i915;
+ struct i915_gem_engines_iter it;
+ struct intel_context *ce;
void *old;
ctx->file_priv = fpriv;
@@ -1721,6 +1723,10 @@ static void gem_context_register(struct i915_gem_context *ctx,
list_add_tail(&ctx->link, &i915->gem.contexts.list);
spin_unlock(&i915->gem.contexts.lock);
+ for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it)
+ i915_drm_client_add_context(fpriv->client, ce);
+ i915_gem_context_unlock_engines(ctx);
+
/* And finally expose ourselves to userspace via the idr */
old = xa_store(&fpriv->context_xa, id, ctx, GFP_KERNEL);
WARN_ON(old);
diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
index 4cacca568375..777930f4995f 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.c
+++ b/drivers/gpu/drm/i915/i915_drm_client.c
@@ -142,4 +142,14 @@ void i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
i915_drm_client_put(client);
}
+
+void i915_drm_client_add_context(struct i915_drm_client *client,
+ struct intel_context *ce)
+{
+ if (ce->state)
+ i915_drm_client_add_object(client, ce->state->obj);
+
+ if (ce->ring != ce->engine->legacy.ring && ce->ring->vma)
+ i915_drm_client_add_object(client, ce->ring->vma->obj);
+}
#endif
diff --git a/drivers/gpu/drm/i915/i915_drm_client.h b/drivers/gpu/drm/i915/i915_drm_client.h
index 0db68b4d7a4f..a5c29a105af3 100644
--- a/drivers/gpu/drm/i915/i915_drm_client.h
+++ b/drivers/gpu/drm/i915/i915_drm_client.h
@@ -15,6 +15,7 @@
#include "i915_file_private.h"
#include "gem/i915_gem_object_types.h"
+#include "gt/intel_context_types.h"
#define I915_LAST_UABI_ENGINE_CLASS I915_ENGINE_CLASS_COMPUTE
@@ -73,6 +74,8 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file);
void i915_drm_client_add_object(struct i915_drm_client *client,
struct drm_i915_gem_object *obj);
void i915_drm_client_remove_object(struct drm_i915_gem_object *obj);
+void i915_drm_client_add_context(struct i915_drm_client *client,
+ struct intel_context *ce);
#else
static inline void i915_drm_client_add_object(struct i915_drm_client *client,
struct drm_i915_gem_object *obj)
@@ -83,6 +86,12 @@ static inline void i915_drm_client_add_object(struct i915_drm_client *client,
static inline void i915_drm_client_remove_object(struct drm_i915_gem_object *obj)
{
+}
+
+static inline void i915_drm_client_add_context(struct i915_drm_client *client,
+ struct intel_context *ce)
+{
+
}
#endif
--
2.39.2