* [PATCH v2 0/2] drm: fdinfo memory stats
@ 2023-04-10 21:06 Rob Clark
2023-04-10 21:06 ` [PATCH v2 1/2] drm: Add " Rob Clark
2023-04-11 16:47 ` [PATCH v2 0/2] drm: " Rob Clark
0 siblings, 2 replies; 20+ messages in thread
From: Rob Clark @ 2023-04-10 21:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Boris Brezillon, Tvrtko Ursulin,
Christopher Healy, Emil Velikov, Rob Clark,
open list:DOCUMENTATION, open list, Sean Paul
From: Rob Clark <robdclark@chromium.org>
Similar motivation to another recent attempt[1], but with an attempt
to have some shared code for this, as well as documentation.
It is probably a bit UMA-centric, I guess devices with VRAM might want
some placement stats as well. But this seems like a reasonable start.
Basic gputop support: https://patchwork.freedesktop.org/series/116236/
And already nvtop support: https://github.com/Syllo/nvtop/pull/204
[1] https://patchwork.freedesktop.org/series/112397/
Rob Clark (2):
drm: Add fdinfo memory stats
drm/msm: Add memory stats to fdinfo
Documentation/gpu/drm-usage-stats.rst | 21 +++++++
drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
drivers/gpu/drm/msm/msm_gpu.c | 2 -
include/drm/drm_file.h | 10 ++++
5 files changed, 134 insertions(+), 3 deletions(-)
--
2.39.2
* [PATCH v2 1/2] drm: Add fdinfo memory stats
2023-04-10 21:06 [PATCH v2 0/2] drm: fdinfo memory stats Rob Clark
@ 2023-04-10 21:06 ` Rob Clark
2023-04-11 10:43 ` Daniel Vetter
2023-04-11 16:47 ` [PATCH v2 0/2] drm: " Rob Clark
1 sibling, 1 reply; 20+ messages in thread
From: Rob Clark @ 2023-04-10 21:06 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Boris Brezillon, Tvrtko Ursulin,
Christopher Healy, Emil Velikov, Rob Clark, David Airlie,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Jonathan Corbet, open list:DOCUMENTATION,
open list
From: Rob Clark <robdclark@chromium.org>
Add a helper to dump memory stats to fdinfo. For the things the drm
core isn't aware of, use a callback.
v2: Fix typos, change size units to match docs, use div_u64
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
---
Documentation/gpu/drm-usage-stats.rst | 21 +++++++
drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
include/drm/drm_file.h | 10 ++++
3 files changed, 110 insertions(+)
diff --git a/Documentation/gpu/drm-usage-stats.rst b/Documentation/gpu/drm-usage-stats.rst
index b46327356e80..b5e7802532ed 100644
--- a/Documentation/gpu/drm-usage-stats.rst
+++ b/Documentation/gpu/drm-usage-stats.rst
@@ -105,6 +105,27 @@ object belong to this client, in the respective memory region.
Default unit shall be bytes with optional unit specifiers of 'KiB' or 'MiB'
indicating kibi- or mebi-bytes.
+- drm-shared-memory: <uint> [KiB|MiB]
+
+The total size of buffers that are shared with another file (ie. have more
+than a single handle).
+
+- drm-private-memory: <uint> [KiB|MiB]
+
+The total size of buffers that are not shared with another file.
+
+- drm-resident-memory: <uint> [KiB|MiB]
+
+The total size of buffers that are resident in system memory.
+
+- drm-purgeable-memory: <uint> [KiB|MiB]
+
+The total size of buffers that are purgeable.
+
+- drm-active-memory: <uint> [KiB|MiB]
+
+The total size of buffers that are active on one or more rings.
+
- drm-cycles-<str> <uint>
Engine identifier string must be the same as the one specified in the
diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
index a51ff8cee049..085b01842a87 100644
--- a/drivers/gpu/drm/drm_file.c
+++ b/drivers/gpu/drm/drm_file.c
@@ -42,6 +42,7 @@
#include <drm/drm_client.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
+#include <drm/drm_gem.h>
#include <drm/drm_print.h>
#include "drm_crtc_internal.h"
@@ -868,6 +869,84 @@ void drm_send_event(struct drm_device *dev, struct drm_pending_event *e)
}
EXPORT_SYMBOL(drm_send_event);
+static void print_size(struct drm_printer *p, const char *stat, size_t sz)
+{
+ const char *units[] = {"", " KiB", " MiB"};
+ unsigned u;
+
+ for (u = 0; u < ARRAY_SIZE(units) - 1; u++) {
+ if (sz < SZ_1K)
+ break;
+ sz = div_u64(sz, SZ_1K);
+ }
+
+ drm_printf(p, "%s:\t%zu%s\n", stat, sz, units[u]);
+}
+
+/**
+ * drm_print_memory_stats - Helper to print standard fdinfo memory stats
+ * @file: the DRM file
+ * @p: the printer to print output to
+ * @status: callback to get driver tracked object status
+ *
+ * Helper to iterate over GEM objects with a handle allocated in the specified
+ * file. The optional status callback can return additional object state which
+ * determines which stats the object is counted against. The callback is called
+ * under table_lock. Racing against object status change is "harmless", and the
+ * callback can expect to not race against object destruction.
+ */
+void drm_print_memory_stats(struct drm_file *file, struct drm_printer *p,
+ enum drm_gem_object_status (*status)(struct drm_gem_object *))
+{
+ struct drm_gem_object *obj;
+ struct {
+ size_t shared;
+ size_t private;
+ size_t resident;
+ size_t purgeable;
+ size_t active;
+ } size = {0};
+ int id;
+
+ spin_lock(&file->table_lock);
+ idr_for_each_entry (&file->object_idr, obj, id) {
+ enum drm_gem_object_status s = 0;
+
+ if (status)
+ s = status(obj);
+
+ if (obj->handle_count > 1) {
+ size.shared += obj->size;
+ } else {
+ size.private += obj->size;
+ }
+
+ if (s & DRM_GEM_OBJECT_RESIDENT) {
+ size.resident += obj->size;
+ s &= ~DRM_GEM_OBJECT_PURGEABLE;
+ }
+
+ if (s & DRM_GEM_OBJECT_ACTIVE) {
+ size.active += obj->size;
+ s &= ~DRM_GEM_OBJECT_PURGEABLE;
+ }
+
+ if (s & DRM_GEM_OBJECT_PURGEABLE)
+ size.purgeable += obj->size;
+ }
+ spin_unlock(&file->table_lock);
+
+ print_size(p, "drm-shared-memory", size.shared);
+ print_size(p, "drm-private-memory", size.private);
+
+ if (status) {
+ print_size(p, "drm-resident-memory", size.resident);
+ print_size(p, "drm-purgeable-memory", size.purgeable);
+ print_size(p, "drm-active-memory", size.active);
+ }
+}
+EXPORT_SYMBOL(drm_print_memory_stats);
+
/**
* mock_drm_getfile - Create a new struct file for the drm device
* @minor: drm minor to wrap (e.g. #drm_device.primary)
diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
index 0d1f853092ab..7bd8a1374f39 100644
--- a/include/drm/drm_file.h
+++ b/include/drm/drm_file.h
@@ -41,6 +41,7 @@
struct dma_fence;
struct drm_file;
struct drm_device;
+struct drm_printer;
struct device;
struct file;
@@ -438,6 +439,15 @@ void drm_send_event_timestamp_locked(struct drm_device *dev,
struct drm_pending_event *e,
ktime_t timestamp);
+enum drm_gem_object_status {
+ DRM_GEM_OBJECT_RESIDENT = BIT(0),
+ DRM_GEM_OBJECT_PURGEABLE = BIT(1),
+ DRM_GEM_OBJECT_ACTIVE = BIT(2),
+};
+
+void drm_print_memory_stats(struct drm_file *file, struct drm_printer *p,
+ enum drm_gem_object_status (*status)(struct drm_gem_object *));
+
struct file *mock_drm_getfile(struct drm_minor *minor, unsigned int flags);
#endif /* _DRM_FILE_H_ */
--
2.39.2
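As a usage illustration (not part of this series): a driver could wire the
helper up from its file_operations::show_fdinfo roughly like the sketch
below. The foo_* names, the madv field and the pages check are assumptions
for illustration, not real driver code:

/* needs <drm/drm_file.h>, <drm/drm_gem.h>, <drm/drm_print.h> */

static enum drm_gem_object_status foo_gem_status(struct drm_gem_object *obj)
{
        struct foo_gem_object *bo = to_foo_bo(obj);     /* hypothetical driver BO */
        enum drm_gem_object_status status = 0;

        if (bo->pages)                          /* backing pages allocated */
                status |= DRM_GEM_OBJECT_RESIDENT;

        if (bo->madv == FOO_MADV_DONTNEED)      /* userspace marked it purgeable */
                status |= DRM_GEM_OBJECT_PURGEABLE;

        return status;
}

static void foo_show_fdinfo(struct seq_file *m, struct file *f)
{
        struct drm_file *file = f->private_data;
        struct drm_printer p = drm_seq_file_printer(m);

        drm_print_memory_stats(file, &p, foo_gem_status);
}

With that in place, /proc/<pid>/fdinfo/<fd> gains the drm-shared-memory and
drm-private-memory lines unconditionally, and the resident/purgeable/active
lines whenever a status callback is passed.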
* Re: [PATCH v2 1/2] drm: Add fdinfo memory stats
2023-04-10 21:06 ` [PATCH v2 1/2] drm: Add " Rob Clark
@ 2023-04-11 10:43 ` Daniel Vetter
2023-04-11 15:02 ` Rob Clark
0 siblings, 1 reply; 20+ messages in thread
From: Daniel Vetter @ 2023-04-11 10:43 UTC (permalink / raw)
To: Rob Clark
Cc: dri-devel, linux-arm-msm, freedreno, Boris Brezillon,
Tvrtko Ursulin, Christopher Healy, Emil Velikov, Rob Clark,
David Airlie, Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Jonathan Corbet, open list:DOCUMENTATION,
open list
On Mon, Apr 10, 2023 at 02:06:06PM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> Add a helper to dump memory stats to fdinfo. For the things the drm
> core isn't aware of, use a callback.
>
> v2: Fix typos, change size units to match docs, use div_u64
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Uh can't we wire this up by default? Having this as a per-driver opt-in
sounds like we'll get maximally fragmented drm fd_info, and since that's
uapi I don't think that's any good at all.
I think it's time we have
- drm_fd_info
- rolled out to all drivers in their fops
- with feature checks as appropriate
- push the driver-specific things into a drm_driver callback
And I guess we should start giving people a hard time for making things
needlessly driver-specific ... there's really no reason at all for this not
to be consistent across drivers.
-Daniel
> ---
> Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> include/drm/drm_file.h | 10 ++++
> 3 files changed, 110 insertions(+)
>
> diff --git a/Documentation/gpu/drm-usage-stats.rst b/Documentation/gpu/drm-usage-stats.rst
> index b46327356e80..b5e7802532ed 100644
> --- a/Documentation/gpu/drm-usage-stats.rst
> +++ b/Documentation/gpu/drm-usage-stats.rst
> @@ -105,6 +105,27 @@ object belong to this client, in the respective memory region.
> Default unit shall be bytes with optional unit specifiers of 'KiB' or 'MiB'
> indicating kibi- or mebi-bytes.
>
> +- drm-shared-memory: <uint> [KiB|MiB]
> +
> +The total size of buffers that are shared with another file (ie. have more
> +than a single handle).
> +
> +- drm-private-memory: <uint> [KiB|MiB]
> +
> +The total size of buffers that are not shared with another file.
> +
> +- drm-resident-memory: <uint> [KiB|MiB]
> +
> +The total size of buffers that are resident in system memory.
> +
> +- drm-purgeable-memory: <uint> [KiB|MiB]
> +
> +The total size of buffers that are purgeable.
> +
> +- drm-active-memory: <uint> [KiB|MiB]
> +
> +The total size of buffers that are active on one or more rings.
> +
> - drm-cycles-<str> <uint>
>
> Engine identifier string must be the same as the one specified in the
> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
> index a51ff8cee049..085b01842a87 100644
> --- a/drivers/gpu/drm/drm_file.c
> +++ b/drivers/gpu/drm/drm_file.c
> @@ -42,6 +42,7 @@
> #include <drm/drm_client.h>
> #include <drm/drm_drv.h>
> #include <drm/drm_file.h>
> +#include <drm/drm_gem.h>
> #include <drm/drm_print.h>
>
> #include "drm_crtc_internal.h"
> @@ -868,6 +869,84 @@ void drm_send_event(struct drm_device *dev, struct drm_pending_event *e)
> }
> EXPORT_SYMBOL(drm_send_event);
>
> +static void print_size(struct drm_printer *p, const char *stat, size_t sz)
> +{
> + const char *units[] = {"", " KiB", " MiB"};
> + unsigned u;
> +
> + for (u = 0; u < ARRAY_SIZE(units) - 1; u++) {
> + if (sz < SZ_1K)
> + break;
> + sz = div_u64(sz, SZ_1K);
> + }
> +
> + drm_printf(p, "%s:\t%zu%s\n", stat, sz, units[u]);
> +}
> +
> +/**
> + * drm_print_memory_stats - Helper to print standard fdinfo memory stats
> + * @file: the DRM file
> + * @p: the printer to print output to
> + * @status: callback to get driver tracked object status
> + *
> + * Helper to iterate over GEM objects with a handle allocated in the specified
> + * file. The optional status callback can return additional object state which
> + * determines which stats the object is counted against. The callback is called
> + * under table_lock. Racing against object status change is "harmless", and the
> + * callback can expect to not race against object destruction.
> + */
> +void drm_print_memory_stats(struct drm_file *file, struct drm_printer *p,
> + enum drm_gem_object_status (*status)(struct drm_gem_object *))
> +{
> + struct drm_gem_object *obj;
> + struct {
> + size_t shared;
> + size_t private;
> + size_t resident;
> + size_t purgeable;
> + size_t active;
> + } size = {0};
> + int id;
> +
> + spin_lock(&file->table_lock);
> + idr_for_each_entry (&file->object_idr, obj, id) {
> + enum drm_gem_object_status s = 0;
> +
> + if (status)
> + s = status(obj);
> +
> + if (obj->handle_count > 1) {
> + size.shared += obj->size;
> + } else {
> + size.private += obj->size;
> + }
> +
> + if (s & DRM_GEM_OBJECT_RESIDENT) {
> + size.resident += obj->size;
> + s &= ~DRM_GEM_OBJECT_PURGEABLE;
> + }
> +
> + if (s & DRM_GEM_OBJECT_ACTIVE) {
> + size.active += obj->size;
> + s &= ~DRM_GEM_OBJECT_PURGEABLE;
> + }
> +
> + if (s & DRM_GEM_OBJECT_PURGEABLE)
> + size.purgeable += obj->size;
> + }
> + spin_unlock(&file->table_lock);
> +
> + print_size(p, "drm-shared-memory", size.shared);
> + print_size(p, "drm-private-memory", size.private);
> +
> + if (status) {
> + print_size(p, "drm-resident-memory", size.resident);
> + print_size(p, "drm-purgeable-memory", size.purgeable);
> + print_size(p, "drm-active-memory", size.active);
> + }
> +}
> +EXPORT_SYMBOL(drm_print_memory_stats);
> +
> /**
> * mock_drm_getfile - Create a new struct file for the drm device
> * @minor: drm minor to wrap (e.g. #drm_device.primary)
> diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
> index 0d1f853092ab..7bd8a1374f39 100644
> --- a/include/drm/drm_file.h
> +++ b/include/drm/drm_file.h
> @@ -41,6 +41,7 @@
> struct dma_fence;
> struct drm_file;
> struct drm_device;
> +struct drm_printer;
> struct device;
> struct file;
>
> @@ -438,6 +439,15 @@ void drm_send_event_timestamp_locked(struct drm_device *dev,
> struct drm_pending_event *e,
> ktime_t timestamp);
>
> +enum drm_gem_object_status {
> + DRM_GEM_OBJECT_RESIDENT = BIT(0),
> + DRM_GEM_OBJECT_PURGEABLE = BIT(1),
> + DRM_GEM_OBJECT_ACTIVE = BIT(2),
> +};
> +
> +void drm_print_memory_stats(struct drm_file *file, struct drm_printer *p,
> + enum drm_gem_object_status (*status)(struct drm_gem_object *));
> +
> struct file *mock_drm_getfile(struct drm_minor *minor, unsigned int flags);
>
> #endif /* _DRM_FILE_H_ */
> --
> 2.39.2
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
* Re: [PATCH v2 1/2] drm: Add fdinfo memory stats
2023-04-11 10:43 ` Daniel Vetter
@ 2023-04-11 15:02 ` Rob Clark
2023-04-11 15:10 ` Daniel Vetter
0 siblings, 1 reply; 20+ messages in thread
From: Rob Clark @ 2023-04-11 15:02 UTC (permalink / raw)
To: Rob Clark, dri-devel, linux-arm-msm, freedreno, Boris Brezillon,
Tvrtko Ursulin, Christopher Healy, Emil Velikov, Rob Clark,
David Airlie, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, open list:DOCUMENTATION, open list
Cc: Daniel Vetter
On Tue, Apr 11, 2023 at 3:43 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Mon, Apr 10, 2023 at 02:06:06PM -0700, Rob Clark wrote:
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Add a helper to dump memory stats to fdinfo. For the things the drm
> > core isn't aware of, use a callback.
> >
> > v2: Fix typos, change size units to match docs, use div_u64
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
>
> Uh can't we wire this up by default? Having this as a per-driver opt-in
> sounds like we'll get maximally fragmented drm fd_info, and since that's
> uapi I don't think that's any good at all.
That is the reason for the centralized documentation of the props (and
why for this one I added a helper, rather than continuing the current
pattern of everyone rolling their own).
We _could_ (and I had contemplated) do this all in core if (a) we
move madv to drm_gem_object, and (b) track
drm_gem_get_pages()/drm_gem_put_pages(). I guess neither is totally
unreasonable; pretty much all the non-ttm/non-cma GEM drivers have
some form of madvise ioctl and use
drm_gem_get_pages()/drm_gem_put_pages().
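Very roughly, the core-computed variant might then look something like this
(hypothetical sketch; neither obj->pages, obj->madv nor DRM_MADV_DONTNEED
exist in drm_gem_object today):

static enum drm_gem_object_status gem_core_status(struct drm_gem_object *obj)
{
        enum drm_gem_object_status s = 0;

        if (obj->pages)                         /* assumed core-tracked pages */
                s |= DRM_GEM_OBJECT_RESIDENT;

        if (obj->madv == DRM_MADV_DONTNEED)     /* assumed core madv state */
                s |= DRM_GEM_OBJECT_PURGEABLE;

        return s;
}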
BR,
-R
> I think it's time we have
> - drm_fd_info
> - rolled out to all drivers in their fops
> - with feature checks as appropriate
> - push the driver-specific things into a drm_driver callback
>
> And I guess start peopling giving a hard time for making things needless
> driver-specifict ... there's really no reason at all this is not
> consistent across drivers.
> -Daniel
>
> > ---
> > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > include/drm/drm_file.h | 10 ++++
> > 3 files changed, 110 insertions(+)
> >
> > diff --git a/Documentation/gpu/drm-usage-stats.rst b/Documentation/gpu/drm-usage-stats.rst
> > index b46327356e80..b5e7802532ed 100644
> > --- a/Documentation/gpu/drm-usage-stats.rst
> > +++ b/Documentation/gpu/drm-usage-stats.rst
> > @@ -105,6 +105,27 @@ object belong to this client, in the respective memory region.
> > Default unit shall be bytes with optional unit specifiers of 'KiB' or 'MiB'
> > indicating kibi- or mebi-bytes.
> >
> > +- drm-shared-memory: <uint> [KiB|MiB]
> > +
> > +The total size of buffers that are shared with another file (ie. have more
> > +than a single handle).
> > +
> > +- drm-private-memory: <uint> [KiB|MiB]
> > +
> > +The total size of buffers that are not shared with another file.
> > +
> > +- drm-resident-memory: <uint> [KiB|MiB]
> > +
> > +The total size of buffers that are resident in system memory.
> > +
> > +- drm-purgeable-memory: <uint> [KiB|MiB]
> > +
> > +The total size of buffers that are purgeable.
> > +
> > +- drm-active-memory: <uint> [KiB|MiB]
> > +
> > +The total size of buffers that are active on one or more rings.
> > +
> > - drm-cycles-<str> <uint>
> >
> > Engine identifier string must be the same as the one specified in the
> > diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
> > index a51ff8cee049..085b01842a87 100644
> > --- a/drivers/gpu/drm/drm_file.c
> > +++ b/drivers/gpu/drm/drm_file.c
> > @@ -42,6 +42,7 @@
> > #include <drm/drm_client.h>
> > #include <drm/drm_drv.h>
> > #include <drm/drm_file.h>
> > +#include <drm/drm_gem.h>
> > #include <drm/drm_print.h>
> >
> > #include "drm_crtc_internal.h"
> > @@ -868,6 +869,84 @@ void drm_send_event(struct drm_device *dev, struct drm_pending_event *e)
> > }
> > EXPORT_SYMBOL(drm_send_event);
> >
> > +static void print_size(struct drm_printer *p, const char *stat, size_t sz)
> > +{
> > + const char *units[] = {"", " KiB", " MiB"};
> > + unsigned u;
> > +
> > + for (u = 0; u < ARRAY_SIZE(units) - 1; u++) {
> > + if (sz < SZ_1K)
> > + break;
> > + sz = div_u64(sz, SZ_1K);
> > + }
> > +
> > + drm_printf(p, "%s:\t%zu%s\n", stat, sz, units[u]);
> > +}
> > +
> > +/**
> > + * drm_print_memory_stats - Helper to print standard fdinfo memory stats
> > + * @file: the DRM file
> > + * @p: the printer to print output to
> > + * @status: callback to get driver tracked object status
> > + *
> > + * Helper to iterate over GEM objects with a handle allocated in the specified
> > + * file. The optional status callback can return additional object state which
> > + * determines which stats the object is counted against. The callback is called
> > + * under table_lock. Racing against object status change is "harmless", and the
> > + * callback can expect to not race against object destruction.
> > + */
> > +void drm_print_memory_stats(struct drm_file *file, struct drm_printer *p,
> > + enum drm_gem_object_status (*status)(struct drm_gem_object *))
> > +{
> > + struct drm_gem_object *obj;
> > + struct {
> > + size_t shared;
> > + size_t private;
> > + size_t resident;
> > + size_t purgeable;
> > + size_t active;
> > + } size = {0};
> > + int id;
> > +
> > + spin_lock(&file->table_lock);
> > + idr_for_each_entry (&file->object_idr, obj, id) {
> > + enum drm_gem_object_status s = 0;
> > +
> > + if (status)
> > + s = status(obj);
> > +
> > + if (obj->handle_count > 1) {
> > + size.shared += obj->size;
> > + } else {
> > + size.private += obj->size;
> > + }
> > +
> > + if (s & DRM_GEM_OBJECT_RESIDENT) {
> > + size.resident += obj->size;
> > + s &= ~DRM_GEM_OBJECT_PURGEABLE;
> > + }
> > +
> > + if (s & DRM_GEM_OBJECT_ACTIVE) {
> > + size.active += obj->size;
> > + s &= ~DRM_GEM_OBJECT_PURGEABLE;
> > + }
> > +
> > + if (s & DRM_GEM_OBJECT_PURGEABLE)
> > + size.purgeable += obj->size;
> > + }
> > + spin_unlock(&file->table_lock);
> > +
> > + print_size(p, "drm-shared-memory", size.shared);
> > + print_size(p, "drm-private-memory", size.private);
> > +
> > + if (status) {
> > + print_size(p, "drm-resident-memory", size.resident);
> > + print_size(p, "drm-purgeable-memory", size.purgeable);
> > + print_size(p, "drm-active-memory", size.active);
> > + }
> > +}
> > +EXPORT_SYMBOL(drm_print_memory_stats);
> > +
> > /**
> > * mock_drm_getfile - Create a new struct file for the drm device
> > * @minor: drm minor to wrap (e.g. #drm_device.primary)
> > diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
> > index 0d1f853092ab..7bd8a1374f39 100644
> > --- a/include/drm/drm_file.h
> > +++ b/include/drm/drm_file.h
> > @@ -41,6 +41,7 @@
> > struct dma_fence;
> > struct drm_file;
> > struct drm_device;
> > +struct drm_printer;
> > struct device;
> > struct file;
> >
> > @@ -438,6 +439,15 @@ void drm_send_event_timestamp_locked(struct drm_device *dev,
> > struct drm_pending_event *e,
> > ktime_t timestamp);
> >
> > +enum drm_gem_object_status {
> > + DRM_GEM_OBJECT_RESIDENT = BIT(0),
> > + DRM_GEM_OBJECT_PURGEABLE = BIT(1),
> > + DRM_GEM_OBJECT_ACTIVE = BIT(2),
> > +};
> > +
> > +void drm_print_memory_stats(struct drm_file *file, struct drm_printer *p,
> > + enum drm_gem_object_status (*status)(struct drm_gem_object *));
> > +
> > struct file *mock_drm_getfile(struct drm_minor *minor, unsigned int flags);
> >
> > #endif /* _DRM_FILE_H_ */
> > --
> > 2.39.2
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
* Re: [PATCH v2 1/2] drm: Add fdinfo memory stats
2023-04-11 15:02 ` Rob Clark
@ 2023-04-11 15:10 ` Daniel Vetter
0 siblings, 0 replies; 20+ messages in thread
From: Daniel Vetter @ 2023-04-11 15:10 UTC (permalink / raw)
To: Rob Clark
Cc: dri-devel, linux-arm-msm, freedreno, Boris Brezillon,
Tvrtko Ursulin, Christopher Healy, Emil Velikov, Rob Clark,
David Airlie, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Jonathan Corbet, open list:DOCUMENTATION, open list,
Daniel Vetter
On Tue, Apr 11, 2023 at 08:02:09AM -0700, Rob Clark wrote:
> On Tue, Apr 11, 2023 at 3:43 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> >
> > On Mon, Apr 10, 2023 at 02:06:06PM -0700, Rob Clark wrote:
> > > From: Rob Clark <robdclark@chromium.org>
> > >
> > > Add a helper to dump memory stats to fdinfo. For the things the drm
> > > core isn't aware of, use a callback.
> > >
> > > v2: Fix typos, change size units to match docs, use div_u64
> > >
> > > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > > Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
> >
> > Uh can't we wire this up by default? Having this as a per-driver opt-in
> > sounds like we'll get maximally fragmented drm fd_info, and since that's
> > uapi I don't think that's any good at all.
>
> That is the reason for the centralized documentation of the props (and
> why for this one I added a helper, rather than continuing the current
> pattern of everyone rolling their own)..
Yeah, but we all know how consistent specs are without either a common
implementation or a test suite (or better, both) :-)
It's imo good to kick off new things, but anything that multiple drivers
could/should implement, or that multiple drivers already implement, should
be common code.
I'm doing the same push for the fd_info around ctx; at least all
drivers using drm/sched should mostly just get this stuff instead of tons
of driver glue that then blows up in interesting ways because people
discover new ways to get lifetime rules wrong ...
> We _could_ (and I had contemplated) doing this all in core if (a) we
> move madv to drm_gem_object, and (b) track
> drm_gem_get_pages()/drm_gem_put_pages(). I guess neither is totally
> unreasonable, pretty much all the non-ttm/non-cma GEM drivers have
> some form of madvise ioctl and use
> drm_gem_get_pages()/drm_gem_put_pages()..
The active part shouldn't need anything new, you should be able to compute
that by looking at dma_resv (which is still ok under the spinlock, we
still have the lockless stuff to check status afaik).
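Roughly something like this (sketch only; assumes dma_resv_test_signaled()
with dma_resv_usage_rw() is ok to call from the status callback in this
context):

static enum drm_gem_object_status gem_resv_status(struct drm_gem_object *obj)
{
        enum drm_gem_object_status s = 0;

        /* any unsignalled read/write fence means the BO is still in flight */
        if (!dma_resv_test_signaled(obj->resv, dma_resv_usage_rw(true)))
                s |= DRM_GEM_OBJECT_ACTIVE;

        return s;
}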
The other bits are a bit more work, and I guess you could sort that out
for now by making the status callback optional. Long term pushing
purgeable as a concept one level up might make sense, but that's maybe a
bit too much refactoring for this.
I think just the minimal to get at least infra in place and the fully
driver-agnostic stuff computed for every gem driver would be great
already. And it makes it much easier for the next fd_info thing to become
fully generic, which hopefully motivates people to do that.
-Daniel
>
> BR,
> -R
>
> > I think it's time we have
> > - drm_fd_info
> > - rolled out to all drivers in their fops
> > - with feature checks as appropriate
> > - push the driver-specific things into a drm_driver callback
> >
> > And I guess start peopling giving a hard time for making things needless
> > driver-specifict ... there's really no reason at all this is not
> > consistent across drivers.
> > -Daniel
> >
> > > ---
> > > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > > include/drm/drm_file.h | 10 ++++
> > > 3 files changed, 110 insertions(+)
> > >
> > > diff --git a/Documentation/gpu/drm-usage-stats.rst b/Documentation/gpu/drm-usage-stats.rst
> > > index b46327356e80..b5e7802532ed 100644
> > > --- a/Documentation/gpu/drm-usage-stats.rst
> > > +++ b/Documentation/gpu/drm-usage-stats.rst
> > > @@ -105,6 +105,27 @@ object belong to this client, in the respective memory region.
> > > Default unit shall be bytes with optional unit specifiers of 'KiB' or 'MiB'
> > > indicating kibi- or mebi-bytes.
> > >
> > > +- drm-shared-memory: <uint> [KiB|MiB]
> > > +
> > > +The total size of buffers that are shared with another file (ie. have more
> > > +than a single handle).
> > > +
> > > +- drm-private-memory: <uint> [KiB|MiB]
> > > +
> > > +The total size of buffers that are not shared with another file.
> > > +
> > > +- drm-resident-memory: <uint> [KiB|MiB]
> > > +
> > > +The total size of buffers that are resident in system memory.
> > > +
> > > +- drm-purgeable-memory: <uint> [KiB|MiB]
> > > +
> > > +The total size of buffers that are purgeable.
> > > +
> > > +- drm-active-memory: <uint> [KiB|MiB]
> > > +
> > > +The total size of buffers that are active on one or more rings.
> > > +
> > > - drm-cycles-<str> <uint>
> > >
> > > Engine identifier string must be the same as the one specified in the
> > > diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
> > > index a51ff8cee049..085b01842a87 100644
> > > --- a/drivers/gpu/drm/drm_file.c
> > > +++ b/drivers/gpu/drm/drm_file.c
> > > @@ -42,6 +42,7 @@
> > > #include <drm/drm_client.h>
> > > #include <drm/drm_drv.h>
> > > #include <drm/drm_file.h>
> > > +#include <drm/drm_gem.h>
> > > #include <drm/drm_print.h>
> > >
> > > #include "drm_crtc_internal.h"
> > > @@ -868,6 +869,84 @@ void drm_send_event(struct drm_device *dev, struct drm_pending_event *e)
> > > }
> > > EXPORT_SYMBOL(drm_send_event);
> > >
> > > +static void print_size(struct drm_printer *p, const char *stat, size_t sz)
> > > +{
> > > + const char *units[] = {"", " KiB", " MiB"};
> > > + unsigned u;
> > > +
> > > + for (u = 0; u < ARRAY_SIZE(units) - 1; u++) {
> > > + if (sz < SZ_1K)
> > > + break;
> > > + sz = div_u64(sz, SZ_1K);
> > > + }
> > > +
> > > + drm_printf(p, "%s:\t%zu%s\n", stat, sz, units[u]);
> > > +}
> > > +
> > > +/**
> > > + * drm_print_memory_stats - Helper to print standard fdinfo memory stats
> > > + * @file: the DRM file
> > > + * @p: the printer to print output to
> > > + * @status: callback to get driver tracked object status
> > > + *
> > > + * Helper to iterate over GEM objects with a handle allocated in the specified
> > > + * file. The optional status callback can return additional object state which
> > > + * determines which stats the object is counted against. The callback is called
> > > + * under table_lock. Racing against object status change is "harmless", and the
> > > + * callback can expect to not race against object destruction.
> > > + */
> > > +void drm_print_memory_stats(struct drm_file *file, struct drm_printer *p,
> > > + enum drm_gem_object_status (*status)(struct drm_gem_object *))
> > > +{
> > > + struct drm_gem_object *obj;
> > > + struct {
> > > + size_t shared;
> > > + size_t private;
> > > + size_t resident;
> > > + size_t purgeable;
> > > + size_t active;
> > > + } size = {0};
> > > + int id;
> > > +
> > > + spin_lock(&file->table_lock);
> > > + idr_for_each_entry (&file->object_idr, obj, id) {
> > > + enum drm_gem_object_status s = 0;
> > > +
> > > + if (status)
> > > + s = status(obj);
> > > +
> > > + if (obj->handle_count > 1) {
> > > + size.shared += obj->size;
> > > + } else {
> > > + size.private += obj->size;
> > > + }
> > > +
> > > + if (s & DRM_GEM_OBJECT_RESIDENT) {
> > > + size.resident += obj->size;
> > > + s &= ~DRM_GEM_OBJECT_PURGEABLE;
> > > + }
> > > +
> > > + if (s & DRM_GEM_OBJECT_ACTIVE) {
> > > + size.active += obj->size;
> > > + s &= ~DRM_GEM_OBJECT_PURGEABLE;
> > > + }
> > > +
> > > + if (s & DRM_GEM_OBJECT_PURGEABLE)
> > > + size.purgeable += obj->size;
> > > + }
> > > + spin_unlock(&file->table_lock);
> > > +
> > > + print_size(p, "drm-shared-memory", size.shared);
> > > + print_size(p, "drm-private-memory", size.private);
> > > +
> > > + if (status) {
> > > + print_size(p, "drm-resident-memory", size.resident);
> > > + print_size(p, "drm-purgeable-memory", size.purgeable);
> > > + print_size(p, "drm-active-memory", size.active);
> > > + }
> > > +}
> > > +EXPORT_SYMBOL(drm_print_memory_stats);
> > > +
> > > /**
> > > * mock_drm_getfile - Create a new struct file for the drm device
> > > * @minor: drm minor to wrap (e.g. #drm_device.primary)
> > > diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
> > > index 0d1f853092ab..7bd8a1374f39 100644
> > > --- a/include/drm/drm_file.h
> > > +++ b/include/drm/drm_file.h
> > > @@ -41,6 +41,7 @@
> > > struct dma_fence;
> > > struct drm_file;
> > > struct drm_device;
> > > +struct drm_printer;
> > > struct device;
> > > struct file;
> > >
> > > @@ -438,6 +439,15 @@ void drm_send_event_timestamp_locked(struct drm_device *dev,
> > > struct drm_pending_event *e,
> > > ktime_t timestamp);
> > >
> > > +enum drm_gem_object_status {
> > > + DRM_GEM_OBJECT_RESIDENT = BIT(0),
> > > + DRM_GEM_OBJECT_PURGEABLE = BIT(1),
> > > + DRM_GEM_OBJECT_ACTIVE = BIT(2),
> > > +};
> > > +
> > > +void drm_print_memory_stats(struct drm_file *file, struct drm_printer *p,
> > > + enum drm_gem_object_status (*status)(struct drm_gem_object *));
> > > +
> > > struct file *mock_drm_getfile(struct drm_minor *minor, unsigned int flags);
> > >
> > > #endif /* _DRM_FILE_H_ */
> > > --
> > > 2.39.2
> > >
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
* Re: [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-10 21:06 [PATCH v2 0/2] drm: fdinfo memory stats Rob Clark
2023-04-10 21:06 ` [PATCH v2 1/2] drm: Add " Rob Clark
@ 2023-04-11 16:47 ` Rob Clark
2023-04-11 16:53 ` Daniel Vetter
1 sibling, 1 reply; 20+ messages in thread
From: Rob Clark @ 2023-04-11 16:47 UTC (permalink / raw)
To: dri-devel
Cc: linux-arm-msm, freedreno, Boris Brezillon, Tvrtko Ursulin,
Christopher Healy, Emil Velikov, Rob Clark,
open list:DOCUMENTATION, open list, Sean Paul
On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
>
> From: Rob Clark <robdclark@chromium.org>
>
> Similar motivation to other similar recent attempt[1]. But with an
> attempt to have some shared code for this. As well as documentation.
>
> It is probably a bit UMA-centric, I guess devices with VRAM might want
> some placement stats as well. But this seems like a reasonable start.
>
> Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> And already nvtop support: https://github.com/Syllo/nvtop/pull/204
On a related topic, I'm wondering if it would make sense to report
some more global things (temp, freq, etc) via fdinfo? Some of this,
tools like nvtop could get by trawling sysfs or other driver-specific
ways. But maybe it makes sense to have this sort of thing reported
in a standardized way (even though it isn't really per-drm_file).
BR,
-R
> [1] https://patchwork.freedesktop.org/series/112397/
>
> Rob Clark (2):
> drm: Add fdinfo memory stats
> drm/msm: Add memory stats to fdinfo
>
> Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
> drivers/gpu/drm/msm/msm_gpu.c | 2 -
> include/drm/drm_file.h | 10 ++++
> 5 files changed, 134 insertions(+), 3 deletions(-)
>
> --
> 2.39.2
>
* Re: [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-11 16:47 ` [PATCH v2 0/2] drm: " Rob Clark
@ 2023-04-11 16:53 ` Daniel Vetter
2023-04-11 17:13 ` Rob Clark
0 siblings, 1 reply; 20+ messages in thread
From: Daniel Vetter @ 2023-04-11 16:53 UTC (permalink / raw)
To: Rob Clark
Cc: dri-devel, Rob Clark, Tvrtko Ursulin, open list:DOCUMENTATION,
linux-arm-msm, Emil Velikov, Christopher Healy, open list,
Sean Paul, Boris Brezillon, freedreno
On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> >
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Similar motivation to other similar recent attempt[1]. But with an
> > attempt to have some shared code for this. As well as documentation.
> >
> > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > some placement stats as well. But this seems like a reasonable start.
> >
> > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
>
> On a related topic, I'm wondering if it would make sense to report
> some more global things (temp, freq, etc) via fdinfo? Some of this,
> tools like nvtop could get by trawling sysfs or other driver specific
> ways. But maybe it makes sense to have these sort of things reported
> in a standardized way (even though they aren't really per-drm_file)
I think that's a bit much layering violation, we'd essentially have to
reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
be in :-)
What might be needed is better glue to go from the fd or fdinfo to the
right hw device and then crawl around the hwmon in sysfs automatically. I
would not be surprised at all if we really suck on this, probably more
likely on SoC than pci gpus where at least everything should be under the
main pci sysfs device.
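Something like this userspace sketch is the glue I mean (assumes the hwmon
device actually hangs off the same parent device as the DRM node, which is
exactly the part that's probably broken on SoCs; error handling omitted):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

static void print_hwmon_dir(int drm_fd)
{
        struct stat st;
        char path[128];

        if (fstat(drm_fd, &st))         /* the DRM char device backing the fd */
                return;
        snprintf(path, sizeof(path), "/sys/dev/char/%u:%u/device/hwmon",
                 major(st.st_rdev), minor(st.st_rdev));
        printf("%s\n", path);           /* hwmonN/temp1_input etc. live below this */
}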
-Daniel
>
> BR,
> -R
>
>
> > [1] https://patchwork.freedesktop.org/series/112397/
> >
> > Rob Clark (2):
> > drm: Add fdinfo memory stats
> > drm/msm: Add memory stats to fdinfo
> >
> > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
> > drivers/gpu/drm/msm/msm_gpu.c | 2 -
> > include/drm/drm_file.h | 10 ++++
> > 5 files changed, 134 insertions(+), 3 deletions(-)
> >
> > --
> > 2.39.2
> >
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
* Re: [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-11 16:53 ` Daniel Vetter
@ 2023-04-11 17:13 ` Rob Clark
2023-04-11 17:35 ` [Freedreno] " Dmitry Baryshkov
0 siblings, 1 reply; 20+ messages in thread
From: Rob Clark @ 2023-04-11 17:13 UTC (permalink / raw)
To: Rob Clark, dri-devel, Rob Clark, Tvrtko Ursulin,
open list:DOCUMENTATION, linux-arm-msm, Emil Velikov,
Christopher Healy, open list, Sean Paul, Boris Brezillon,
freedreno
On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > >
> > > From: Rob Clark <robdclark@chromium.org>
> > >
> > > Similar motivation to other similar recent attempt[1]. But with an
> > > attempt to have some shared code for this. As well as documentation.
> > >
> > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > some placement stats as well. But this seems like a reasonable start.
> > >
> > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> >
> > On a related topic, I'm wondering if it would make sense to report
> > some more global things (temp, freq, etc) via fdinfo? Some of this,
> > tools like nvtop could get by trawling sysfs or other driver specific
> > ways. But maybe it makes sense to have these sort of things reported
> > in a standardized way (even though they aren't really per-drm_file)
>
> I think that's a bit much layering violation, we'd essentially have to
> reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> be in :-)
I guess this is true for temp (where there are thermal zones with
potentially multiple temp sensors... but I'm still digging my way through
the thermal_cooling_device stuff).
But what about freq? I think, esp. for cases where some "fw thing" is
controlling the freq, we end up needing to use gpu counters to measure
it.
> What might be needed is better glue to go from the fd or fdinfo to the
> right hw device and then crawl around the hwmon in sysfs automatically. I
> would not be surprised at all if we really suck on this, probably more
> likely on SoC than pci gpus where at least everything should be under the
> main pci sysfs device.
yeah, I *think* userspace would have to look at /proc/device-tree to
find the cooling device(s) associated with the gpu... at least I don't
see a straightforward way to figure it out just from sysfs.
BR,
-R
> -Daniel
>
> >
> > BR,
> > -R
> >
> >
> > > [1] https://patchwork.freedesktop.org/series/112397/
> > >
> > > Rob Clark (2):
> > > drm: Add fdinfo memory stats
> > > drm/msm: Add memory stats to fdinfo
> > >
> > > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > > drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
> > > drivers/gpu/drm/msm/msm_gpu.c | 2 -
> > > include/drm/drm_file.h | 10 ++++
> > > 5 files changed, 134 insertions(+), 3 deletions(-)
> > >
> > > --
> > > 2.39.2
> > >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
* Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-11 17:13 ` Rob Clark
@ 2023-04-11 17:35 ` Dmitry Baryshkov
2023-04-11 18:26 ` Daniel Vetter
2023-04-11 18:28 ` Rob Clark
0 siblings, 2 replies; 20+ messages in thread
From: Dmitry Baryshkov @ 2023-04-11 17:35 UTC (permalink / raw)
To: Rob Clark
Cc: dri-devel, Rob Clark, Tvrtko Ursulin, open list:DOCUMENTATION,
linux-arm-msm, Emil Velikov, Christopher Healy, open list,
Sean Paul, Boris Brezillon, freedreno
On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
>
> On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> >
> > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > From: Rob Clark <robdclark@chromium.org>
> > > >
> > > > Similar motivation to other similar recent attempt[1]. But with an
> > > > attempt to have some shared code for this. As well as documentation.
> > > >
> > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > some placement stats as well. But this seems like a reasonable start.
> > > >
> > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > >
> > > On a related topic, I'm wondering if it would make sense to report
> > > some more global things (temp, freq, etc) via fdinfo? Some of this,
> > > tools like nvtop could get by trawling sysfs or other driver specific
> > > ways. But maybe it makes sense to have these sort of things reported
> > > in a standardized way (even though they aren't really per-drm_file)
> >
> > I think that's a bit much layering violation, we'd essentially have to
> > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > be in :-)
>
> I guess this is true for temp (where there are thermal zones with
> potentially multiple temp sensors.. but I'm still digging my way thru
> the thermal_cooling_device stuff)
It is slightly ugly. All thermal zones and cooling devices are virtual
devices (so there is not even a connection to the particular tsens device).
One can either enumerate them by checking
/sys/class/thermal/thermal_zoneN/type or enumerate them through
/sys/class/hwmon. For cooling devices, again, the only enumeration is
through /sys/class/thermal/cooling_deviceN/type.
Probably it should be possible to push cooling devices and thermal
zones under the corresponding providers. However, I do not know if there is
a good way to correlate a cooling device (ideally a part of the GPU) with the
thermal_zone (which in our case is provided by tsens / temp_alarm
rather than by the GPU itself).
>
> But what about freq? I think, esp for cases where some "fw thing" is
> controlling the freq we end up needing to use gpu counters to measure
> the freq.
For the freq it is slightly easier: /sys/class/devfreq/*, devices are
registered under the proper parent (IOW, the GPU). So one can read
/sys/class/devfreq/3d00000.gpu/cur_freq or
/sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
However, because of the component framework usage, there is no link from
/sys/class/drm/card0
(/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
Getting all these items together in a platform-independent way would
definitely be an important but complex topic.
>
> > What might be needed is better glue to go from the fd or fdinfo to the
> > right hw device and then crawl around the hwmon in sysfs automatically. I
> > would not be surprised at all if we really suck on this, probably more
> > likely on SoC than pci gpus where at least everything should be under the
> > main pci sysfs device.
>
> yeah, I *think* userspace would have to look at /proc/device-tree to
> find the cooling device(s) associated with the gpu.. at least I don't
> see a straightforward way to figure it out just for sysfs
>
> BR,
> -R
>
> > -Daniel
> >
> > >
> > > BR,
> > > -R
> > >
> > >
> > > > [1] https://patchwork.freedesktop.org/series/112397/
> > > >
> > > > Rob Clark (2):
> > > > drm: Add fdinfo memory stats
> > > > drm/msm: Add memory stats to fdinfo
> > > >
> > > > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > > > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > > > drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
> > > > drivers/gpu/drm/msm/msm_gpu.c | 2 -
> > > > include/drm/drm_file.h | 10 ++++
> > > > 5 files changed, 134 insertions(+), 3 deletions(-)
> > > >
> > > > --
> > > > 2.39.2
> > > >
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
--
With best wishes
Dmitry
* Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-11 17:35 ` [Freedreno] " Dmitry Baryshkov
@ 2023-04-11 18:26 ` Daniel Vetter
2023-04-11 22:27 ` Dmitry Baryshkov
2023-04-11 18:28 ` Rob Clark
1 sibling, 1 reply; 20+ messages in thread
From: Daniel Vetter @ 2023-04-11 18:26 UTC (permalink / raw)
To: Dmitry Baryshkov
Cc: Rob Clark, Rob Clark, Tvrtko Ursulin, open list:DOCUMENTATION,
linux-arm-msm, Emil Velikov, Christopher Healy, dri-devel,
open list, Boris Brezillon, freedreno, Sean Paul
On Tue, Apr 11, 2023 at 08:35:48PM +0300, Dmitry Baryshkov wrote:
> On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
> >
> > On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > >
> > > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > >
> > > > > From: Rob Clark <robdclark@chromium.org>
> > > > >
> > > > > Similar motivation to other similar recent attempt[1]. But with an
> > > > > attempt to have some shared code for this. As well as documentation.
> > > > >
> > > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > > some placement stats as well. But this seems like a reasonable start.
> > > > >
> > > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > > >
> > > > On a related topic, I'm wondering if it would make sense to report
> > > > some more global things (temp, freq, etc) via fdinfo? Some of this,
> > > > tools like nvtop could get by trawling sysfs or other driver specific
> > > > ways. But maybe it makes sense to have these sort of things reported
> > > > in a standardized way (even though they aren't really per-drm_file)
> > >
> > > I think that's a bit much layering violation, we'd essentially have to
> > > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > > be in :-)
> >
> > I guess this is true for temp (where there are thermal zones with
> > potentially multiple temp sensors.. but I'm still digging my way thru
> > the thermal_cooling_device stuff)
>
> It is slightly ugly. All thermal zones and cooling devices are virtual
> devices (so, even no connection to the particular tsens device). One
> can either enumerate them by checking
> /sys/class/thermal/thermal_zoneN/type or enumerate them through
> /sys/class/hwmon. For cooling devices again the only enumeration is
> through /sys/class/thermal/cooling_deviceN/type.
>
> Probably it should be possible to push cooling devices and thermal
> zones under corresponding providers. However I do not know if there is
> a good way to correlate cooling device (ideally a part of GPU) to the
> thermal_zone (which in our case is provided by tsens / temp_alarm
> rather than GPU itself).
There aren't even sysfs links to connect the pieces in both ways?
> > But what about freq? I think, esp for cases where some "fw thing" is
> > controlling the freq we end up needing to use gpu counters to measure
> > the freq.
>
> For the freq it is slightly easier: /sys/class/devfreq/*, devices are
> registered under proper parent (IOW, GPU). So one can read
> /sys/class/devfreq/3d00000.gpu/cur_freq or
> /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
>
> However because of the components usage, there is no link from
> /sys/class/drm/card0
> (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
> to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
Hm ... do we need to make component more visible in sysfs, with _looooots_
of links? Atm it's just not even there.
> Getting all these items together in a platform-independent way would
> be definitely an important but complex topic.
Yeah this sounds like some work. But also sounds like it's all generic
issues (thermal zones above and component here) that really should be
fixed at that level?
Cheers, Daniel
> > > What might be needed is better glue to go from the fd or fdinfo to the
> > > right hw device and then crawl around the hwmon in sysfs automatically. I
> > > would not be surprised at all if we really suck on this, probably more
> > > likely on SoC than pci gpus where at least everything should be under the
> > > main pci sysfs device.
> >
> > yeah, I *think* userspace would have to look at /proc/device-tree to
> > find the cooling device(s) associated with the gpu.. at least I don't
> > see a straightforward way to figure it out just for sysfs
> >
> > BR,
> > -R
> >
> > > -Daniel
> > >
> > > >
> > > > BR,
> > > > -R
> > > >
> > > >
> > > > > [1] https://patchwork.freedesktop.org/series/112397/
> > > > >
> > > > > Rob Clark (2):
> > > > > drm: Add fdinfo memory stats
> > > > > drm/msm: Add memory stats to fdinfo
> > > > >
> > > > > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > > > > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > > > > drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
> > > > > drivers/gpu/drm/msm/msm_gpu.c | 2 -
> > > > > include/drm/drm_file.h | 10 ++++
> > > > > 5 files changed, 134 insertions(+), 3 deletions(-)
> > > > >
> > > > > --
> > > > > 2.39.2
> > > > >
> > >
> > > --
> > > Daniel Vetter
> > > Software Engineer, Intel Corporation
> > > http://blog.ffwll.ch
>
>
>
> --
> With best wishes
> Dmitry
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
* Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-11 17:35 ` [Freedreno] " Dmitry Baryshkov
2023-04-11 18:26 ` Daniel Vetter
@ 2023-04-11 18:28 ` Rob Clark
2023-04-11 22:36 ` Dmitry Baryshkov
1 sibling, 1 reply; 20+ messages in thread
From: Rob Clark @ 2023-04-11 18:28 UTC (permalink / raw)
To: Dmitry Baryshkov
Cc: dri-devel, Rob Clark, Tvrtko Ursulin, open list:DOCUMENTATION,
linux-arm-msm, Emil Velikov, Christopher Healy, open list,
Sean Paul, Boris Brezillon, freedreno
On Tue, Apr 11, 2023 at 10:36 AM Dmitry Baryshkov
<dmitry.baryshkov@linaro.org> wrote:
>
> On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
> >
> > On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > >
> > > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > >
> > > > > From: Rob Clark <robdclark@chromium.org>
> > > > >
> > > > > Similar motivation to other similar recent attempt[1]. But with an
> > > > > attempt to have some shared code for this. As well as documentation.
> > > > >
> > > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > > some placement stats as well. But this seems like a reasonable start.
> > > > >
> > > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > > >
> > > > On a related topic, I'm wondering if it would make sense to report
> > > > some more global things (temp, freq, etc) via fdinfo? Some of this,
> > > > tools like nvtop could get by trawling sysfs or other driver specific
> > > > ways. But maybe it makes sense to have these sort of things reported
> > > > in a standardized way (even though they aren't really per-drm_file)
> > >
> > > I think that's a bit much layering violation, we'd essentially have to
> > > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > > be in :-)
> >
> > I guess this is true for temp (where there are thermal zones with
> > potentially multiple temp sensors.. but I'm still digging my way thru
> > the thermal_cooling_device stuff)
>
> It is slightly ugly. All thermal zones and cooling devices are virtual
> devices (so, even no connection to the particular tsens device). One
> can either enumerate them by checking
> /sys/class/thermal/thermal_zoneN/type or enumerate them through
> /sys/class/hwmon. For cooling devices again the only enumeration is
> through /sys/class/thermal/cooling_deviceN/type.
>
> Probably it should be possible to push cooling devices and thermal
> zones under corresponding providers. However I do not know if there is
> a good way to correlate cooling device (ideally a part of GPU) to the
> thermal_zone (which in our case is provided by tsens / temp_alarm
> rather than GPU itself).
>
> >
> > But what about freq? I think, esp for cases where some "fw thing" is
> > controlling the freq we end up needing to use gpu counters to measure
> > the freq.
>
> For the freq it is slightly easier: /sys/class/devfreq/*, devices are
> registered under proper parent (IOW, GPU). So one can read
> /sys/class/devfreq/3d00000.gpu/cur_freq or
> /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
>
> However because of the components usage, there is no link from
> /sys/class/drm/card0
> (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
> to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
>
> Getting all these items together in a platform-independent way would
> be definitely an important but complex topic.
But I don't believe any of the PCI GPUs use devfreq ;-)
And also, you can't expect the CPU to actually know the freq when fw
is the one controlling the freq. We can, currently, have a reasonable
approximation from devfreq, but that stops if IFPC is implemented. And
other GPUs have even less direct control. So freq is a thing that I
don't think we should try to get from "common frameworks".
BR,
-R
> >
> > > What might be needed is better glue to go from the fd or fdinfo to the
> > > right hw device and then crawl around the hwmon in sysfs automatically. I
> > > would not be surprised at all if we really suck on this, probably more
> > > likely on SoC than pci gpus where at least everything should be under the
> > > main pci sysfs device.
> >
> > yeah, I *think* userspace would have to look at /proc/device-tree to
> > find the cooling device(s) associated with the gpu.. at least I don't
> > see a straightforward way to figure it out just for sysfs
> >
> > BR,
> > -R
> >
> > > -Daniel
> > >
> > > >
> > > > BR,
> > > > -R
> > > >
> > > >
> > > > > [1] https://patchwork.freedesktop.org/series/112397/
> > > > >
> > > > > Rob Clark (2):
> > > > > drm: Add fdinfo memory stats
> > > > > drm/msm: Add memory stats to fdinfo
> > > > >
> > > > > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > > > > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > > > > drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
> > > > > drivers/gpu/drm/msm/msm_gpu.c | 2 -
> > > > > include/drm/drm_file.h | 10 ++++
> > > > > 5 files changed, 134 insertions(+), 3 deletions(-)
> > > > >
> > > > > --
> > > > > 2.39.2
> > > > >
> > >
> > > --
> > > Daniel Vetter
> > > Software Engineer, Intel Corporation
> > > http://blog.ffwll.ch
>
>
>
> --
> With best wishes
> Dmitry
* Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-11 18:26 ` Daniel Vetter
@ 2023-04-11 22:27 ` Dmitry Baryshkov
0 siblings, 0 replies; 20+ messages in thread
From: Dmitry Baryshkov @ 2023-04-11 22:27 UTC (permalink / raw)
To: Rob Clark, Rob Clark, Tvrtko Ursulin, open list:DOCUMENTATION,
linux-arm-msm, Emil Velikov, Christopher Healy, dri-devel,
open list, Boris Brezillon, freedreno, Sean Paul
On 11/04/2023 21:26, Daniel Vetter wrote:
> On Tue, Apr 11, 2023 at 08:35:48PM +0300, Dmitry Baryshkov wrote:
>> On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
>>>
>>> On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>>>>
>>>> On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
>>>>> On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
>>>>>>
>>>>>> From: Rob Clark <robdclark@chromium.org>
>>>>>>
>>>>>> Similar motivation to other similar recent attempt[1]. But with an
>>>>>> attempt to have some shared code for this. As well as documentation.
>>>>>>
>>>>>> It is probably a bit UMA-centric, I guess devices with VRAM might want
>>>>>> some placement stats as well. But this seems like a reasonable start.
>>>>>>
>>>>>> Basic gputop support: https://patchwork.freedesktop.org/series/116236/
>>>>>> And already nvtop support: https://github.com/Syllo/nvtop/pull/204
>>>>>
>>>>> On a related topic, I'm wondering if it would make sense to report
>>>>> some more global things (temp, freq, etc) via fdinfo? Some of this,
>>>>> tools like nvtop could get by trawling sysfs or other driver specific
>>>>> ways. But maybe it makes sense to have these sort of things reported
>>>>> in a standardized way (even though they aren't really per-drm_file)
>>>>
>>>> I think that's a bit much layering violation, we'd essentially have to
>>>> reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
>>>> be in :-)
>>>
>>> I guess this is true for temp (where there are thermal zones with
>>> potentially multiple temp sensors.. but I'm still digging my way thru
>>> the thermal_cooling_device stuff)
>>
>> It is slightly ugly. All thermal zones and cooling devices are virtual
>> devices (so, even no connection to the particular tsens device). One
>> can either enumerate them by checking
>> /sys/class/thermal/thermal_zoneN/type or enumerate them through
>> /sys/class/hwmon. For cooling devices again the only enumeration is
>> through /sys/class/thermal/cooling_deviceN/type.
>>
>> Probably it should be possible to push cooling devices and thermal
>> zones under corresponding providers. However I do not know if there is
>> a good way to correlate cooling device (ideally a part of GPU) to the
>> thermal_zone (which in our case is provided by tsens / temp_alarm
>> rather than GPU itself).
>
> There's not even sysfs links to connect the pieces in both ways?
I missed them in the most obvious place:
/sys/class/thermal/thermal_zone1/cdev0 -> ../cooling_device0
So, there is a link from thermal zone to cooling device.
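A tool can therefore enumerate the cooling devices bound to a zone just by
following those cdevN links. Very rough userspace sketch (the path layout is
what I see here, not a documented ABI; error handling trimmed):

#include <dirent.h>
#include <stdio.h>
#include <string.h>

static void list_zone_cooling_devices(const char *zone)
{
	DIR *d = opendir(zone);
	struct dirent *de;

	if (!d)
		return;

	while ((de = readdir(d))) {
		char path[512], type[64];
		FILE *f;

		/* cdevN is a symlink to the cooling device; skip the
		 * cdevN_trip_point / cdevN_weight attributes */
		if (strncmp(de->d_name, "cdev", 4) || strchr(de->d_name, '_'))
			continue;

		snprintf(path, sizeof(path), "%s/%s/type", zone, de->d_name);
		f = fopen(path, "r");
		if (f && fgets(type, sizeof(type), f))
			printf("%s: %s", de->d_name, type);
		if (f)
			fclose(f);
	}
	closedir(d);
}

/* e.g. list_zone_cooling_devices("/sys/class/thermal/thermal_zone1"); */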
>
>>> But what about freq? I think, esp for cases where some "fw thing" is
>>> controlling the freq we end up needing to use gpu counters to measure
>>> the freq.
>>
>> For the freq it is slightly easier: /sys/class/devfreq/*, devices are
>> registered under proper parent (IOW, GPU). So one can read
>> /sys/class/devfreq/3d00000.gpu/cur_freq or
>> /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
>>
>> However because of the components usage, there is no link from
>> /sys/class/drm/card0
>> (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
>> to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
>
> Hm ... do we need to make component more visible in sysfs, with _looooots_
> of links? Atm it's just not even there.
Maybe. Or maybe we should use DPU (the component master and a parent of
drm/card0) as devfreq parent too.
>
>> Getting all these items together in a platform-independent way would
>> be definitely an important but complex topic.
>
> Yeah this sounds like some work. But also sounds like it's all generic
> issues (thermal zones above and component here) that really should be
> fixed at that level?
>
> Cheers, Daniel
>
>
>>>> What might be needed is better glue to go from the fd or fdinfo to the
>>>> right hw device and then crawl around the hwmon in sysfs automatically. I
>>>> would not be surprised at all if we really suck on this, probably more
>>>> likely on SoC than pci gpus where at least everything should be under the
>>>> main pci sysfs device.
>>>
>>> yeah, I *think* userspace would have to look at /proc/device-tree to
>>> find the cooling device(s) associated with the gpu.. at least I don't
>>> see a straightforward way to figure it out just for sysfs
>>>
>>> BR,
>>> -R
>>>
>>>> -Daniel
>>>>
>>>>>
>>>>> BR,
>>>>> -R
>>>>>
>>>>>
>>>>>> [1] https://patchwork.freedesktop.org/series/112397/
>>>>>>
>>>>>> Rob Clark (2):
>>>>>> drm: Add fdinfo memory stats
>>>>>> drm/msm: Add memory stats to fdinfo
>>>>>>
>>>>>> Documentation/gpu/drm-usage-stats.rst | 21 +++++++
>>>>>> drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
>>>>>> drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
>>>>>> drivers/gpu/drm/msm/msm_gpu.c | 2 -
>>>>>> include/drm/drm_file.h | 10 ++++
>>>>>> 5 files changed, 134 insertions(+), 3 deletions(-)
>>>>>>
>>>>>> --
>>>>>> 2.39.2
>>>>>>
>>>>
>>>> --
>>>> Daniel Vetter
>>>> Software Engineer, Intel Corporation
>>>> http://blog.ffwll.ch
>>
>>
>>
>> --
>> With best wishes
>> Dmitry
>
--
With best wishes
Dmitry
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-11 18:28 ` Rob Clark
@ 2023-04-11 22:36 ` Dmitry Baryshkov
2023-04-12 8:11 ` Daniel Vetter
0 siblings, 1 reply; 20+ messages in thread
From: Dmitry Baryshkov @ 2023-04-11 22:36 UTC (permalink / raw)
To: Rob Clark
Cc: dri-devel, Rob Clark, Tvrtko Ursulin, open list:DOCUMENTATION,
linux-arm-msm, Emil Velikov, Christopher Healy, open list,
Sean Paul, Boris Brezillon, freedreno
On 11/04/2023 21:28, Rob Clark wrote:
> On Tue, Apr 11, 2023 at 10:36 AM Dmitry Baryshkov
> <dmitry.baryshkov@linaro.org> wrote:
>>
>> On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
>>>
>>> On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>>>>
>>>> On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
>>>>> On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
>>>>>>
>>>>>> From: Rob Clark <robdclark@chromium.org>
>>>>>>
>>>>>> Similar motivation to other similar recent attempt[1]. But with an
>>>>>> attempt to have some shared code for this. As well as documentation.
>>>>>>
>>>>>> It is probably a bit UMA-centric, I guess devices with VRAM might want
>>>>>> some placement stats as well. But this seems like a reasonable start.
>>>>>>
>>>>>> Basic gputop support: https://patchwork.freedesktop.org/series/116236/
>>>>>> And already nvtop support: https://github.com/Syllo/nvtop/pull/204
>>>>>
>>>>> On a related topic, I'm wondering if it would make sense to report
>>>>> some more global things (temp, freq, etc) via fdinfo? Some of this,
>>>>> tools like nvtop could get by trawling sysfs or other driver specific
>>>>> ways. But maybe it makes sense to have these sort of things reported
>>>>> in a standardized way (even though they aren't really per-drm_file)
>>>>
>>>> I think that's a bit much layering violation, we'd essentially have to
>>>> reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
>>>> be in :-)
>>>
>>> I guess this is true for temp (where there are thermal zones with
>>> potentially multiple temp sensors.. but I'm still digging my way thru
>>> the thermal_cooling_device stuff)
>>
>> It is slightly ugly. All thermal zones and cooling devices are virtual
>> devices (so, even no connection to the particular tsens device). One
>> can either enumerate them by checking
>> /sys/class/thermal/thermal_zoneN/type or enumerate them through
>> /sys/class/hwmon. For cooling devices again the only enumeration is
>> through /sys/class/thermal/cooling_deviceN/type.
>>
>> Probably it should be possible to push cooling devices and thermal
>> zones under corresponding providers. However I do not know if there is
>> a good way to correlate cooling device (ideally a part of GPU) to the
>> thermal_zone (which in our case is provided by tsens / temp_alarm
>> rather than GPU itself).
>>
>>>
>>> But what about freq? I think, esp for cases where some "fw thing" is
>>> controlling the freq we end up needing to use gpu counters to measure
>>> the freq.
>>
>> For the freq it is slightly easier: /sys/class/devfreq/*, devices are
>> registered under proper parent (IOW, GPU). So one can read
>> /sys/class/devfreq/3d00000.gpu/cur_freq or
>> /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
>>
>> However because of the components usage, there is no link from
>> /sys/class/drm/card0
>> (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
>> to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
>>
>> Getting all these items together in a platform-independent way would
>> be definitely an important but complex topic.
>
> But I don't believe any of the pci gpu's use devfreq ;-)
>
> And also, you can't expect the CPU to actually know the freq when fw
> is the one controlling freq. We can, currently, have a reasonable
> approximation from devfreq but that stops if IFPC is implemented. And
> other GPUs have even less direct control. So freq is a thing that I
> don't think we should try to get from "common frameworks"
I think it might be useful to add another passive devfreq governor type
for external frequencies. This way we can use the same interface to
export non-CPU-controlled frequencies.
>
> BR,
> -R
>
>>>
>>>> What might be needed is better glue to go from the fd or fdinfo to the
>>>> right hw device and then crawl around the hwmon in sysfs automatically. I
>>>> would not be surprised at all if we really suck on this, probably more
>>>> likely on SoC than pci gpus where at least everything should be under the
>>>> main pci sysfs device.
>>>
>>> yeah, I *think* userspace would have to look at /proc/device-tree to
>>> find the cooling device(s) associated with the gpu.. at least I don't
>>> see a straightforward way to figure it out just for sysfs
>>>
>>> BR,
>>> -R
>>>
>>>> -Daniel
>>>>
>>>>>
>>>>> BR,
>>>>> -R
>>>>>
>>>>>
>>>>>> [1] https://patchwork.freedesktop.org/series/112397/
>>>>>>
>>>>>> Rob Clark (2):
>>>>>> drm: Add fdinfo memory stats
>>>>>> drm/msm: Add memory stats to fdinfo
>>>>>>
>>>>>> Documentation/gpu/drm-usage-stats.rst | 21 +++++++
>>>>>> drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
>>>>>> drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
>>>>>> drivers/gpu/drm/msm/msm_gpu.c | 2 -
>>>>>> include/drm/drm_file.h | 10 ++++
>>>>>> 5 files changed, 134 insertions(+), 3 deletions(-)
>>>>>>
>>>>>> --
>>>>>> 2.39.2
>>>>>>
>>>>
>>>> --
>>>> Daniel Vetter
>>>> Software Engineer, Intel Corporation
>>>> http://blog.ffwll.ch
>>
>>
>>
>> --
>> With best wishes
>> Dmitry
--
With best wishes
Dmitry
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-11 22:36 ` Dmitry Baryshkov
@ 2023-04-12 8:11 ` Daniel Vetter
2023-04-12 12:47 ` Rodrigo Vivi
0 siblings, 1 reply; 20+ messages in thread
From: Daniel Vetter @ 2023-04-12 8:11 UTC (permalink / raw)
To: Dmitry Baryshkov
Cc: Rob Clark, dri-devel, Rob Clark, Tvrtko Ursulin,
open list:DOCUMENTATION, linux-arm-msm, Emil Velikov,
Christopher Healy, open list, Sean Paul, Boris Brezillon,
freedreno
On Wed, Apr 12, 2023 at 01:36:52AM +0300, Dmitry Baryshkov wrote:
> On 11/04/2023 21:28, Rob Clark wrote:
> > On Tue, Apr 11, 2023 at 10:36 AM Dmitry Baryshkov
> > <dmitry.baryshkov@linaro.org> wrote:
> > >
> > > On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > >
> > > > > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > > > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > >
> > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > >
> > > > > > > Similar motivation to other similar recent attempt[1]. But with an
> > > > > > > attempt to have some shared code for this. As well as documentation.
> > > > > > >
> > > > > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > > > > some placement stats as well. But this seems like a reasonable start.
> > > > > > >
> > > > > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > > > > >
> > > > > > On a related topic, I'm wondering if it would make sense to report
> > > > > > some more global things (temp, freq, etc) via fdinfo? Some of this,
> > > > > > tools like nvtop could get by trawling sysfs or other driver specific
> > > > > > ways. But maybe it makes sense to have these sort of things reported
> > > > > > in a standardized way (even though they aren't really per-drm_file)
> > > > >
> > > > > I think that's a bit much layering violation, we'd essentially have to
> > > > > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > > > > be in :-)
> > > >
> > > > I guess this is true for temp (where there are thermal zones with
> > > > potentially multiple temp sensors.. but I'm still digging my way thru
> > > > the thermal_cooling_device stuff)
> > >
> > > It is slightly ugly. All thermal zones and cooling devices are virtual
> > > devices (so, even no connection to the particular tsens device). One
> > > can either enumerate them by checking
> > > /sys/class/thermal/thermal_zoneN/type or enumerate them through
> > > /sys/class/hwmon. For cooling devices again the only enumeration is
> > > through /sys/class/thermal/cooling_deviceN/type.
> > >
> > > Probably it should be possible to push cooling devices and thermal
> > > zones under corresponding providers. However I do not know if there is
> > > a good way to correlate cooling device (ideally a part of GPU) to the
> > > thermal_zone (which in our case is provided by tsens / temp_alarm
> > > rather than GPU itself).
> > >
> > > >
> > > > But what about freq? I think, esp for cases where some "fw thing" is
> > > > controlling the freq we end up needing to use gpu counters to measure
> > > > the freq.
> > >
> > > For the freq it is slightly easier: /sys/class/devfreq/*, devices are
> > > registered under proper parent (IOW, GPU). So one can read
> > > /sys/class/devfreq/3d00000.gpu/cur_freq or
> > > /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
> > >
> > > However because of the components usage, there is no link from
> > > /sys/class/drm/card0
> > > (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
> > > to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
> > >
> > > Getting all these items together in a platform-independent way would
> > > be definitely an important but complex topic.
> >
> > But I don't believe any of the pci gpu's use devfreq ;-)
> >
> > And also, you can't expect the CPU to actually know the freq when fw
> > is the one controlling freq. We can, currently, have a reasonable
> > approximation from devfreq but that stops if IFPC is implemented. And
> > other GPUs have even less direct control. So freq is a thing that I
> > don't think we should try to get from "common frameworks"
>
> I think it might be useful to add another passive devfreq governor type for
> external frequencies. This way we can use the same interface to export
> non-CPU-controlled frequencies.
Yeah this sounds like a decent idea to me too. It might also solve the fun
of various pci devices having very non-standard freq controls in sysfs
(looking at least at i915 here ...)
I guess it would minimally be a good idea if we could document this, or
maybe have a reference implementation in nvtop or whatever the cool thing
is right now.
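For the devfreq side, something like the sketch below could go into such a
reference implementation. It assumes the devfreq node hangs off the drm
device's sysfs parent, which as discussed is not the case for msm today
because of the component setup:

#include <glob.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

static int drm_fd_cur_freq(int fd, unsigned long *freq)
{
	char pattern[256];
	struct stat st;
	glob_t g;
	FILE *f;
	int ret = -1;

	if (fstat(fd, &st) || !S_ISCHR(st.st_mode))
		return -1;

	/* /sys/dev/char/<maj>:<min> is the drm card/render node, and
	 * "device" is its parent; look for a devfreq child there */
	snprintf(pattern, sizeof(pattern),
		 "/sys/dev/char/%u:%u/device/devfreq/*/cur_freq",
		 major(st.st_rdev), minor(st.st_rdev));

	if (glob(pattern, 0, NULL, &g) != 0)
		return -1;

	f = fopen(g.gl_pathv[0], "r");
	if (f && fscanf(f, "%lu", freq) == 1)
		ret = 0;
	if (f)
		fclose(f);
	globfree(&g);
	return ret;
}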
-Daniel
>
> >
> > BR,
> > -R
> >
> > > >
> > > > > What might be needed is better glue to go from the fd or fdinfo to the
> > > > > right hw device and then crawl around the hwmon in sysfs automatically. I
> > > > > would not be surprised at all if we really suck on this, probably more
> > > > > likely on SoC than pci gpus where at least everything should be under the
> > > > > main pci sysfs device.
> > > >
> > > > yeah, I *think* userspace would have to look at /proc/device-tree to
> > > > find the cooling device(s) associated with the gpu.. at least I don't
> > > > see a straightforward way to figure it out just for sysfs
> > > >
> > > > BR,
> > > > -R
> > > >
> > > > > -Daniel
> > > > >
> > > > > >
> > > > > > BR,
> > > > > > -R
> > > > > >
> > > > > >
> > > > > > > [1] https://patchwork.freedesktop.org/series/112397/
> > > > > > >
> > > > > > > Rob Clark (2):
> > > > > > > drm: Add fdinfo memory stats
> > > > > > > drm/msm: Add memory stats to fdinfo
> > > > > > >
> > > > > > > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > > > > > > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > > > > > > drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
> > > > > > > drivers/gpu/drm/msm/msm_gpu.c | 2 -
> > > > > > > include/drm/drm_file.h | 10 ++++
> > > > > > > 5 files changed, 134 insertions(+), 3 deletions(-)
> > > > > > >
> > > > > > > --
> > > > > > > 2.39.2
> > > > > > >
> > > > >
> > > > > --
> > > > > Daniel Vetter
> > > > > Software Engineer, Intel Corporation
> > > > > http://blog.ffwll.ch
> > >
> > >
> > >
> > > --
> > > With best wishes
> > > Dmitry
>
> --
> With best wishes
> Dmitry
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-12 8:11 ` Daniel Vetter
@ 2023-04-12 12:47 ` Rodrigo Vivi
2023-04-12 20:09 ` Rob Clark
0 siblings, 1 reply; 20+ messages in thread
From: Rodrigo Vivi @ 2023-04-12 12:47 UTC (permalink / raw)
To: Dmitry Baryshkov, Rob Clark, dri-devel, Rob Clark, Tvrtko Ursulin,
open list:DOCUMENTATION, linux-arm-msm, Emil Velikov,
Christopher Healy, open list, Sean Paul, Boris Brezillon,
freedreno
On Wed, Apr 12, 2023 at 10:11:32AM +0200, Daniel Vetter wrote:
> On Wed, Apr 12, 2023 at 01:36:52AM +0300, Dmitry Baryshkov wrote:
> > On 11/04/2023 21:28, Rob Clark wrote:
> > > On Tue, Apr 11, 2023 at 10:36 AM Dmitry Baryshkov
> > > <dmitry.baryshkov@linaro.org> wrote:
> > > >
> > > > On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
> > > > >
> > > > > On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > > >
> > > > > > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > > > > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > >
> > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > >
> > > > > > > > Similar motivation to other similar recent attempt[1]. But with an
> > > > > > > > attempt to have some shared code for this. As well as documentation.
> > > > > > > >
> > > > > > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > > > > > some placement stats as well. But this seems like a reasonable start.
> > > > > > > >
> > > > > > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > > > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > > > > > >
> > > > > > > On a related topic, I'm wondering if it would make sense to report
> > > > > > > some more global things (temp, freq, etc) via fdinfo? Some of this,
> > > > > > > tools like nvtop could get by trawling sysfs or other driver specific
> > > > > > > ways. But maybe it makes sense to have these sort of things reported
> > > > > > > in a standardized way (even though they aren't really per-drm_file)
> > > > > >
> > > > > > I think that's a bit much layering violation, we'd essentially have to
> > > > > > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > > > > > be in :-)
> > > > >
> > > > > I guess this is true for temp (where there are thermal zones with
> > > > > potentially multiple temp sensors.. but I'm still digging my way thru
> > > > > the thermal_cooling_device stuff)
> > > >
> > > > It is slightly ugly. All thermal zones and cooling devices are virtual
> > > > devices (so, even no connection to the particular tsens device). One
> > > > can either enumerate them by checking
> > > > /sys/class/thermal/thermal_zoneN/type or enumerate them through
> > > > /sys/class/hwmon. For cooling devices again the only enumeration is
> > > > through /sys/class/thermal/cooling_deviceN/type.
> > > >
> > > > Probably it should be possible to push cooling devices and thermal
> > > > zones under corresponding providers. However I do not know if there is
> > > > a good way to correlate cooling device (ideally a part of GPU) to the
> > > > thermal_zone (which in our case is provided by tsens / temp_alarm
> > > > rather than GPU itself).
> > > >
> > > > >
> > > > > But what about freq? I think, esp for cases where some "fw thing" is
> > > > > controlling the freq we end up needing to use gpu counters to measure
> > > > > the freq.
> > > >
> > > > For the freq it is slightly easier: /sys/class/devfreq/*, devices are
> > > > registered under proper parent (IOW, GPU). So one can read
> > > > /sys/class/devfreq/3d00000.gpu/cur_freq or
> > > > /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
> > > >
> > > > However because of the components usage, there is no link from
> > > > /sys/class/drm/card0
> > > > (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
> > > > to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
> > > >
> > > > Getting all these items together in a platform-independent way would
> > > > be definitely an important but complex topic.
> > >
> > > But I don't believe any of the pci gpu's use devfreq ;-)
> > >
> > > And also, you can't expect the CPU to actually know the freq when fw
> > > is the one controlling freq. We can, currently, have a reasonable
> > > approximation from devfreq but that stops if IFPC is implemented. And
> > > other GPUs have even less direct control. So freq is a thing that I
> > > don't think we should try to get from "common frameworks"
> >
> > I think it might be useful to add another passive devfreq governor type for
> > external frequencies. This way we can use the same interface to export
> > non-CPU-controlled frequencies.
>
> Yeah this sounds like a decent idea to me too. It might also solve the fun
> of various pci devices having very non-standard freq controls in sysfs
> (looking at least at i915 here ...)
I also like the idea of having some common infrastructure for the GPU freq.
hwmon has a good infrastructure, but it is more focused on individual
monitoring devices and not very welcoming to embedded monitoring and control.
I still want to check whether at least some freq control could be aligned
there.
Another thing that complicates this is that Intel GPUs have multiple
frequency domains and controls with multipliers that are not very standard
or easy to integrate.
On a quick glance devfreq seems neat because it aligns with cpufreq and its
governors. But again it would be hard to align with the multiple domains
and controls. But it deserves a look.
I will take a look at both fronts for Xe: hwmon and devfreq. Right now on
Xe we have a lot fewer controls than i915, but I can imagine soon there
will be requirements that make that grow, and I fear that we end up just
like i915. So I will take a look before that happens.
>
> I guess it would minimally be a good idea if we could document this, or
> maybe have a reference implementation in nvtop or whatever the cool thing
> is rn.
> -Daniel
>
> >
> > >
> > > BR,
> > > -R
> > >
> > > > >
> > > > > > What might be needed is better glue to go from the fd or fdinfo to the
> > > > > > right hw device and then crawl around the hwmon in sysfs automatically. I
> > > > > > would not be surprised at all if we really suck on this, probably more
> > > > > > likely on SoC than pci gpus where at least everything should be under the
> > > > > > main pci sysfs device.
> > > > >
> > > > > yeah, I *think* userspace would have to look at /proc/device-tree to
> > > > > find the cooling device(s) associated with the gpu.. at least I don't
> > > > > see a straightforward way to figure it out just for sysfs
> > > > >
> > > > > BR,
> > > > > -R
> > > > >
> > > > > > -Daniel
> > > > > >
> > > > > > >
> > > > > > > BR,
> > > > > > > -R
> > > > > > >
> > > > > > >
> > > > > > > > [1] https://patchwork.freedesktop.org/series/112397/
> > > > > > > >
> > > > > > > > Rob Clark (2):
> > > > > > > > drm: Add fdinfo memory stats
> > > > > > > > drm/msm: Add memory stats to fdinfo
> > > > > > > >
> > > > > > > > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > > > > > > > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > > > > > > > drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
> > > > > > > > drivers/gpu/drm/msm/msm_gpu.c | 2 -
> > > > > > > > include/drm/drm_file.h | 10 ++++
> > > > > > > > 5 files changed, 134 insertions(+), 3 deletions(-)
> > > > > > > >
> > > > > > > > --
> > > > > > > > 2.39.2
> > > > > > > >
> > > > > >
> > > > > > --
> > > > > > Daniel Vetter
> > > > > > Software Engineer, Intel Corporation
> > > > > > http://blog.ffwll.ch
> > > >
> > > >
> > > >
> > > > --
> > > > With best wishes
> > > > Dmitry
> >
> > --
> > With best wishes
> > Dmitry
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-12 12:47 ` Rodrigo Vivi
@ 2023-04-12 20:09 ` Rob Clark
2023-04-12 20:19 ` Dmitry Baryshkov
2023-04-12 20:23 ` Alex Deucher
0 siblings, 2 replies; 20+ messages in thread
From: Rob Clark @ 2023-04-12 20:09 UTC (permalink / raw)
To: Rodrigo Vivi
Cc: Dmitry Baryshkov, dri-devel, Rob Clark, Tvrtko Ursulin,
open list:DOCUMENTATION, linux-arm-msm, Emil Velikov,
Christopher Healy, open list, Sean Paul, Boris Brezillon,
freedreno
On Wed, Apr 12, 2023 at 5:47 AM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
>
> On Wed, Apr 12, 2023 at 10:11:32AM +0200, Daniel Vetter wrote:
> > On Wed, Apr 12, 2023 at 01:36:52AM +0300, Dmitry Baryshkov wrote:
> > > On 11/04/2023 21:28, Rob Clark wrote:
> > > > On Tue, Apr 11, 2023 at 10:36 AM Dmitry Baryshkov
> > > > <dmitry.baryshkov@linaro.org> wrote:
> > > > >
> > > > > On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
> > > > > >
> > > > > > On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > > > >
> > > > > > > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > > > > > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > > >
> > > > > > > > > Similar motivation to other similar recent attempt[1]. But with an
> > > > > > > > > attempt to have some shared code for this. As well as documentation.
> > > > > > > > >
> > > > > > > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > > > > > > some placement stats as well. But this seems like a reasonable start.
> > > > > > > > >
> > > > > > > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > > > > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > > > > > > >
> > > > > > > > On a related topic, I'm wondering if it would make sense to report
> > > > > > > > some more global things (temp, freq, etc) via fdinfo? Some of this,
> > > > > > > > tools like nvtop could get by trawling sysfs or other driver specific
> > > > > > > > ways. But maybe it makes sense to have these sort of things reported
> > > > > > > > in a standardized way (even though they aren't really per-drm_file)
> > > > > > >
> > > > > > > I think that's a bit much layering violation, we'd essentially have to
> > > > > > > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > > > > > > be in :-)
> > > > > >
> > > > > > I guess this is true for temp (where there are thermal zones with
> > > > > > potentially multiple temp sensors.. but I'm still digging my way thru
> > > > > > the thermal_cooling_device stuff)
> > > > >
> > > > > It is slightly ugly. All thermal zones and cooling devices are virtual
> > > > > devices (so, even no connection to the particular tsens device). One
> > > > > can either enumerate them by checking
> > > > > /sys/class/thermal/thermal_zoneN/type or enumerate them through
> > > > > /sys/class/hwmon. For cooling devices again the only enumeration is
> > > > > through /sys/class/thermal/cooling_deviceN/type.
> > > > >
> > > > > Probably it should be possible to push cooling devices and thermal
> > > > > zones under corresponding providers. However I do not know if there is
> > > > > a good way to correlate cooling device (ideally a part of GPU) to the
> > > > > thermal_zone (which in our case is provided by tsens / temp_alarm
> > > > > rather than GPU itself).
> > > > >
> > > > > >
> > > > > > But what about freq? I think, esp for cases where some "fw thing" is
> > > > > > controlling the freq we end up needing to use gpu counters to measure
> > > > > > the freq.
> > > > >
> > > > > For the freq it is slightly easier: /sys/class/devfreq/*, devices are
> > > > > registered under proper parent (IOW, GPU). So one can read
> > > > > /sys/class/devfreq/3d00000.gpu/cur_freq or
> > > > > /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
> > > > >
> > > > > However because of the components usage, there is no link from
> > > > > /sys/class/drm/card0
> > > > > (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
> > > > > to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
> > > > >
> > > > > Getting all these items together in a platform-independent way would
> > > > > be definitely an important but complex topic.
> > > >
> > > > But I don't believe any of the pci gpu's use devfreq ;-)
> > > >
> > > > And also, you can't expect the CPU to actually know the freq when fw
> > > > is the one controlling freq. We can, currently, have a reasonable
> > > > approximation from devfreq but that stops if IFPC is implemented. And
> > > > other GPUs have even less direct control. So freq is a thing that I
> > > > don't think we should try to get from "common frameworks"
> > >
> > > I think it might be useful to add another passive devfreq governor type for
> > > external frequencies. This way we can use the same interface to export
> > > non-CPU-controlled frequencies.
> >
> > Yeah this sounds like a decent idea to me too. It might also solve the fun
> > of various pci devices having very non-standard freq controls in sysfs
> > (looking at least at i915 here ...)
>
> I also like the idea of having some common infrastructure for the GPU freq.
>
> hwmon have a good infrastructure, but they are more focused on individual
> monitoring devices and not very welcomed to embedded monitoring and control.
> I still want to check the opportunity to see if at least some freq control
> could be aligned there.
>
> Another thing that complicates that is that there are multiple frequency
> domains and controls with multipliers in Intel GPU that are not very
> standard or easy to integrate.
>
> On a quick glace this devfreq seems neat because it aligns with the cpufreq
> and governors. But again it would be hard to align with the multiple domains
> and controls. But it deserves a look.
>
> I will take a look to both fronts for Xe: hwmon and devfreq. Right now on
> Xe we have a lot less controls than i915, but I can imagine soon there
> will be requirements to make that to grow and I fear that we end up just
> like i915. So I will take a look before that happens.
So it looks like i915 (dgpu only) and nouveau already use hwmon.. so
maybe this is a good way to expose temp. Maybe we can wire up some
sort of helper for drivers which use thermal_cooling_device (which can
be composed of multiple sensors) to give back an aggregate temp for
hwmon to report?
Freq could possibly be added to hwmon (ie. seems like a reasonable
attribute to add). Devfreq might also be an option but on arm it
isn't necessarily associated with the drm device, whereas we could
associate the hwmon with the drm device to make it easier for
userspace to find.
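For the helper idea above, roughly something like this (just a sketch: the
tz[]/nr_tz fields on msm_gpu are made up for illustration, and "take the
max" is only one possible way to aggregate):

#include <linux/device.h>
#include <linux/hwmon.h>
#include <linux/limits.h>
#include <linux/thermal.h>

static int gpu_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
			  u32 attr, int channel, long *val)
{
	struct msm_gpu *gpu = dev_get_drvdata(dev);
	int i, temp, best = INT_MIN;

	/* illustrative only: msm_gpu::tz / nr_tz do not exist upstream */
	for (i = 0; i < gpu->nr_tz; i++) {
		if (thermal_zone_get_temp(gpu->tz[i], &temp))
			continue;
		if (temp > best)
			best = temp;
	}
	if (best == INT_MIN)
		return -ENODATA;

	*val = best;	/* already millidegrees C, matching hwmon */
	return 0;
}

static umode_t gpu_hwmon_is_visible(const void *data,
				    enum hwmon_sensor_types type,
				    u32 attr, int channel)
{
	return 0444;
}

static const struct hwmon_ops gpu_hwmon_ops = {
	.is_visible = gpu_hwmon_is_visible,
	.read = gpu_hwmon_read,
};

static const struct hwmon_channel_info *gpu_hwmon_info[] = {
	HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT),
	NULL
};

static const struct hwmon_chip_info gpu_hwmon_chip = {
	.ops = &gpu_hwmon_ops,
	.info = gpu_hwmon_info,
};

/* in the drm driver's probe/bind, parented to the drm device so userspace
 * can find it from the fd: */
	devm_hwmon_device_register_with_info(drm->dev, "msm_gpu", gpu,
					     &gpu_hwmon_chip, NULL);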
BR,
-R
> >
> > I guess it would minimally be a good idea if we could document this, or
> > maybe have a reference implementation in nvtop or whatever the cool thing
> > is rn.
> > -Daniel
> >
> > >
> > > >
> > > > BR,
> > > > -R
> > > >
> > > > > >
> > > > > > > What might be needed is better glue to go from the fd or fdinfo to the
> > > > > > > right hw device and then crawl around the hwmon in sysfs automatically. I
> > > > > > > would not be surprised at all if we really suck on this, probably more
> > > > > > > likely on SoC than pci gpus where at least everything should be under the
> > > > > > > main pci sysfs device.
> > > > > >
> > > > > > yeah, I *think* userspace would have to look at /proc/device-tree to
> > > > > > find the cooling device(s) associated with the gpu.. at least I don't
> > > > > > see a straightforward way to figure it out just for sysfs
> > > > > >
> > > > > > BR,
> > > > > > -R
> > > > > >
> > > > > > > -Daniel
> > > > > > >
> > > > > > > >
> > > > > > > > BR,
> > > > > > > > -R
> > > > > > > >
> > > > > > > >
> > > > > > > > > [1] https://patchwork.freedesktop.org/series/112397/
> > > > > > > > >
> > > > > > > > > Rob Clark (2):
> > > > > > > > > drm: Add fdinfo memory stats
> > > > > > > > > drm/msm: Add memory stats to fdinfo
> > > > > > > > >
> > > > > > > > > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > > > > > > > > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > > > > > > > > drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
> > > > > > > > > drivers/gpu/drm/msm/msm_gpu.c | 2 -
> > > > > > > > > include/drm/drm_file.h | 10 ++++
> > > > > > > > > 5 files changed, 134 insertions(+), 3 deletions(-)
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > 2.39.2
> > > > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > Daniel Vetter
> > > > > > > Software Engineer, Intel Corporation
> > > > > > > http://blog.ffwll.ch
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > With best wishes
> > > > > Dmitry
> > >
> > > --
> > > With best wishes
> > > Dmitry
> > >
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-12 20:09 ` Rob Clark
@ 2023-04-12 20:19 ` Dmitry Baryshkov
2023-04-12 20:34 ` Rob Clark
2023-04-12 20:23 ` Alex Deucher
1 sibling, 1 reply; 20+ messages in thread
From: Dmitry Baryshkov @ 2023-04-12 20:19 UTC (permalink / raw)
To: Rob Clark
Cc: Rodrigo Vivi, dri-devel, Rob Clark, Tvrtko Ursulin,
open list:DOCUMENTATION, linux-arm-msm, Emil Velikov,
Christopher Healy, open list, Sean Paul, Boris Brezillon,
freedreno
On Wed, 12 Apr 2023 at 23:09, Rob Clark <robdclark@gmail.com> wrote:
>
> On Wed, Apr 12, 2023 at 5:47 AM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
> >
> > On Wed, Apr 12, 2023 at 10:11:32AM +0200, Daniel Vetter wrote:
> > > On Wed, Apr 12, 2023 at 01:36:52AM +0300, Dmitry Baryshkov wrote:
> > > > On 11/04/2023 21:28, Rob Clark wrote:
> > > > > On Tue, Apr 11, 2023 at 10:36 AM Dmitry Baryshkov
> > > > > <dmitry.baryshkov@linaro.org> wrote:
> > > > > >
> > > > > > On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
> > > > > > >
> > > > > > > On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > > > > >
> > > > > > > > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > > > > > > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > > >
> > > > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > > > >
> > > > > > > > > > Similar motivation to other similar recent attempt[1]. But with an
> > > > > > > > > > attempt to have some shared code for this. As well as documentation.
> > > > > > > > > >
> > > > > > > > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > > > > > > > some placement stats as well. But this seems like a reasonable start.
> > > > > > > > > >
> > > > > > > > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > > > > > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > > > > > > > >
> > > > > > > > > On a related topic, I'm wondering if it would make sense to report
> > > > > > > > > some more global things (temp, freq, etc) via fdinfo? Some of this,
> > > > > > > > > tools like nvtop could get by trawling sysfs or other driver specific
> > > > > > > > > ways. But maybe it makes sense to have these sort of things reported
> > > > > > > > > in a standardized way (even though they aren't really per-drm_file)
> > > > > > > >
> > > > > > > > I think that's a bit much layering violation, we'd essentially have to
> > > > > > > > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > > > > > > > be in :-)
> > > > > > >
> > > > > > > I guess this is true for temp (where there are thermal zones with
> > > > > > > potentially multiple temp sensors.. but I'm still digging my way thru
> > > > > > > the thermal_cooling_device stuff)
> > > > > >
> > > > > > It is slightly ugly. All thermal zones and cooling devices are virtual
> > > > > > devices (so, even no connection to the particular tsens device). One
> > > > > > can either enumerate them by checking
> > > > > > /sys/class/thermal/thermal_zoneN/type or enumerate them through
> > > > > > /sys/class/hwmon. For cooling devices again the only enumeration is
> > > > > > through /sys/class/thermal/cooling_deviceN/type.
> > > > > >
> > > > > > Probably it should be possible to push cooling devices and thermal
> > > > > > zones under corresponding providers. However I do not know if there is
> > > > > > a good way to correlate cooling device (ideally a part of GPU) to the
> > > > > > thermal_zone (which in our case is provided by tsens / temp_alarm
> > > > > > rather than GPU itself).
> > > > > >
> > > > > > >
> > > > > > > But what about freq? I think, esp for cases where some "fw thing" is
> > > > > > > controlling the freq we end up needing to use gpu counters to measure
> > > > > > > the freq.
> > > > > >
> > > > > > For the freq it is slightly easier: /sys/class/devfreq/*, devices are
> > > > > > registered under proper parent (IOW, GPU). So one can read
> > > > > > /sys/class/devfreq/3d00000.gpu/cur_freq or
> > > > > > /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
> > > > > >
> > > > > > However because of the components usage, there is no link from
> > > > > > /sys/class/drm/card0
> > > > > > (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
> > > > > > to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
> > > > > >
> > > > > > Getting all these items together in a platform-independent way would
> > > > > > be definitely an important but complex topic.
> > > > >
> > > > > But I don't believe any of the pci gpu's use devfreq ;-)
> > > > >
> > > > > And also, you can't expect the CPU to actually know the freq when fw
> > > > > is the one controlling freq. We can, currently, have a reasonable
> > > > > approximation from devfreq but that stops if IFPC is implemented. And
> > > > > other GPUs have even less direct control. So freq is a thing that I
> > > > > don't think we should try to get from "common frameworks"
> > > >
> > > > I think it might be useful to add another passive devfreq governor type for
> > > > external frequencies. This way we can use the same interface to export
> > > > non-CPU-controlled frequencies.
> > >
> > > Yeah this sounds like a decent idea to me too. It might also solve the fun
> > > of various pci devices having very non-standard freq controls in sysfs
> > > (looking at least at i915 here ...)
> >
> > I also like the idea of having some common infrastructure for the GPU freq.
> >
> > hwmon have a good infrastructure, but they are more focused on individual
> > monitoring devices and not very welcomed to embedded monitoring and control.
> > I still want to check the opportunity to see if at least some freq control
> > could be aligned there.
> >
> > Another thing that complicates that is that there are multiple frequency
> > domains and controls with multipliers in Intel GPU that are not very
> > standard or easy to integrate.
> >
> > On a quick glace this devfreq seems neat because it aligns with the cpufreq
> > and governors. But again it would be hard to align with the multiple domains
> > and controls. But it deserves a look.
> >
> > I will take a look to both fronts for Xe: hwmon and devfreq. Right now on
> > Xe we have a lot less controls than i915, but I can imagine soon there
> > will be requirements to make that to grow and I fear that we end up just
> > like i915. So I will take a look before that happens.
>
> So it looks like i915 (dgpu only) and nouveau already use hwmon.. so
> maybe this is a good way to expose temp. Maybe we can wire up some
> sort of helper for drivers which use thermal_cooling_device (which can
> be composed of multiple sensors) to give back an aggregate temp for
> hwmon to report?
The thermal zone device already registers a hwmon, see below. The
question is how to link that hwmon to the drm device. Strictly speaking,
I don't think we can re-export it in a clean way.
# grep gpu /sys/class/hwmon/hwmon*/name
/sys/class/hwmon/hwmon15/name:gpu_top_thermal
/sys/class/hwmon/hwmon24/name:gpu_bottom_thermal
# ls /sys/class/hwmon/hwmon15/ -l
lrwxrwxrwx 1 root root 0 Jan 26 08:14 device -> ../../thermal_zone15
-r--r--r-- 1 root root 4096 Jan 26 08:14 name
drwxr-xr-x 2 root root 0 Jan 26 08:15 power
lrwxrwxrwx 1 root root 0 Jan 26 08:12 subsystem -> ../../../../../class/hwmon
-r--r--r-- 1 root root 4096 Jan 26 08:14 temp1_input
-rw-r--r-- 1 root root 4096 Jan 26 08:12 uevent
> Freq could possibly be added to hwmon (ie. seems like a reasonable
> attribute to add). Devfreq might also be an option but on arm it
> isn't necessarily associated with the drm device, whereas we could
> associate the hwmon with the drm device to make it easier for
> userspace to find.
Possibly we can register a virtual 'passive' devfreq being driven by
another active devfreq device.
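i.e. roughly (only a sketch, assuming the drm device has an OPP table so
devfreq can build a freq_table, and gpu_devfreq is the existing active
instance):

#include <linux/devfreq.h>

static int card_devfreq_target(struct device *dev, unsigned long *freq,
			       u32 flags)
{
	return 0;	/* nothing to program, this node just mirrors the GPU */
}

static struct devfreq_dev_profile card_devfreq_profile = {
	.polling_ms = 0,
	.target = card_devfreq_target,
};

static struct devfreq_passive_data card_passive_data;

/* in the drm device's bind path: */
	card_passive_data.parent = gpu_devfreq;	/* active devfreq of the GPU */
	devm_devfreq_add_device(drm->dev, &card_devfreq_profile,
				DEVFREQ_GOV_PASSIVE, &card_passive_data);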
>
> BR,
> -R
>
> > >
> > > I guess it would minimally be a good idea if we could document this, or
> > > maybe have a reference implementation in nvtop or whatever the cool thing
> > > is rn.
> > > -Daniel
> > >
> > > >
> > > > >
> > > > > BR,
> > > > > -R
> > > > >
> > > > > > >
> > > > > > > > What might be needed is better glue to go from the fd or fdinfo to the
> > > > > > > > right hw device and then crawl around the hwmon in sysfs automatically. I
> > > > > > > > would not be surprised at all if we really suck on this, probably more
> > > > > > > > likely on SoC than pci gpus where at least everything should be under the
> > > > > > > > main pci sysfs device.
> > > > > > >
> > > > > > > yeah, I *think* userspace would have to look at /proc/device-tree to
> > > > > > > find the cooling device(s) associated with the gpu.. at least I don't
> > > > > > > see a straightforward way to figure it out just for sysfs
> > > > > > >
> > > > > > > BR,
> > > > > > > -R
> > > > > > >
> > > > > > > > -Daniel
> > > > > > > >
> > > > > > > > >
> > > > > > > > > BR,
> > > > > > > > > -R
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > [1] https://patchwork.freedesktop.org/series/112397/
> > > > > > > > > >
> > > > > > > > > > Rob Clark (2):
> > > > > > > > > > drm: Add fdinfo memory stats
> > > > > > > > > > drm/msm: Add memory stats to fdinfo
> > > > > > > > > >
> > > > > > > > > > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > > > > > > > > > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > > > > > > > > > drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
> > > > > > > > > > drivers/gpu/drm/msm/msm_gpu.c | 2 -
> > > > > > > > > > include/drm/drm_file.h | 10 ++++
> > > > > > > > > > 5 files changed, 134 insertions(+), 3 deletions(-)
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > 2.39.2
> > > > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > Daniel Vetter
> > > > > > > > Software Engineer, Intel Corporation
> > > > > > > > http://blog.ffwll.ch
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > With best wishes
> > > > > > Dmitry
> > > >
> > > > --
> > > > With best wishes
> > > > Dmitry
> > > >
> > >
> > > --
> > > Daniel Vetter
> > > Software Engineer, Intel Corporation
> > > http://blog.ffwll.ch
--
With best wishes
Dmitry
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-12 20:09 ` Rob Clark
2023-04-12 20:19 ` Dmitry Baryshkov
@ 2023-04-12 20:23 ` Alex Deucher
1 sibling, 0 replies; 20+ messages in thread
From: Alex Deucher @ 2023-04-12 20:23 UTC (permalink / raw)
To: Rob Clark
Cc: Rodrigo Vivi, Rob Clark, Tvrtko Ursulin, open list:DOCUMENTATION,
linux-arm-msm, Emil Velikov, Christopher Healy, dri-devel,
open list, Boris Brezillon, Dmitry Baryshkov, freedreno,
Sean Paul
On Wed, Apr 12, 2023 at 4:10 PM Rob Clark <robdclark@gmail.com> wrote:
>
> On Wed, Apr 12, 2023 at 5:47 AM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
> >
> > On Wed, Apr 12, 2023 at 10:11:32AM +0200, Daniel Vetter wrote:
> > > On Wed, Apr 12, 2023 at 01:36:52AM +0300, Dmitry Baryshkov wrote:
> > > > On 11/04/2023 21:28, Rob Clark wrote:
> > > > > On Tue, Apr 11, 2023 at 10:36 AM Dmitry Baryshkov
> > > > > <dmitry.baryshkov@linaro.org> wrote:
> > > > > >
> > > > > > On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
> > > > > > >
> > > > > > > On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > > > > >
> > > > > > > > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > > > > > > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > > >
> > > > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > > > >
> > > > > > > > > > Similar motivation to other similar recent attempt[1]. But with an
> > > > > > > > > > attempt to have some shared code for this. As well as documentation.
> > > > > > > > > >
> > > > > > > > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > > > > > > > some placement stats as well. But this seems like a reasonable start.
> > > > > > > > > >
> > > > > > > > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > > > > > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > > > > > > > >
> > > > > > > > > On a related topic, I'm wondering if it would make sense to report
> > > > > > > > > some more global things (temp, freq, etc) via fdinfo? Some of this,
> > > > > > > > > tools like nvtop could get by trawling sysfs or other driver specific
> > > > > > > > > ways. But maybe it makes sense to have these sort of things reported
> > > > > > > > > in a standardized way (even though they aren't really per-drm_file)
> > > > > > > >
> > > > > > > > I think that's a bit much layering violation, we'd essentially have to
> > > > > > > > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > > > > > > > be in :-)
> > > > > > >
> > > > > > > I guess this is true for temp (where there are thermal zones with
> > > > > > > potentially multiple temp sensors.. but I'm still digging my way thru
> > > > > > > the thermal_cooling_device stuff)
> > > > > >
> > > > > > It is slightly ugly. All thermal zones and cooling devices are virtual
> > > > > > devices (so, even no connection to the particular tsens device). One
> > > > > > can either enumerate them by checking
> > > > > > /sys/class/thermal/thermal_zoneN/type or enumerate them through
> > > > > > /sys/class/hwmon. For cooling devices again the only enumeration is
> > > > > > through /sys/class/thermal/cooling_deviceN/type.
> > > > > >
> > > > > > Probably it should be possible to push cooling devices and thermal
> > > > > > zones under corresponding providers. However I do not know if there is
> > > > > > a good way to correlate cooling device (ideally a part of GPU) to the
> > > > > > thermal_zone (which in our case is provided by tsens / temp_alarm
> > > > > > rather than GPU itself).
> > > > > >
> > > > > > >
> > > > > > > But what about freq? I think, esp for cases where some "fw thing" is
> > > > > > > controlling the freq we end up needing to use gpu counters to measure
> > > > > > > the freq.
> > > > > >
> > > > > > For the freq it is slightly easier: /sys/class/devfreq/*, devices are
> > > > > > registered under proper parent (IOW, GPU). So one can read
> > > > > > /sys/class/devfreq/3d00000.gpu/cur_freq or
> > > > > > /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
> > > > > >
> > > > > > However because of the components usage, there is no link from
> > > > > > /sys/class/drm/card0
> > > > > > (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
> > > > > > to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
> > > > > >
> > > > > > Getting all these items together in a platform-independent way would
> > > > > > be definitely an important but complex topic.
> > > > >
> > > > > But I don't believe any of the pci gpu's use devfreq ;-)
> > > > >
> > > > > And also, you can't expect the CPU to actually know the freq when fw
> > > > > is the one controlling freq. We can, currently, have a reasonable
> > > > > approximation from devfreq but that stops if IFPC is implemented. And
> > > > > other GPUs have even less direct control. So freq is a thing that I
> > > > > don't think we should try to get from "common frameworks"
> > > >
> > > > I think it might be useful to add another passive devfreq governor type for
> > > > external frequencies. This way we can use the same interface to export
> > > > non-CPU-controlled frequencies.
> > >
> > > Yeah this sounds like a decent idea to me too. It might also solve the fun
> > > of various pci devices having very non-standard freq controls in sysfs
> > > (looking at least at i915 here ...)
> >
> > I also like the idea of having some common infrastructure for the GPU freq.
> >
> > hwmon have a good infrastructure, but they are more focused on individual
> > monitoring devices and not very welcomed to embedded monitoring and control.
> > I still want to check the opportunity to see if at least some freq control
> > could be aligned there.
> >
> > Another thing that complicates that is that there are multiple frequency
> > domains and controls with multipliers in Intel GPU that are not very
> > standard or easy to integrate.
> >
> > On a quick glace this devfreq seems neat because it aligns with the cpufreq
> > and governors. But again it would be hard to align with the multiple domains
> > and controls. But it deserves a look.
> >
> > I will take a look to both fronts for Xe: hwmon and devfreq. Right now on
> > Xe we have a lot less controls than i915, but I can imagine soon there
> > will be requirements to make that to grow and I fear that we end up just
> > like i915. So I will take a look before that happens.
>
> So it looks like i915 (dgpu only) and nouveau already use hwmon.. so
> maybe this is a good way to expose temp. Maybe we can wire up some
> sort of helper for drivers which use thermal_cooling_device (which can
> be composed of multiple sensors) to give back an aggregate temp for
> hwmon to report?
amdgpu uses hwmon as well for temp, voltage, power, etc. One of the
problems with hwmon is that it's designed around individual sensors.
However, on the GPU at least, most customers, at least in the
datacenter, want an atomic view of all of the attributes. It would be
nice if there were some way to get a snapshot of all of the
attributes at one time.
>
> Freq could possibly be added to hwmon (ie. seems like a reasonable
> attribute to add). Devfreq might also be an option but on arm it
> isn't necessarily associated with the drm device, whereas we could
> associate the hwmon with the drm device to make it easier for
> userspace to find.
Freq attributes seem natural for hwmon, at least for reporting. I'm
not familiar with devfreq; I wonder if it's flexible enough to deal
with devices that might have full or partial firmware control of the
frequencies. Moreover, each clock domain is not necessarily
independent. You might have multiple clock domains with different
voltage, thermal, and TDP dependencies. Power limits are controlled
via hwmon and you may need to adjust them in order to make certain
clock changes. Then add in overclocking support on top and it gets
more complex.
Alex
>
> BR,
> -R
>
> > >
> > > I guess it would minimally be a good idea if we could document this, or
> > > maybe have a reference implementation in nvtop or whatever the cool thing
> > > is rn.
> > > -Daniel
> > >
> > > >
> > > > >
> > > > > BR,
> > > > > -R
> > > > >
> > > > > > >
> > > > > > > > What might be needed is better glue to go from the fd or fdinfo to the
> > > > > > > > right hw device and then crawl around the hwmon in sysfs automatically. I
> > > > > > > > would not be surprised at all if we really suck on this, probably more
> > > > > > > > likely on SoC than pci gpus where at least everything should be under the
> > > > > > > > main pci sysfs device.
> > > > > > >
> > > > > > > yeah, I *think* userspace would have to look at /proc/device-tree to
> > > > > > > find the cooling device(s) associated with the gpu.. at least I don't
> > > > > > > see a straightforward way to figure it out just for sysfs
> > > > > > >
> > > > > > > BR,
> > > > > > > -R
> > > > > > >
> > > > > > > > -Daniel
> > > > > > > >
> > > > > > > > >
> > > > > > > > > BR,
> > > > > > > > > -R
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > [1] https://patchwork.freedesktop.org/series/112397/
> > > > > > > > > >
> > > > > > > > > > Rob Clark (2):
> > > > > > > > > > drm: Add fdinfo memory stats
> > > > > > > > > > drm/msm: Add memory stats to fdinfo
> > > > > > > > > >
> > > > > > > > > > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > > > > > > > > > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > > > > > > > > > drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
> > > > > > > > > > drivers/gpu/drm/msm/msm_gpu.c | 2 -
> > > > > > > > > > include/drm/drm_file.h | 10 ++++
> > > > > > > > > > 5 files changed, 134 insertions(+), 3 deletions(-)
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > 2.39.2
> > > > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > Daniel Vetter
> > > > > > > > Software Engineer, Intel Corporation
> > > > > > > > http://blog.ffwll.ch
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > With best wishes
> > > > > > Dmitry
> > > >
> > > > --
> > > > With best wishes
> > > > Dmitry
> > > >
> > >
> > > --
> > > Daniel Vetter
> > > Software Engineer, Intel Corporation
> > > http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-12 20:19 ` Dmitry Baryshkov
@ 2023-04-12 20:34 ` Rob Clark
2023-04-13 0:27 ` Dmitry Baryshkov
0 siblings, 1 reply; 20+ messages in thread
From: Rob Clark @ 2023-04-12 20:34 UTC (permalink / raw)
To: Dmitry Baryshkov
Cc: Rodrigo Vivi, dri-devel, Rob Clark, Tvrtko Ursulin,
open list:DOCUMENTATION, linux-arm-msm, Emil Velikov,
Christopher Healy, open list, Sean Paul, Boris Brezillon,
freedreno
On Wed, Apr 12, 2023 at 1:19 PM Dmitry Baryshkov
<dmitry.baryshkov@linaro.org> wrote:
>
> On Wed, 12 Apr 2023 at 23:09, Rob Clark <robdclark@gmail.com> wrote:
> >
> > On Wed, Apr 12, 2023 at 5:47 AM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
> > >
> > > On Wed, Apr 12, 2023 at 10:11:32AM +0200, Daniel Vetter wrote:
> > > > On Wed, Apr 12, 2023 at 01:36:52AM +0300, Dmitry Baryshkov wrote:
> > > > > On 11/04/2023 21:28, Rob Clark wrote:
> > > > > > On Tue, Apr 11, 2023 at 10:36 AM Dmitry Baryshkov
> > > > > > <dmitry.baryshkov@linaro.org> wrote:
> > > > > > >
> > > > > > > On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > >
> > > > > > > > On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > > > > > >
> > > > > > > > > On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
> > > > > > > > > > On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
> > > > > > > > > > >
> > > > > > > > > > > From: Rob Clark <robdclark@chromium.org>
> > > > > > > > > > >
> > > > > > > > > > > Similar motivation to other similar recent attempt[1]. But with an
> > > > > > > > > > > attempt to have some shared code for this. As well as documentation.
> > > > > > > > > > >
> > > > > > > > > > > It is probably a bit UMA-centric, I guess devices with VRAM might want
> > > > > > > > > > > some placement stats as well. But this seems like a reasonable start.
> > > > > > > > > > >
> > > > > > > > > > > Basic gputop support: https://patchwork.freedesktop.org/series/116236/
> > > > > > > > > > > And already nvtop support: https://github.com/Syllo/nvtop/pull/204
> > > > > > > > > >
> > > > > > > > > > On a related topic, I'm wondering if it would make sense to report
> > > > > > > > > > some more global things (temp, freq, etc) via fdinfo? Some of this,
> > > > > > > > > > tools like nvtop could get by trawling sysfs or other driver specific
> > > > > > > > > > ways. But maybe it makes sense to have these sort of things reported
> > > > > > > > > > in a standardized way (even though they aren't really per-drm_file)
> > > > > > > > >
> > > > > > > > > I think that's a bit much layering violation, we'd essentially have to
> > > > > > > > > reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
> > > > > > > > > be in :-)
> > > > > > > >
> > > > > > > > I guess this is true for temp (where there are thermal zones with
> > > > > > > > potentially multiple temp sensors.. but I'm still digging my way thru
> > > > > > > > the thermal_cooling_device stuff)
> > > > > > >
> > > > > > > It is slightly ugly. All thermal zones and cooling devices are virtual
> > > > > > > devices (so, even no connection to the particular tsens device). One
> > > > > > > can either enumerate them by checking
> > > > > > > /sys/class/thermal/thermal_zoneN/type or enumerate them through
> > > > > > > /sys/class/hwmon. For cooling devices again the only enumeration is
> > > > > > > through /sys/class/thermal/cooling_deviceN/type.
> > > > > > >
> > > > > > > Probably it should be possible to push cooling devices and thermal
> > > > > > > zones under corresponding providers. However I do not know if there is
> > > > > > > a good way to correlate cooling device (ideally a part of GPU) to the
> > > > > > > thermal_zone (which in our case is provided by tsens / temp_alarm
> > > > > > > rather than GPU itself).
> > > > > > >
> > > > > > > >
> > > > > > > > But what about freq? I think, esp for cases where some "fw thing" is
> > > > > > > > controlling the freq we end up needing to use gpu counters to measure
> > > > > > > > the freq.
> > > > > > >
> > > > > > > For the freq it is slightly easier: /sys/class/devfreq/*, devices are
> > > > > > > registered under proper parent (IOW, GPU). So one can read
> > > > > > > /sys/class/devfreq/3d00000.gpu/cur_freq or
> > > > > > > /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
> > > > > > >
> > > > > > > However because of the components usage, there is no link from
> > > > > > > /sys/class/drm/card0
> > > > > > > (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
> > > > > > > to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
> > > > > > >
> > > > > > > Getting all these items together in a platform-independent way would
> > > > > > > be definitely an important but complex topic.
> > > > > >
> > > > > > But I don't believe any of the pci gpu's use devfreq ;-)
> > > > > >
> > > > > > And also, you can't expect the CPU to actually know the freq when fw
> > > > > > is the one controlling freq. We can, currently, have a reasonable
> > > > > > approximation from devfreq but that stops if IFPC is implemented. And
> > > > > > other GPUs have even less direct control. So freq is a thing that I
> > > > > > don't think we should try to get from "common frameworks"
> > > > >
> > > > > I think it might be useful to add another passive devfreq governor type for
> > > > > external frequencies. This way we can use the same interface to export
> > > > > non-CPU-controlled frequencies.
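As a rough sketch of the plumbing that exists today: the names gpu_df_* and
parent_df below are illustrative, and this assumes the device already has an
OPP table registered (devfreq_add_device() wants a freq table from somewhere).
A governor for fw-controlled clocks would presumably reuse the same
registration path with a different governor name:

#include <linux/devfreq.h>
#include <linux/err.h>

/* Nothing to program here: the parent devfreq (or firmware) owns the clock. */
static int gpu_df_target(struct device *dev, unsigned long *freq, u32 flags)
{
        return 0;
}

static struct devfreq_dev_profile gpu_df_profile = {
        .target = gpu_df_target,
};

static struct devfreq_passive_data gpu_df_passive;

/* Register a devfreq that just mirrors parent_df's frequency decisions. */
static int gpu_passive_devfreq_register(struct device *dev,
                                        struct devfreq *parent_df)
{
        struct devfreq *df;

        gpu_df_passive.parent = parent_df;
        df = devm_devfreq_add_device(dev, &gpu_df_profile,
                                     DEVFREQ_GOV_PASSIVE, &gpu_df_passive);
        return PTR_ERR_OR_ZERO(df);
}

Userspace would then find cur_freq in the usual place under the device's
devfreq/ directory.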
> > > >
> > > > Yeah this sounds like a decent idea to me too. It might also solve the fun
> > > > of various pci devices having very non-standard freq controls in sysfs
> > > > (looking at least at i915 here ...)
> > >
> > > I also like the idea of having some common infrastructure for the GPU freq.
> > >
> > > hwmon has a good infrastructure, but it is more focused on individual
> > > monitoring devices and not very welcoming to embedded monitoring and control.
> > > I still want to check the opportunity to see if at least some freq control
> > > could be aligned there.
> > >
> > > Another thing that complicates that is that there are multiple frequency
> > > domains and controls with multipliers in Intel GPU that are not very
> > > standard or easy to integrate.
> > >
> > > On a quick glance this devfreq seems neat because it aligns with the cpufreq
> > > and governors. But again it would be hard to align with the multiple domains
> > > and controls. But it deserves a look.
> > >
> > > I will take a look at both fronts for Xe: hwmon and devfreq. Right now on
> > > Xe we have far fewer controls than i915, but I can imagine there will soon
> > > be requirements to make that grow, and I fear that we end up just
> > > like i915. So I will take a look before that happens.
> >
> > So it looks like i915 (dgpu only) and nouveau already use hwmon.. so
> > maybe this is a good way to expose temp. Maybe we can wire up some
> > sort of helper for drivers which use thermal_cooling_device (which can
> > be composed of multiple sensors) to give back an aggregate temp for
> > hwmon to report?
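Such a helper could be fairly small. A sketch, assuming the driver has already
collected pointers to its thermal zones somewhere (struct gpu_hwmon below is
an illustrative container, not existing code), reporting the max as
temp1_input in millidegrees:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/hwmon.h>
#include <linux/limits.h>
#include <linux/minmax.h>
#include <linux/thermal.h>

struct gpu_hwmon {
        struct thermal_zone_device **zones;
        int num_zones;
};

static umode_t gpu_hwmon_is_visible(const void *data,
                                    enum hwmon_sensor_types type,
                                    u32 attr, int channel)
{
        return 0444;
}

static int gpu_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
                          u32 attr, int channel, long *val)
{
        struct gpu_hwmon *gh = dev_get_drvdata(dev);
        int i, temp, max = INT_MIN, ret;

        for (i = 0; i < gh->num_zones; i++) {
                ret = thermal_zone_get_temp(gh->zones[i], &temp);
                if (ret)
                        return ret;
                max = max_t(int, max, temp);
        }
        *val = max;
        return 0;
}

static const struct hwmon_channel_info *gpu_hwmon_info[] = {
        HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT),
        NULL
};

static const struct hwmon_ops gpu_hwmon_ops = {
        .is_visible = gpu_hwmon_is_visible,
        .read = gpu_hwmon_read,
};

static const struct hwmon_chip_info gpu_hwmon_chip = {
        .ops = &gpu_hwmon_ops,
        .info = gpu_hwmon_info,
};

/* Register against the GPU/DRM device so tools can find it from card0. */
static int gpu_hwmon_register(struct device *dev, struct gpu_hwmon *gh)
{
        struct device *hwmon;

        hwmon = devm_hwmon_device_register_with_info(dev, "gpu", gh,
                                                      &gpu_hwmon_chip, NULL);
        return PTR_ERR_OR_ZERO(hwmon);
}

The interesting part is still how the driver finds those zones in the first
place, which is the correlation problem described above.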
>
> The thermal_device already registers the hwmon, see below. The
> question is about linking that hwmon to the drm. Strictly speaking, I
> don't think that we can reexport it in a clean way.
>
> # grep gpu /sys/class/hwmon/hwmon*/name
> /sys/class/hwmon/hwmon15/name:gpu_top_thermal
> /sys/class/hwmon/hwmon24/name:gpu_bottom_thermal
I can't get excited about userspace relying on naming conventions or
other heuristics like this. Also, userspace's view of the world is
very much that there is a "gpu card", not a collection of parts.
(Windows seems to have the same view of the world.) So we have the
component framework to assemble the various parts together into the
"device" that userspace expects to deal with. We need to do something
similar for exposing temp and freq.
> # ls /sys/class/hwmon/hwmon15/ -l
> lrwxrwxrwx 1 root root 0 Jan 26 08:14 device ->
> ../../thermal_zone15
> -r--r--r-- 1 root root 4096 Jan 26 08:14 name
> drwxr-xr-x 2 root root 0 Jan 26 08:15 power
> lrwxrwxrwx 1 root root 0 Jan 26 08:12 subsystem ->
> ../../../../../class/hwmon
> -r--r--r-- 1 root root 4096 Jan 26 08:14 temp1_input
> -rw-r--r-- 1 root root 4096 Jan 26 08:12 uevent
>
> > Freq could possibly be added to hwmon (ie. seems like a reasonable
> > attribute to add). Devfreq might also be an option but on arm it
> > isn't necessarily associated with the drm device, whereas we could
> > associate the hwmon with the drm device to make it easier for
> > userspace to find.
>
> Possibly we can register a virtual 'passive' devfreq being driven by
> another active devfreq device.
That's all fine and good, but it has the same problem that the existing
hwmons associated with the cooling device have..
BR,
-R
> >
> > BR,
> > -R
> >
> > > >
> > > > I guess it would minimally be a good idea if we could document this, or
> > > > maybe have a reference implementation in nvtop or whatever the cool thing
> > > > is rn.
> > > > -Daniel
> > > >
> > > > >
> > > > > >
> > > > > > BR,
> > > > > > -R
> > > > > >
> > > > > > > >
> > > > > > > > > What might be needed is better glue to go from the fd or fdinfo to the
> > > > > > > > > right hw device and then crawl around the hwmon in sysfs automatically. I
> > > > > > > > > would not be surprised at all if we really suck on this, probably more
> > > > > > > > > likely on SoC than pci gpus where at least everything should be under the
> > > > > > > > > main pci sysfs device.
> > > > > > > >
> > > > > > > > yeah, I *think* userspace would have to look at /proc/device-tree to
> > > > > > > > find the cooling device(s) associated with the gpu.. at least I don't
> > > > > > > > see a straightforward way to figure it out just for sysfs
> > > > > > > >
> > > > > > > > BR,
> > > > > > > > -R
> > > > > > > >
> > > > > > > > > -Daniel
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > BR,
> > > > > > > > > > -R
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > [1] https://patchwork.freedesktop.org/series/112397/
> > > > > > > > > > >
> > > > > > > > > > > Rob Clark (2):
> > > > > > > > > > > drm: Add fdinfo memory stats
> > > > > > > > > > > drm/msm: Add memory stats to fdinfo
> > > > > > > > > > >
> > > > > > > > > > > Documentation/gpu/drm-usage-stats.rst | 21 +++++++
> > > > > > > > > > > drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
> > > > > > > > > > > drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
> > > > > > > > > > > drivers/gpu/drm/msm/msm_gpu.c | 2 -
> > > > > > > > > > > include/drm/drm_file.h | 10 ++++
> > > > > > > > > > > 5 files changed, 134 insertions(+), 3 deletions(-)
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > > 2.39.2
> > > > > > > > > > >
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > Daniel Vetter
> > > > > > > > > Software Engineer, Intel Corporation
> > > > > > > > > http://blog.ffwll.ch
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > With best wishes
> > > > > > > Dmitry
> > > > >
> > > > > --
> > > > > With best wishes
> > > > > Dmitry
> > > > >
> > > >
> > > > --
> > > > Daniel Vetter
> > > > Software Engineer, Intel Corporation
> > > > http://blog.ffwll.ch
>
>
>
> --
> With best wishes
> Dmitry
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Freedreno] [PATCH v2 0/2] drm: fdinfo memory stats
2023-04-12 20:34 ` Rob Clark
@ 2023-04-13 0:27 ` Dmitry Baryshkov
0 siblings, 0 replies; 20+ messages in thread
From: Dmitry Baryshkov @ 2023-04-13 0:27 UTC (permalink / raw)
To: Rob Clark
Cc: Rodrigo Vivi, dri-devel, Rob Clark, Tvrtko Ursulin,
open list:DOCUMENTATION, linux-arm-msm, Emil Velikov,
Christopher Healy, open list, Sean Paul, Boris Brezillon,
freedreno
On 12/04/2023 23:34, Rob Clark wrote:
> On Wed, Apr 12, 2023 at 1:19 PM Dmitry Baryshkov
> <dmitry.baryshkov@linaro.org> wrote:
>>
>> On Wed, 12 Apr 2023 at 23:09, Rob Clark <robdclark@gmail.com> wrote:
>>>
>>> On Wed, Apr 12, 2023 at 5:47 AM Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:
>>>>
>>>> On Wed, Apr 12, 2023 at 10:11:32AM +0200, Daniel Vetter wrote:
>>>>> On Wed, Apr 12, 2023 at 01:36:52AM +0300, Dmitry Baryshkov wrote:
>>>>>> On 11/04/2023 21:28, Rob Clark wrote:
>>>>>>> On Tue, Apr 11, 2023 at 10:36 AM Dmitry Baryshkov
>>>>>>> <dmitry.baryshkov@linaro.org> wrote:
>>>>>>>>
>>>>>>>> On Tue, 11 Apr 2023 at 20:13, Rob Clark <robdclark@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>> On Tue, Apr 11, 2023 at 9:53 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>>>>>>>>>>
>>>>>>>>>> On Tue, Apr 11, 2023 at 09:47:32AM -0700, Rob Clark wrote:
>>>>>>>>>>> On Mon, Apr 10, 2023 at 2:06 PM Rob Clark <robdclark@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> From: Rob Clark <robdclark@chromium.org>
>>>>>>>>>>>>
>>>>>>>>>>>> Similar motivation to other similar recent attempt[1]. But with an
>>>>>>>>>>>> attempt to have some shared code for this. As well as documentation.
>>>>>>>>>>>>
>>>>>>>>>>>> It is probably a bit UMA-centric, I guess devices with VRAM might want
>>>>>>>>>>>> some placement stats as well. But this seems like a reasonable start.
>>>>>>>>>>>>
>>>>>>>>>>>> Basic gputop support: https://patchwork.freedesktop.org/series/116236/
>>>>>>>>>>>> And already nvtop support: https://github.com/Syllo/nvtop/pull/204
>>>>>>>>>>>
>>>>>>>>>>> On a related topic, I'm wondering if it would make sense to report
>>>>>>>>>>> some more global things (temp, freq, etc) via fdinfo? Some of this,
>>>>>>>>>>> tools like nvtop could get by trawling sysfs or other driver specific
>>>>>>>>>>> ways. But maybe it makes sense to have these sort of things reported
>>>>>>>>>>> in a standardized way (even though they aren't really per-drm_file)
>>>>>>>>>>
>>>>>>>>>> I think that's a bit much layering violation, we'd essentially have to
>>>>>>>>>> reinvent the hwmon sysfs uapi in fdinfo. Not really a business I want to
>>>>>>>>>> be in :-)
>>>>>>>>>
>>>>>>>>> I guess this is true for temp (where there are thermal zones with
>>>>>>>>> potentially multiple temp sensors.. but I'm still digging my way thru
>>>>>>>>> the thermal_cooling_device stuff)
>>>>>>>>
>>>>>>>> It is slightly ugly. All thermal zones and cooling devices are virtual
>>>>>>>> devices (so, even no connection to the particular tsens device). One
>>>>>>>> can either enumerate them by checking
>>>>>>>> /sys/class/thermal/thermal_zoneN/type or enumerate them through
>>>>>>>> /sys/class/hwmon. For cooling devices again the only enumeration is
>>>>>>>> through /sys/class/thermal/cooling_deviceN/type.
>>>>>>>>
>>>>>>>> Probably it should be possible to push cooling devices and thermal
>>>>>>>> zones under corresponding providers. However I do not know if there is
>>>>>>>> a good way to correlate cooling device (ideally a part of GPU) to the
>>>>>>>> thermal_zone (which in our case is provided by tsens / temp_alarm
>>>>>>>> rather than GPU itself).
>>>>>>>>
>>>>>>>>>
>>>>>>>>> But what about freq? I think, esp for cases where some "fw thing" is
>>>>>>>>> controlling the freq we end up needing to use gpu counters to measure
>>>>>>>>> the freq.
>>>>>>>>
>>>>>>>> For the freq it is slightly easier: /sys/class/devfreq/*, devices are
>>>>>>>> registered under proper parent (IOW, GPU). So one can read
>>>>>>>> /sys/class/devfreq/3d00000.gpu/cur_freq or
>>>>>>>> /sys/bus/platform/devices/3d00000.gpu/devfreq/3d00000.gpu/cur_freq.
>>>>>>>>
>>>>>>>> However because of the components usage, there is no link from
>>>>>>>> /sys/class/drm/card0
>>>>>>>> (/sys/devices/platform/soc@0/ae00000.display-subsystem/ae01000.display-controller/drm/card0)
>>>>>>>> to /sys/devices/platform/soc@0/3d00000.gpu, the GPU unit.
>>>>>>>>
>>>>>>>> Getting all these items together in a platform-independent way would
>>>>>>>> be definitely an important but complex topic.
>>>>>>>
>>>>>>> But I don't believe any of the pci gpu's use devfreq ;-)
>>>>>>>
>>>>>>> And also, you can't expect the CPU to actually know the freq when fw
>>>>>>> is the one controlling freq. We can, currently, have a reasonable
>>>>>>> approximation from devfreq but that stops if IFPC is implemented. And
>>>>>>> other GPUs have even less direct control. So freq is a thing that I
>>>>>>> don't think we should try to get from "common frameworks"
>>>>>>
>>>>>> I think it might be useful to add another passive devfreq governor type for
>>>>>> external frequencies. This way we can use the same interface to export
>>>>>> non-CPU-controlled frequencies.
>>>>>
>>>>> Yeah this sounds like a decent idea to me too. It might also solve the fun
>>>>> of various pci devices having very non-standard freq controls in sysfs
>>>>> (looking at least at i915 here ...)
>>>>
>>>> I also like the idea of having some common infrastructure for the GPU freq.
>>>>
>>>> hwmon has a good infrastructure, but it is more focused on individual
>>>> monitoring devices and not very welcoming to embedded monitoring and control.
>>>> I still want to check the opportunity to see if at least some freq control
>>>> could be aligned there.
>>>>
>>>> Another thing that complicates that is that there are multiple frequency
>>>> domains and controls with multipliers in Intel GPU that are not very
>>>> standard or easy to integrate.
>>>>
>>>> On a quick glance this devfreq seems neat because it aligns with the cpufreq
>>>> and governors. But again it would be hard to align with the multiple domains
>>>> and controls. But it deserves a look.
>>>>
>>>> I will take a look at both fronts for Xe: hwmon and devfreq. Right now on
>>>> Xe we have far fewer controls than i915, but I can imagine there will soon
>>>> be requirements to make that grow, and I fear that we end up just
>>>> like i915. So I will take a look before that happens.
>>>
>>> So it looks like i915 (dgpu only) and nouveau already use hwmon.. so
>>> maybe this is a good way to expose temp. Maybe we can wire up some
>>> sort of helper for drivers which use thermal_cooling_device (which can
>>> be composed of multiple sensors) to give back an aggregate temp for
>>> hwmon to report?
>>
>> The thermal_device already registers the hwmon, see below. The
>> question is about linking that hwmon to the drm. Strictly speaking, I
>> don't think that we can reexport it in a clean way.
>>
>> # grep gpu /sys/class/hwmon/hwmon*/name
>> /sys/class/hwmon/hwmon15/name:gpu_top_thermal
>> /sys/class/hwmon/hwmon24/name:gpu_bottom_thermal
>
> I can't get excited about userspace relying on naming conventions or
> other heuristics like this.
As you can guess, me neither. We are not in the 2.4 world anymore.
> Also, userspace's view of the world is
> very much that there is a "gpu card", not a collection of parts.
> (Windows seems to have the same view of the world.) So we have the
> component framework to assemble the various parts together into the
> "device" that userspace expects to deal with. We need to do something
> similar for exposing temp and freq.
I think we are looking for something close to device links. We need to
create a userspace-visible link from one device to another across the
device hierarchy. The current device_link API is tied to suspend/resume,
but the overall idea seems close enough (in my opinion).
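For reference, the existing API is just device_link_add(); a stateless link
avoids the suspend/resume coupling and, I believe, already produces
consumer:/supplier: entries in sysfs, it is just not something tools know to
look for today. The function name below is illustrative:

#include <linux/device.h>

/*
 * Sketch: link the DRM card device (consumer) to the GPU platform device
 * (supplier) purely so the relationship is discoverable; DL_FLAG_STATELESS
 * keeps the driver core from adding PM or probe-ordering behaviour.
 */
static int link_card_to_gpu(struct device *card_dev, struct device *gpu_dev)
{
        struct device_link *link;

        link = device_link_add(card_dev, gpu_dev, DL_FLAG_STATELESS);
        return link ? 0 : -EINVAL;
}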
>
>> # ls /sys/class/hwmon/hwmon15/ -l
>> lrwxrwxrwx 1 root root 0 Jan 26 08:14 device ->
>> ../../thermal_zone15
>> -r--r--r-- 1 root root 4096 Jan 26 08:14 name
>> drwxr-xr-x 2 root root 0 Jan 26 08:15 power
>> lrwxrwxrwx 1 root root 0 Jan 26 08:12 subsystem ->
>> ../../../../../class/hwmon
>> -r--r--r-- 1 root root 4096 Jan 26 08:14 temp1_input
>> -rw-r--r-- 1 root root 4096 Jan 26 08:12 uevent
>>
>>> Freq could possibly be added to hwmon (ie. seems like a reasonable
>>> attribute to add). Devfreq might also be an option but on arm it
>>> isn't necessarily associated with the drm device, whereas we could
>>> associate the hwmon with the drm device to make it easier for
>>> userspace to find.
>>
>> Possibly we can register a virtual 'passive' devfreq being driven by
>> another active devfreq device.
>
> That's all fine and good, but it has the same problem that the existing
> hwmons associated with the cooling device have..
>
> BR,
> -R
>
>>>
>>> BR,
>>> -R
>>>
>>>>>
>>>>> I guess it would minimally be a good idea if we could document this, or
>>>>> maybe have a reference implementation in nvtop or whatever the cool thing
>>>>> is rn.
>>>>> -Daniel
>>>>>
>>>>>>
>>>>>>>
>>>>>>> BR,
>>>>>>> -R
>>>>>>>
>>>>>>>>>
>>>>>>>>>> What might be needed is better glue to go from the fd or fdinfo to the
>>>>>>>>>> right hw device and then crawl around the hwmon in sysfs automatically. I
>>>>>>>>>> would not be surprised at all if we really suck on this, probably more
>>>>>>>>>> likely on SoC than pci gpus where at least everything should be under the
>>>>>>>>>> main pci sysfs device.
>>>>>>>>>
>>>>>>>>> yeah, I *think* userspace would have to look at /proc/device-tree to
>>>>>>>>> find the cooling device(s) associated with the gpu.. at least I don't
>>>>>>>>> see a straightforward way to figure it out just for sysfs
>>>>>>>>>
>>>>>>>>> BR,
>>>>>>>>> -R
>>>>>>>>>
>>>>>>>>>> -Daniel
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> BR,
>>>>>>>>>>> -R
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> [1] https://patchwork.freedesktop.org/series/112397/
>>>>>>>>>>>>
>>>>>>>>>>>> Rob Clark (2):
>>>>>>>>>>>> drm: Add fdinfo memory stats
>>>>>>>>>>>> drm/msm: Add memory stats to fdinfo
>>>>>>>>>>>>
>>>>>>>>>>>> Documentation/gpu/drm-usage-stats.rst | 21 +++++++
>>>>>>>>>>>> drivers/gpu/drm/drm_file.c | 79 +++++++++++++++++++++++++++
>>>>>>>>>>>> drivers/gpu/drm/msm/msm_drv.c | 25 ++++++++-
>>>>>>>>>>>> drivers/gpu/drm/msm/msm_gpu.c | 2 -
>>>>>>>>>>>> include/drm/drm_file.h | 10 ++++
>>>>>>>>>>>> 5 files changed, 134 insertions(+), 3 deletions(-)
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> 2.39.2
>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Daniel Vetter
>>>>>>>>>> Software Engineer, Intel Corporation
>>>>>>>>>> http://blog.ffwll.ch
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> With best wishes
>>>>>>>> Dmitry
>>>>>>
>>>>>> --
>>>>>> With best wishes
>>>>>> Dmitry
>>>>>>
>>>>>
>>>>> --
>>>>> Daniel Vetter
>>>>> Software Engineer, Intel Corporation
>>>>> http://blog.ffwll.ch
>>
>>
>>
>> --
>> With best wishes
>> Dmitry
--
With best wishes
Dmitry
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread, other threads:[~2023-04-13 0:27 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-10 21:06 [PATCH v2 0/2] drm: fdinfo memory stats Rob Clark
2023-04-10 21:06 ` [PATCH v2 1/2] drm: Add " Rob Clark
2023-04-11 10:43 ` Daniel Vetter
2023-04-11 15:02 ` Rob Clark
2023-04-11 15:10 ` Daniel Vetter
2023-04-11 16:47 ` [PATCH v2 0/2] drm: " Rob Clark
2023-04-11 16:53 ` Daniel Vetter
2023-04-11 17:13 ` Rob Clark
2023-04-11 17:35 ` [Freedreno] " Dmitry Baryshkov
2023-04-11 18:26 ` Daniel Vetter
2023-04-11 22:27 ` Dmitry Baryshkov
2023-04-11 18:28 ` Rob Clark
2023-04-11 22:36 ` Dmitry Baryshkov
2023-04-12 8:11 ` Daniel Vetter
2023-04-12 12:47 ` Rodrigo Vivi
2023-04-12 20:09 ` Rob Clark
2023-04-12 20:19 ` Dmitry Baryshkov
2023-04-12 20:34 ` Rob Clark
2023-04-13 0:27 ` Dmitry Baryshkov
2023-04-12 20:23 ` Alex Deucher