From: Daniel Vetter <daniel-/w4YWyX8dFk@public.gmane.org>
To: "T.J. Mercier" <tjmercier-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Cc: "David Airlie" <airlied-cv59FeDIM0c@public.gmane.org>,
"Daniel Vetter" <daniel-/w4YWyX8dFk@public.gmane.org>,
"Maarten Lankhorst"
<maarten.lankhorst-VuQAYsv1563Yd54FQh9/CA@public.gmane.org>,
"Maxime Ripard" <mripard-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>,
"Thomas Zimmermann" <tzimmermann-l3A5Bk7waGM@public.gmane.org>,
"Jonathan Corbet" <corbet-T1hC0tSOHrs@public.gmane.org>,
"Greg Kroah-Hartman"
<gregkh-hQyY1W1yCW8ekmWlsbkhG0B+6BGkLq7r@public.gmane.org>,
"Arve Hjønnevåg" <arve-z5hGa2qSFaRBDgjK7y7TUQ@public.gmane.org>,
"Todd Kjos" <tkjos-z5hGa2qSFaRBDgjK7y7TUQ@public.gmane.org>,
"Martijn Coenen" <maco-z5hGa2qSFaRBDgjK7y7TUQ@public.gmane.org>,
"Joel Fernandes"
<joel-QYYGw3jwrUn5owFQY34kdNi2O/JbrIOy@public.gmane.org>,
"Christian Brauner"
<brauner-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>,
"Hridya Valsaraju"
<hridya-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
"Suren Baghdasaryan"
<surenb-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
"Sumit Semwal"
<sumit.semwal-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>,
"Christian König" <christian.koenig-5C7GfCeVMHo@public.gmane.org>,
"Benjamin Gaignard"
<benjamin.gaignard-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>,
"Liam Mark" <lmark-sgV2jX0FEOL9JmXXK+q4OQ@public.gmane.org>
Subject: Re: [RFC v4 4/8] dmabuf: heaps: export system_heap buffers with GPU cgroup charging
Date: Mon, 28 Mar 2022 16:36:47 +0200 [thread overview]
Message-ID: <YkHH/0Use7F30UUE@phenom.ffwll.local> (raw)
In-Reply-To: <20220328035951.1817417-5-tjmercier-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
On Mon, Mar 28, 2022 at 03:59:43AM +0000, T.J. Mercier wrote:
> From: Hridya Valsaraju <hridya-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
>
> All DMA heaps now register a new GPU cgroup device upon creation, and the
> system_heap now exports buffers associated with its GPU cgroup device for
> tracking purposes.
>
> Signed-off-by: Hridya Valsaraju <hridya-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
> Signed-off-by: T.J. Mercier <tjmercier-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
>
> ---
> v3 changes
> Use more common dual author commit message format per John Stultz.
>
> v2 changes
> Move dma-buf cgroup charge transfer from a dma_buf_op defined by every
> heap to a single dma-buf function for all heaps per Daniel Vetter and
> Christian König.
Apologies for being quite a bit out of the loop. I scrolled through all of
this and I think it looks good to get going.
The only thing I have is whether we should move the cgroup controllers out
of dma-buf heaps, since that's rather Android-centric. E.g.
- a system gpucg_device which is used by all the various single page
allocators (dma-buf heap but also shmem helpers and really anything
else)
- same for cma, again both for dma-buf heaps and also for the gem cma
helpers in drm
Otherwise this will only work on non-upstream Android, where GPU drivers
allocate everything from dma-buf heaps. If you use something like the x86
Android project with mesa drivers, then driver-internal buffers will be
allocated through gem and not through dma-buf heaps. Or at least I think
that's how it works.
But also meh, we can fix this fairly easily later on by adding these
standard gpucg_dev somewhere with a bit of kerneldoc.
Anyway, this has my ack, but don't count it as my in-depth review :-)
-Daniel
> ---
> drivers/dma-buf/dma-heap.c | 27 +++++++++++++++++++++++++++
> drivers/dma-buf/heaps/system_heap.c | 3 +++
> include/linux/dma-heap.h | 11 +++++++++++
> 3 files changed, 41 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
> index 8f5848aa144f..885072427775 100644
> --- a/drivers/dma-buf/dma-heap.c
> +++ b/drivers/dma-buf/dma-heap.c
> @@ -7,6 +7,7 @@
> */
>
> #include <linux/cdev.h>
> +#include <linux/cgroup_gpu.h>
> #include <linux/debugfs.h>
> #include <linux/device.h>
> #include <linux/dma-buf.h>
> @@ -31,6 +32,7 @@
> * @heap_devt heap device node
> * @list list head connecting to list of heaps
> * @heap_cdev heap char device
> + * @gpucg_dev gpu cgroup device for memory accounting
> *
> * Represents a heap of memory from which buffers can be made.
> */
> @@ -41,6 +43,9 @@ struct dma_heap {
> dev_t heap_devt;
> struct list_head list;
> struct cdev heap_cdev;
> +#ifdef CONFIG_CGROUP_GPU
> + struct gpucg_device gpucg_dev;
> +#endif
> };
>
> static LIST_HEAD(heap_list);
> @@ -216,6 +221,26 @@ const char *dma_heap_get_name(struct dma_heap *heap)
> return heap->name;
> }
>
> +#ifdef CONFIG_CGROUP_GPU
> +/**
> + * dma_heap_get_gpucg_dev() - get struct gpucg_device for the heap.
> + * @heap: DMA-Heap to get the gpucg_device struct for.
> + *
> + * Returns:
> + * The gpucg_device struct for the heap. NULL if the GPU cgroup controller is
> + * not enabled.
> + */
> +struct gpucg_device *dma_heap_get_gpucg_dev(struct dma_heap *heap)
> +{
> + return &heap->gpucg_dev;
> +}
> +#else /* CONFIG_CGROUP_GPU */
> +struct gpucg_device *dma_heap_get_gpucg_dev(struct dma_heap *heap)
> +{
> + return NULL;
> +}
> +#endif /* CONFIG_CGROUP_GPU */
> +
> struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
> {
> struct dma_heap *heap, *h, *err_ret;
> @@ -288,6 +313,8 @@ struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
> list_add(&heap->list, &heap_list);
> mutex_unlock(&heap_list_lock);
>
> + gpucg_register_device(dma_heap_get_gpucg_dev(heap), exp_info->name);
> +
> return heap;
>
> err2:
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> index ab7fd896d2c4..752a05c3cfe2 100644
> --- a/drivers/dma-buf/heaps/system_heap.c
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -395,6 +395,9 @@ static struct dma_buf *system_heap_allocate(struct dma_heap *heap,
> exp_info.ops = &system_heap_buf_ops;
> exp_info.size = buffer->len;
> exp_info.flags = fd_flags;
> +#ifdef CONFIG_CGROUP_GPU
> + exp_info.gpucg_dev = dma_heap_get_gpucg_dev(heap);
> +#endif
> exp_info.priv = buffer;
> dmabuf = dma_buf_export(&exp_info);
> if (IS_ERR(dmabuf)) {
> diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
> index 0c05561cad6e..e447a61d054e 100644
> --- a/include/linux/dma-heap.h
> +++ b/include/linux/dma-heap.h
> @@ -10,6 +10,7 @@
> #define _DMA_HEAPS_H
>
> #include <linux/cdev.h>
> +#include <linux/cgroup_gpu.h>
> #include <linux/types.h>
>
> struct dma_heap;
> @@ -59,6 +60,16 @@ void *dma_heap_get_drvdata(struct dma_heap *heap);
> */
> const char *dma_heap_get_name(struct dma_heap *heap);
>
> +/**
> + * dma_heap_get_gpucg_dev() - get a pointer to the struct gpucg_device for the
> + * heap.
> + * @heap: DMA-Heap to retrieve gpucg_device for.
> + *
> + * Returns:
> + * The gpucg_device struct for the heap.
> + */
> +struct gpucg_device *dma_heap_get_gpucg_dev(struct dma_heap *heap);
> +
> /**
> * dma_heap_add - adds a heap to dmabuf heaps
> * @exp_info: information needed to register this heap
> --
> 2.35.1.1021.g381101b075-goog
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch