From: John Stultz <john.stultz@linaro.org>
To: lkml <linux-kernel@vger.kernel.org>
Cc: "John Stultz" <john.stultz@linaro.org>,
"Daniel Vetter" <daniel@ffwll.ch>,
"Christian Koenig" <christian.koenig@amd.com>,
"Sumit Semwal" <sumit.semwal@linaro.org>,
"Liam Mark" <lmark@codeaurora.org>,
"Chris Goldsworthy" <cgoldswo@codeaurora.org>,
"Laura Abbott" <labbott@kernel.org>,
"Brian Starkey" <Brian.Starkey@arm.com>,
"Hridya Valsaraju" <hridya@google.com>,
"Suren Baghdasaryan" <surenb@google.com>,
"Sandeep Patil" <sspatil@google.com>,
"Daniel Mentz" <danielmentz@google.com>,
"Ørjan Eide" <orjan.eide@arm.com>,
"Robin Murphy" <robin.murphy@arm.com>,
"Ezequiel Garcia" <ezequiel@collabora.com>,
"Simon Ser" <contact@emersion.fr>,
"James Jones" <jajones@nvidia.com>,
linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Subject: [PATCH v9 3/5] dma-buf: system_heap: Add drm pagepool support to system heap
Date: Wed, 30 Jun 2021 01:34:19 +0000
Message-ID: <20210630013421.735092-4-john.stultz@linaro.org>
In-Reply-To: <20210630013421.735092-1-john.stultz@linaro.org>
Utilize the drm pagepool code to improve allocation performance.

This is similar to the ION pagepool usage, but uses shared generic
code instead of a custom implementation.
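For reviewers, a condensed sketch of the page lifecycle this patch
sets up follows. It is an illustration only: it assumes the
drm_page_pool interface added in patch 1/5 of this series, and the
pool_fetch()/pool_recycle() helpers are illustrative consolidations
of the inline hunks below, not functions added by the patch.

  #include <linux/gfp.h>
  #include <drm/page_pool.h>	/* added in patch 1/5 */

  /* One recycling pool per supported order (orders[] = {8, 4, 0}). */
  static struct drm_page_pool pools[NUM_ORDERS];

  /* Pool callback: genuinely free a page when the pool is shrunk. */
  static void system_heap_free_pages(struct drm_page_pool *pool,
				     struct page *p)
  {
	__free_pages(p, pool->order);
  }

  /* Allocation fast path: try the pool first, and fall back to the
   * buddy allocator only on a pool miss. */
  static struct page *pool_fetch(int i)
  {
	struct page *page = drm_page_pool_remove(&pools[i]);

	return page ? page : alloc_pages(order_flags[i], orders[i]);
  }

  /* Release path: instead of __free_pages(), return each page to the
   * pool matching its order so it can be reused. The loop always
   * terminates via break, since buffers are only built from orders[]. */
  static void pool_recycle(struct page *page)
  {
	int j;

	for (j = 0; j < NUM_ORDERS; j++)
		if (compound_order(page) == orders[j])
			break;
	drm_page_pool_add(&pools[j], page);
  }

At heap creation, drm_page_pool_init(&pools[i], orders[i],
system_heap_free_pages) wires the free callback to each pool, as in
the final hunk.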
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Chris Goldsworthy <cgoldswo@codeaurora.org>
Cc: Laura Abbott <labbott@kernel.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Sandeep Patil <sspatil@google.com>
Cc: Daniel Mentz <danielmentz@google.com>
Cc: Ørjan Eide <orjan.eide@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Ezequiel Garcia <ezequiel@collabora.com>
Cc: Simon Ser <contact@emersion.fr>
Cc: James Jones <jajones@nvidia.com>
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Fix build issue caused by selecting PAGE_POOL w/o NET
  (Reported-by: kernel test robot <lkp@intel.com>)
v3:
* Simplify the page zeroing logic a bit by using kmap_atomic
instead of vmap as suggested by Daniel Mentz
v5:
* Shift away from networking page pool completely to
dmabuf page pool implementation
v6:
* Switch again to using the drm_page_pool code shared w/
ttm_pool
v7:
* Slight rework for drm_page_pool changes
v8:
* Rework to use the rewritten drm_page_pool logic
* Drop explicit buffer zeroing, as the drm_page_pool handles that
v9:
* Fix compiler warning (Reported-by: kernel test robot <lkp@intel.com>)
---
drivers/dma-buf/heaps/Kconfig | 1 +
drivers/dma-buf/heaps/system_heap.c | 26 +++++++++++++++++++++++---
2 files changed, 24 insertions(+), 3 deletions(-)
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c4226..f19bf1f82bc2 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -1,6 +1,7 @@
config DMABUF_HEAPS_SYSTEM
bool "DMA-BUF System Heap"
depends on DMABUF_HEAPS
+ select DRM_PAGE_POOL
help
Choose this option to enable the system dmabuf heap. The system heap
is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index f57a39ddd063..85ceca2ed61d 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -21,6 +21,8 @@
#include <linux/slab.h>
#include <linux/vmalloc.h>
+#include <drm/page_pool.h>
+
static struct dma_heap *sys_heap;
struct system_heap_buffer {
@@ -54,6 +56,7 @@ static gfp_t order_flags[] = {HIGH_ORDER_GFP, MID_ORDER_GFP, LOW_ORDER_GFP};
*/
static const unsigned int orders[] = {8, 4, 0};
#define NUM_ORDERS ARRAY_SIZE(orders)
+static struct drm_page_pool pools[NUM_ORDERS];
static struct sg_table *dup_sg_table(struct sg_table *table)
{
@@ -282,18 +285,27 @@ static void system_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
dma_buf_map_clear(map);
}
+static void system_heap_free_pages(struct drm_page_pool *pool, struct page *p)
+{
+ __free_pages(p, pool->order);
+}
+
static void system_heap_dma_buf_release(struct dma_buf *dmabuf)
{
struct system_heap_buffer *buffer = dmabuf->priv;
struct sg_table *table;
struct scatterlist *sg;
- int i;
+ int i, j;
table = &buffer->sg_table;
for_each_sg(table->sgl, sg, table->nents, i) {
struct page *page = sg_page(sg);
- __free_pages(page, compound_order(page));
+ for (j = 0; j < NUM_ORDERS; j++) {
+ if (compound_order(page) == orders[j])
+ break;
+ }
+ drm_page_pool_add(&pools[j], page);
}
sg_free_table(table);
kfree(buffer);
@@ -324,7 +336,9 @@ static struct page *alloc_largest_available(unsigned long size,
if (max_order < orders[i])
continue;
- page = alloc_pages(order_flags[i], orders[i]);
+ page = drm_page_pool_remove(&pools[i]);
+ if (!page)
+ page = alloc_pages(order_flags[i], orders[i]);
if (!page)
continue;
return page;
@@ -425,6 +439,12 @@ static const struct dma_heap_ops system_heap_ops = {
static int system_heap_create(void)
{
struct dma_heap_export_info exp_info;
+ int i;
+
+ for (i = 0; i < NUM_ORDERS; i++) {
+ drm_page_pool_init(&pools[i], orders[i],
+ system_heap_free_pages);
+ }
exp_info.name = "system";
exp_info.ops = &system_heap_ops;
--
2.25.1