From: "Christian König" <christian.koenig@amd.com>
To: Nirmoy Das <nirmoy.das@intel.com>, dri-devel@lists.freedesktop.org
Cc: intel-xe@lists.freedesktop.org,
"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Matthew Auld" <matthew.auld@intel.com>
Subject: Re: [PATCH] drm/ttm/pool: Revert to clear-on-alloc to honor TTM_TT_FLAG_ZERO_ALLOC
Date: Fri, 21 Jun 2024 16:54:59 +0200 [thread overview]
Message-ID: <98890fd6-43b8-4a08-84e8-1f635abb933d@amd.com> (raw)
In-Reply-To: <20240620160147.29819-1-nirmoy.das@intel.com>
On 20.06.24 at 18:01, Nirmoy Das wrote:
> Currently the TTM pool is not honoring the TTM_TT_FLAG_ZERO_ALLOC flag and
> is clearing pages on free instead. That helps with allocation latency, but the
> clearing happens even if the DRM driver doesn't pass the flag. If clear-on-free
> is needed, a new flag can be added for that purpose.
Mhm, thinking more about it, that will most likely get pushback from
others as well.
How about the attached patch? We just skip clearing pages when the
driver sets the ZERO_ALLOC flag again before freeing them.
Maybe rename the flag or add a new one for that, but in general this
looks like the option with the least impact.
Regards,
Christian.
>
> Cc: Christian Koenig <christian.koenig@amd.com>
> Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
> ---
> drivers/gpu/drm/ttm/ttm_pool.c | 31 +++++++++++++++++--------------
> 1 file changed, 17 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
> index 6e1fd6985ffc..cbbd722185ee 100644
> --- a/drivers/gpu/drm/ttm/ttm_pool.c
> +++ b/drivers/gpu/drm/ttm/ttm_pool.c
> @@ -224,15 +224,6 @@ static void ttm_pool_unmap(struct ttm_pool *pool, dma_addr_t dma_addr,
> /* Give pages into a specific pool_type */
> static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
> {
> - unsigned int i, num_pages = 1 << pt->order;
> -
> - for (i = 0; i < num_pages; ++i) {
> - if (PageHighMem(p))
> - clear_highpage(p + i);
> - else
> - clear_page(page_address(p + i));
> - }
> -
> spin_lock(&pt->lock);
> list_add(&p->lru, &pt->pages);
> spin_unlock(&pt->lock);
> @@ -240,15 +231,26 @@ static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
> }
>
> /* Take pages from a specific pool_type, return NULL when nothing available */
> -static struct page *ttm_pool_type_take(struct ttm_pool_type *pt)
> +static struct page *ttm_pool_type_take(struct ttm_pool_type *pt, bool clear)
> {
> struct page *p;
>
> spin_lock(&pt->lock);
> p = list_first_entry_or_null(&pt->pages, typeof(*p), lru);
> if (p) {
> + unsigned int i, num_pages = 1 << pt->order;
> +
> atomic_long_sub(1 << pt->order, &allocated_pages);
> list_del(&p->lru);
> + if (clear) {
> + for (i = 0; i < num_pages; ++i) {
> + if (PageHighMem(p))
> + clear_highpage(p + i);
> + else
> + clear_page(page_address(p + i));
> + }
> + }
> +
> }
> spin_unlock(&pt->lock);
>
> @@ -279,7 +281,7 @@ static void ttm_pool_type_fini(struct ttm_pool_type *pt)
> list_del(&pt->shrinker_list);
> spin_unlock(&shrinker_lock);
>
> - while ((p = ttm_pool_type_take(pt)))
> + while ((p = ttm_pool_type_take(pt, false)))
> ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
> }
>
> @@ -330,7 +332,7 @@ static unsigned int ttm_pool_shrink(void)
> list_move_tail(&pt->shrinker_list, &shrinker_list);
> spin_unlock(&shrinker_lock);
>
> - p = ttm_pool_type_take(pt);
> + p = ttm_pool_type_take(pt, false);
> if (p) {
> ttm_pool_free_page(pt->pool, pt->caching, pt->order, p);
> num_pages = 1 << pt->order;
> @@ -457,10 +459,11 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
> num_pages;
> order = min_t(unsigned int, order, __fls(num_pages))) {
> struct ttm_pool_type *pt;
> + bool clear = tt->page_flags & TTM_TT_FLAG_ZERO_ALLOC;
>
> page_caching = tt->caching;
> pt = ttm_pool_select_type(pool, tt->caching, order);
> - p = pt ? ttm_pool_type_take(pt) : NULL;
> + p = pt ? ttm_pool_type_take(pt, clear) : NULL;
> if (p) {
> r = ttm_pool_apply_caching(caching, pages,
> tt->caching);
> @@ -480,7 +483,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
> if (num_pages < (1 << order))
> break;
>
> - p = ttm_pool_type_take(pt);
> + p = ttm_pool_type_take(pt, clear);
> } while (p);
> }
>
[-- Attachment #2: 0001-drm-ttm-skip-page-clear-if-ZERO_ALLOC-flag-is-set-on.patch --]
From 466b7b315af74bae635b9245a1d9e6619a3da171 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Christian=20K=C3=B6nig?= <christian.koenig@amd.com>
Date: Fri, 21 Jun 2024 16:50:59 +0200
Subject: [PATCH] drm/ttm: skip page clear if ZERO_ALLOC flag is set on free
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
This allows the driver to clear the pages using a DMA engine instead of
the CPU.
Signed-off-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/ttm/ttm_pool.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index 6e1fd6985ffc..6add5006c575 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -222,15 +222,18 @@ static void ttm_pool_unmap(struct ttm_pool *pool, dma_addr_t dma_addr,
}
/* Give pages into a specific pool_type */
-static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p)
+static void ttm_pool_type_give(struct ttm_pool_type *pt, struct page *p,
+ bool cleared)
{
unsigned int i, num_pages = 1 << pt->order;
- for (i = 0; i < num_pages; ++i) {
- if (PageHighMem(p))
- clear_highpage(p + i);
- else
- clear_page(page_address(p + i));
+ if (!cleared) {
+ for (i = 0; i < num_pages; ++i) {
+ if (PageHighMem(p))
+ clear_highpage(p + i);
+ else
+ clear_page(page_address(p + i));
+ }
}
spin_lock(&pt->lock);
@@ -394,6 +397,7 @@ static void ttm_pool_free_range(struct ttm_pool *pool, struct ttm_tt *tt,
pgoff_t start_page, pgoff_t end_page)
{
struct page **pages = &tt->pages[start_page];
+ bool cleared = tt->page_flags & TTM_TT_FLAG_ZERO_ALLOC;
unsigned int order;
pgoff_t i, nr;
@@ -407,7 +411,7 @@ static void ttm_pool_free_range(struct ttm_pool *pool, struct ttm_tt *tt,
pt = ttm_pool_select_type(pool, caching, order);
if (pt)
- ttm_pool_type_give(pt, *pages);
+ ttm_pool_type_give(pt, *pages, cleared);
else
ttm_pool_free_page(pool, caching, order, *pages);
}
@@ -517,6 +521,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
if (r)
goto error_free_all;
+ tt->page_flags &= ~TTM_TT_FLAG_ZERO_ALLOC;
return 0;
error_free_page:
--
2.34.1