From: Matthew Brost <matthew.brost@intel.com>
To: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH] drm/xe/sa: Drop hardcoded 4K guard in sub-allocator
Date: Tue, 17 Dec 2024 14:39:44 -0800
Message-ID: <Z2H9sADDJOvFG6j+@lstrano-desk.jf.intel.com>
In-Reply-To: <20241217222246.863-1-michal.wajdeczko@intel.com>
On Tue, Dec 17, 2024 at 11:22:46PM +0100, Michal Wajdeczko wrote:
> Any required prefetch guards are added during batch buffer
> allocations anyway.
>
This should work but I think we actually want to do the opposite of
this - drop the prefetch pad in BB allocation. This would enable more
optimal usage of each suballocation. I think that would work unless we
have an odd caching issue - if caching turns out to be a problem then
maybe we pad each BB to a cacheline instead.
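
To make it concrete, something like the below (completely untested, and
I'm going from memory on xe_bb.c - I believe the per-BB pad is the
bb_prefetch() term in xe_bb_new(), so take this as a sketch rather than
a tested patch):

 	bb->bo = xe_sa_bo_new(!usm ? gt_to_tile(gt)->mem.kernel_bb_pool :
 			      gt->usm.bb_pool,
-			      4 * (dwords + 1) + bb_prefetch(gt));
+			      /* Prefetch guard now comes from the single
+			       * 4K pad at the end of the SA pool. */
+			      4 * (dwords + 1));

With BBs packed back to back like this, a prefetch past the end of one
BB just reads the next BB's (mapped) memory, so only the last BB in the
pool needs a guard - and the sub-allocator's 4K pad at the end of the
pool covers that.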
I haven't had time to try this out yet, but I think we should explore
the above option first. If I'm missing something and the above does not
work, then I agree with this patch.
Matt
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_sa.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_sa.c b/drivers/gpu/drm/xe/xe_sa.c
> index e055bed7ae55..2f69277b1a50 100644
> --- a/drivers/gpu/drm/xe/xe_sa.c
> +++ b/drivers/gpu/drm/xe/xe_sa.c
> @@ -34,7 +34,6 @@ static void xe_sa_bo_manager_fini(struct drm_device *drm, void *arg)
> struct xe_sa_manager *xe_sa_bo_manager_init(struct xe_tile *tile, u32 size, u32 align)
> {
> struct xe_device *xe = tile_to_xe(tile);
> - u32 managed_size = size - SZ_4K;
> struct xe_bo *bo;
> int ret;
>
> @@ -58,11 +57,11 @@ struct xe_sa_manager *xe_sa_bo_manager_init(struct xe_tile *tile, u32 size, u32
> sa_manager->bo = bo;
> sa_manager->is_iomem = bo->vmap.is_iomem;
>
> - drm_suballoc_manager_init(&sa_manager->base, managed_size, align);
> + drm_suballoc_manager_init(&sa_manager->base, size, align);
> sa_manager->gpu_addr = xe_bo_ggtt_addr(bo);
>
> if (bo->vmap.is_iomem) {
> - sa_manager->cpu_ptr = kvzalloc(managed_size, GFP_KERNEL);
> + sa_manager->cpu_ptr = kvzalloc(size, GFP_KERNEL);
> if (!sa_manager->cpu_ptr) {
> sa_manager->bo = NULL;
> return ERR_PTR(-ENOMEM);
> --
> 2.47.1
>