From: Nirmoy Das <nirmoy.das@intel.com>
To: dri-devel@lists.freedesktop.org
Cc: intel-xe@lists.freedesktop.org,
"Nirmoy Das" <nirmoy.das@intel.com>,
"Himal Prasad Ghimiray" <himal.prasad.ghimiray@intel.com>,
"Matthew Auld" <matthew.auld@intel.com>,
"Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Subject: [PATCH v4 4/4] drm/xe/lnl: Offload system clear page activity to GPU
Date: Mon, 1 Jul 2024 17:17:38 +0200 [thread overview]
Message-ID: <20240701151738.6749-4-nirmoy.das@intel.com> (raw)
In-Reply-To: <20240701151738.6749-1-nirmoy.das@intel.com>
On LNL, because of flat CCS, the driver creates a migrate job to clear
CCS metadata. Extend that to also clear system pages using the GPU.
Inform TTM to allocate pages without __GFP_ZERO to avoid double page
clearing: clear the TTM_TT_FLAG_ZERO_ALLOC flag, and set
TTM_TT_FLAG_CLEARED_ON_FREE when freeing to skip the TTM pool's
clear-on-free, as Xe now takes care of clearing pages. If a bo
is in system placement and there is a CPU map, the GPU clear is
avoided for that BO, as it has no dma mapping.
To test the patch, I created a small test that submits a job after
binding buffers of various sizes; it shows good gains for larger
buffers. For smaller buffer sizes the results vary too much to be
reliable.
Without the patch:
sudo ~/igt-gpu-tools/build/tests/xe_exec_store --run
basic-store-benchmark
IGT-Version: 1.28-g2ed908c0b (x86_64) (Linux: 6.10.0-rc2-xe+ x86_64)
Using IGT_SRANDOM=1719237905 for randomisation
Opened device: /dev/dri/card0
Starting subtest: basic-store-benchmark
Starting dynamic subtest: WC
Dynamic subtest WC: SUCCESS (0.000s)
Time taken for size SZ_4K: 9493 us
Time taken for size SZ_2M: 5503 us
Time taken for size SZ_64M: 13016 us
Time taken for size SZ_128M: 29464 us
Time taken for size SZ_256M: 38408 us
Time taken for size SZ_1G: 148758 us
Starting dynamic subtest: WB
Dynamic subtest WB: SUCCESS (0.000s)
Time taken for size SZ_4K: 3889 us
Time taken for size SZ_2M: 6091 us
Time taken for size SZ_64M: 20920 us
Time taken for size SZ_128M: 32394 us
Time taken for size SZ_256M: 61710 us
Time taken for size SZ_1G: 215437 us
Subtest basic-store-benchmark: SUCCESS (0.589s)
With the patch:
sudo ~/igt-gpu-tools/build/tests/xe_exec_store --run
basic-store-benchmark
IGT-Version: 1.28-g2ed908c0b (x86_64) (Linux: 6.10.0-rc2-xe+ x86_64)
Using IGT_SRANDOM=1719238062 for randomisation
Opened device: /dev/dri/card0
Starting subtest: basic-store-benchmark
Starting dynamic subtest: WC
Dynamic subtest WC: SUCCESS (0.000s)
Time taken for size SZ_4K: 11803 us
Time taken for size SZ_2M: 4237 us
Time taken for size SZ_64M: 8649 us
Time taken for size SZ_128M: 14682 us
Time taken for size SZ_256M: 22156 us
Time taken for size SZ_1G: 74457 us
Starting dynamic subtest: WB
Dynamic subtest WB: SUCCESS (0.000s)
Time taken for size SZ_4K: 5129 us
Time taken for size SZ_2M: 12563 us
Time taken for size SZ_64M: 14860 us
Time taken for size SZ_128M: 26064 us
Time taken for size SZ_256M: 47167 us
Time taken for size SZ_1G: 170304 us
Subtest basic-store-benchmark: SUCCESS (0.417s)
With the patch and init_on_alloc=0:
sudo ~/igt-gpu-tools/build/tests/xe_exec_store --run
basic-store-benchmark
IGT-Version: 1.28-g2ed908c0b (x86_64) (Linux: 6.10.0-rc2-xe+ x86_64)
Using IGT_SRANDOM=1719238219 for randomisation
Opened device: /dev/dri/card0
Starting subtest: basic-store-benchmark
Starting dynamic subtest: WC
Dynamic subtest WC: SUCCESS (0.000s)
Time taken for size SZ_4K: 4803 us
Time taken for size SZ_2M: 9212 us
Time taken for size SZ_64M: 9643 us
Time taken for size SZ_128M: 13479 us
Time taken for size SZ_256M: 22429 us
Time taken for size SZ_1G: 83110 us
Starting dynamic subtest: WB
Dynamic subtest WB: SUCCESS (0.000s)
Time taken for size SZ_4K: 4003 us
Time taken for size SZ_2M: 4443 us
Time taken for size SZ_64M: 12960 us
Time taken for size SZ_128M: 13741 us
Time taken for size SZ_256M: 26841 us
Time taken for size SZ_1G: 84746 us
Subtest basic-store-benchmark: SUCCESS (0.290s)
v2: Handle regression on dgfx (Himal)
    Update commit message as no ttm API changes needed.
v3: Fix KUnit test.
v4: Handle data leak on cpu mmap (Thomas)
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 25 ++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_device.c | 7 +++++++
drivers/gpu/drm/xe/xe_device_types.h | 2 ++
3 files changed, 33 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 4d6315d2ae9a..b76a44fcf3b1 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -387,6 +387,13 @@ static struct ttm_tt *xe_ttm_tt_create(struct ttm_buffer_object *ttm_bo,
caching = ttm_uncached;
}
+ /* If the device supports GPU clearing of pages then set the proper
+  * ttm flag. Zeroed pages are only required for ttm_bo_type_device so
+  * that no unwanted data is leaked to userspace.
+  */
+ if (ttm_bo->type == ttm_bo_type_device && xe->mem.gpu_page_clear)
+ page_flags |= TTM_TT_FLAG_CLEARED_ON_FREE;
+
err = ttm_tt_init(&tt->ttm, &bo->ttm, page_flags, caching, extra_pages);
if (err) {
kfree(tt);
@@ -408,6 +415,10 @@ static int xe_ttm_tt_populate(struct ttm_device *ttm_dev, struct ttm_tt *tt,
if (tt->page_flags & TTM_TT_FLAG_EXTERNAL)
return 0;
+ /* Clear TTM_TT_FLAG_ZERO_ALLOC when the GPU is set to clear pages */
+ if (tt->page_flags & TTM_TT_FLAG_CLEARED_ON_FREE)
+ tt->page_flags &= ~TTM_TT_FLAG_ZERO_ALLOC;
+
err = ttm_pool_alloc(&ttm_dev->pool, tt, ctx);
if (err)
return err;
@@ -653,6 +664,14 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
int ret = 0;
+ /*
+  * Clear TTM_TT_FLAG_CLEARED_ON_FREE on the bo creation path when
+  * moving to system, as the bo doesn't have a dma mapping yet.
+  */
+ if (!old_mem && ttm && !ttm_tt_is_populated(ttm)) {
+ ttm->page_flags &= ~TTM_TT_FLAG_CLEARED_ON_FREE;
+ }
+
/* Bo creation path, moving to system or TT. */
if ((!old_mem && ttm) && !handle_system_ccs) {
if (new_mem->mem_type == XE_PL_TT)
@@ -676,7 +695,8 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
(!mem_type_is_vram(old_mem_type) && !tt_has_data);
needs_clear = (ttm && ttm->page_flags & TTM_TT_FLAG_ZERO_ALLOC) ||
- (!ttm && ttm_bo->type == ttm_bo_type_device);
+ (!ttm && ttm_bo->type == ttm_bo_type_device) ||
+ (ttm && ttm->page_flags & TTM_TT_FLAG_CLEARED_ON_FREE);
if (new_mem->mem_type == XE_PL_TT) {
ret = xe_tt_map_sg(ttm);
@@ -790,6 +810,9 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
handle_system_ccs;
bool clear_bo_data = mem_type_is_vram(new_mem->mem_type);
+ if (ttm && (ttm->page_flags & TTM_TT_FLAG_CLEARED_ON_FREE))
+ clear_bo_data |= true;
+
fence = xe_migrate_clear(migrate, bo, new_mem,
clear_bo_data, clear_ccs);
}
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index cfda7cb5df2c..293579e35c2e 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -636,6 +636,13 @@ int xe_device_probe(struct xe_device *xe)
if (err)
goto err;
+ /*
+  * On iGFX devices with flat CCS we already clear CCS metadata; extend
+  * that and use the GPU to clear pages as well.
+  */
+ if (xe_device_has_flat_ccs(xe) && !IS_DGFX(xe))
+ xe->mem.gpu_page_clear = true;
+
err = xe_vram_probe(xe);
if (err)
goto err;
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index c37be471d11c..ece68c6f3668 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -325,6 +325,8 @@ struct xe_device {
struct xe_mem_region vram;
/** @mem.sys_mgr: system TTM manager */
struct ttm_resource_manager sys_mgr;
+ /** @mem.gpu_page_clear: clearing of pages is offloaded to the GPU */
+ bool gpu_page_clear;
} mem;
/** @sriov: device level virtualization data */
--
2.42.0