Intel-XE Archive on lore.kernel.org
From: Stuart Summers <stuart.summers@intel.com>
Cc: matthew.brost@intel.com, John.C.Harrison@Intel.com,
	brian.welty@intel.com, rodrigo.vivi@intel.com,
	intel-xe@lists.freedesktop.org,
	Stuart Summers <stuart.summers@intel.com>
Subject: [CI 0/3] Update page fault queue size calculation
Date: Sat, 17 Aug 2024 02:47:29 +0000
Message-ID: <cover.1723862633.git.stuart.summers@intel.com>

Right now the page fault queue size is hard-coded to an estimate
based on legacy platforms. Add a more precise calculation based on
the number of compute resources available that can utilize these
page fault queues.
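
For illustration, here is a rough sketch of the kind of calculation
described above. The identifiers below (pf_queue_num_dw,
PF_MSG_LEN_DW) are hypothetical stand-ins, not the actual code from
patch 2:

    #include <linux/types.h>
    #include <linux/log2.h>

    /* assumed number of dwords per page fault message */
    #define PF_MSG_LEN_DW 4

    /*
     * Size the queue for one outstanding fault per execution unit
     * instead of a fixed estimate carried over from legacy platforms.
     */
    static u32 pf_queue_num_dw(unsigned int num_eus)
    {
            u32 num_dw = (num_eus + 1) * PF_MSG_LEN_DW;

            /* keep the ring size a power of two for cheap wrap-around */
            return roundup_pow_of_two(num_dw);
    }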

v2: Add a drm reset callback for the teardown changes and other
    suggestions from Matt.
v3: Add a pf_wq destroy when the access counter wq allocation
    fails (Rodrigo) and adjust the pf queue size calculation
    (Matt); see the teardown sketch after this list
v4: Bump up the size of the G2H queue as well (Matt)
v5: Make the G2H buffer size 64K (Matt)
v6: Rebase and resend for CI
v7: Rebase (again) and resend for CI
    The prior series showed an unexpected failure in some of the
    display tests. After rebasing this series again recently, I ran
    the main offender manually and it passed, so this series is
    getting another try on CI on the expectation that those prior
    failures were unrelated to my changes. Here's the test I ran
    manually:
    igt@kms_cursor_edge_walk@64x64-top-bottom
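
As referenced in the v3 note above, the first patch boils down to
tearing down the page fault workqueue when a later allocation fails.
A minimal sketch of that shape, with field and workqueue names that
are assumptions patterned on the upstream driver, not copied from
the series:

    #include <linux/workqueue.h>

    static int gt_pagefault_wq_init(struct xe_gt *gt)
    {
            gt->usm.pf_wq = alloc_workqueue("xe_gt_page_fault_work_queue",
                                            WQ_UNBOUND | WQ_HIGHPRI, 0);
            if (!gt->usm.pf_wq)
                    return -ENOMEM;

            gt->usm.acc_wq = alloc_workqueue("xe_gt_access_counter_work_queue",
                                             WQ_UNBOUND | WQ_HIGHPRI, 0);
            if (!gt->usm.acc_wq) {
                    /* the fix: don't leak pf_wq on this error path */
                    destroy_workqueue(gt->usm.pf_wq);
                    return -ENOMEM;
            }

            return 0;
    }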

Stuart Summers (3):
  drm/xe: Fix missing workqueue destroy in xe_gt_pagefault
  drm/xe: Use topology to determine page fault queue size
  drm/xe/guc: Bump the G2H queue size to account for page faults
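
For the third patch, the v5 note above settles on a 64K G2H buffer.
A sketch of what such a bump could look like in xe_guc_ct.c, where
the macro name and the prior size are assumptions on my part:

    #include <linux/sizes.h>

    /* roomier G2H ring so bursts of page fault messages fit */
    #define CTB_G2H_BUFFER_SIZE SZ_64K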

 drivers/gpu/drm/xe/xe_gt_pagefault.c | 72 ++++++++++++++++++++++------
 drivers/gpu/drm/xe/xe_gt_types.h     |  9 +++-
 drivers/gpu/drm/xe/xe_guc_ct.c       | 12 ++++-
 3 files changed, 75 insertions(+), 18 deletions(-)

-- 
2.34.1


Thread overview: 14+ messages
2024-08-17  2:47 Stuart Summers [this message]
2024-08-17  2:47 ` [CI 1/3] drm/xe: Fix missing workqueue destroy in xe_gt_pagefault Stuart Summers
2024-08-17  2:47 ` [CI 2/3] drm/xe: Use topology to determine page fault queue size Stuart Summers
2024-08-17  2:47 ` [CI 3/3] drm/xe/guc: Bump the G2H queue size to account for page faults Stuart Summers
2024-08-17  2:53 ` ✓ CI.Patch_applied: success for Update page fault queue size calculation (rev6) Patchwork
2024-08-17  2:53 ` ✓ CI.checkpatch: " Patchwork
2024-08-17  2:54 ` ✓ CI.KUnit: " Patchwork
2024-08-17  3:06 ` ✓ CI.Build: " Patchwork
2024-08-17  3:08 ` ✗ CI.Hooks: failure " Patchwork
2024-08-17  3:09 ` ✓ CI.checksparse: success " Patchwork
2024-08-17  3:54 ` ✓ CI.BAT: " Patchwork
2024-08-17 10:03 ` ✗ CI.FULL: failure " Patchwork
2024-08-19 17:21   ` Summers, Stuart
  -- strict thread matches above, loose matches on Subject: below --
2024-07-29 16:19 [CI 0/3] Update page fault queue size calculation Stuart Summers
