Intel-XE Archive on lore.kernel.org
From: Andrzej Hajda <andrzej.hajda@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: "Lucas De Marchi" <lucas.demarchi@intel.com>,
	"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
	"Matthew Auld" <matthew.auld@intel.com>,
	"Matthew Brost" <matthew.brost@intel.com>
Subject: Re: [PATCH v2] drm/xe: flush gtt before signalling user fence on all engines
Date: Tue, 28 May 2024 13:35:29 +0200	[thread overview]
Message-ID: <03a44fa0-c187-4c8a-95c7-1dd6e1f14eab@intel.com> (raw)
In-Reply-To: <20240522-xu_flush_vcs_before_ufence-v2-1-9ac3e9af0323@intel.com>

On 22.05.2024 09:27, Andrzej Hajda wrote:
> Tests show that user fence signalling requires a kind of write barrier,
> otherwise not all writes performed by the workload will be visible
> to userspace. This is already done for render and compute; we need it
> also for the rest: video, gsc and copy.
> 
> v2: added gsc and copy engines, added fixes and r-b tags
> 
> Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1488
> Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
> Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com>
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---

Gentle ping.
The patch is reviewed and just needs merging :)

Regards
Andrzej

> Changes in v2:
> - Added fixes and r-b tags
> - Link to v1: https://lore.kernel.org/r/20240521-xu_flush_vcs_before_ufence-v1-1-ded38b56c8c9@intel.com
> ---
> Matthew,
> 
> I have extended the patch to the copy and gsc engines. I have kept your
> r-b since the change is similar; I hope that is OK.
> ---
>   drivers/gpu/drm/xe/xe_ring_ops.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
> index a3ca718456f6..a46a1257a24f 100644
> --- a/drivers/gpu/drm/xe/xe_ring_ops.c
> +++ b/drivers/gpu/drm/xe/xe_ring_ops.c
> @@ -234,13 +234,13 @@ static void __emit_job_gen12_simple(struct xe_sched_job *job, struct xe_lrc *lrc
>   
>   	i = emit_bb_start(batch_addr, ppgtt_flag, dw, i);
>   
> +	i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, false, dw, i);
> +
>   	if (job->user_fence.used)
>   		i = emit_store_imm_ppgtt_posted(job->user_fence.addr,
>   						job->user_fence.value,
>   						dw, i);
>   
> -	i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, false, dw, i);
> -
>   	i = emit_user_interrupt(dw, i);
>   
>   	xe_gt_assert(gt, i <= MAX_JOB_SIZE_DW);
> @@ -293,13 +293,13 @@ static void __emit_job_gen12_video(struct xe_sched_job *job, struct xe_lrc *lrc,
>   
>   	i = emit_bb_start(batch_addr, ppgtt_flag, dw, i);
>   
> +	i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, false, dw, i);
> +
>   	if (job->user_fence.used)
>   		i = emit_store_imm_ppgtt_posted(job->user_fence.addr,
>   						job->user_fence.value,
>   						dw, i);
>   
> -	i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, false, dw, i);
> -
>   	i = emit_user_interrupt(dw, i);
>   
>   	xe_gt_assert(gt, i <= MAX_JOB_SIZE_DW);
> 
> ---
> base-commit: 188ced1e0ff892f0948f20480e2e0122380ae46d
> change-id: 20240521-xu_flush_vcs_before_ufence-a7b45d94cf33
> 
> Best regards,


  parent reply	other threads:[~2024-05-28 11:35 UTC|newest]

Thread overview: 32+ messages
2024-05-22  7:27 [PATCH v2] drm/xe: flush gtt before signalling user fence on all engines Andrzej Hajda
2024-05-22  7:33 ` ✓ CI.Patch_applied: success for " Patchwork
2024-05-22  7:33 ` ✓ CI.checkpatch: " Patchwork
2024-05-22  7:34 ` ✓ CI.KUnit: " Patchwork
2024-05-22  7:46 ` ✓ CI.Build: " Patchwork
2024-05-22  7:48 ` ✓ CI.Hooks: " Patchwork
2024-05-22  7:50 ` ✓ CI.checksparse: " Patchwork
2024-05-22  8:40 ` ✗ CI.BAT: failure " Patchwork
2024-05-24 14:22   ` ✗ " Andrzej Hajda
2024-05-22 10:34 ` ✗ CI.FULL: " Patchwork
2024-05-24 14:30   ` ✗ " Andrzej Hajda
2024-05-24 14:33 ` ✓ CI.Patch_applied: success for drm/xe: flush gtt before signalling user fence on all engines (rev2) Patchwork
2024-05-24 14:33 ` ✓ CI.checkpatch: " Patchwork
2024-05-24 14:34 ` ✓ CI.KUnit: " Patchwork
2024-05-24 14:45 ` ✓ CI.Build: " Patchwork
2024-05-24 14:48 ` ✓ CI.Hooks: " Patchwork
2024-05-24 14:49 ` ✓ CI.checksparse: " Patchwork
2024-05-24 15:17 ` ✓ CI.BAT: " Patchwork
2024-05-27 13:31 ` ✓ CI.FULL: " Patchwork
2024-05-28 11:35 ` Andrzej Hajda [this message]
2024-05-28 12:41   ` [PATCH v2] drm/xe: flush gtt before signalling user fence on all engines Nirmoy Das
2024-05-30 11:17 ` Thomas Hellström
2024-05-30 20:45   ` Matthew Brost
2024-06-03  7:35     ` Thomas Hellström
2024-06-03  8:11       ` Andrzej Hajda
2024-06-03  8:47         ` Thomas Hellström
2024-06-03  9:31           ` Thomas Hellström
2024-06-03 10:19             ` Andrzej Hajda
2024-06-03 11:59               ` Thomas Hellström
2024-06-03 17:42           ` Matthew Brost
2024-06-03 20:35             ` Thomas Hellström
2024-06-03 22:28               ` Matthew Brost
