From: "Lis, Tomasz" <tomasz.lis@intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v3 29/36] drm/xe: Move queue init before LRC creation
Date: Thu, 2 Oct 2025 16:54:01 +0200
Message-ID: <7c80681f-ae86-43f8-a128-f7a63e03a79b@intel.com>
In-Reply-To: <aN4rl7zRKf5ex/fF@lstrano-desk.jf.intel.com>
On 10/2/2025 9:36 AM, Matthew Brost wrote:
> On Thu, Oct 02, 2025 at 02:44:47AM +0200, Lis, Tomasz wrote:
>> On 9/29/2025 4:55 AM, Matthew Brost wrote:
>>> A queue must be in the submission backend's tracking state before the
>>> LRC is created to avoid a race condition where the LRC's GGTT addresses
>>> are not properly fixed up during VF post-migration recovery.
>>>
>>> Move the queue initialization—which adds the queue to the submission
>>> backend's tracking state—before LRC creation.
>>>
>>> v2:
>>> - Wait on VF GGTT fixes before creating LRC (testing)
>>>
>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>> ---
>>> drivers/gpu/drm/xe/xe_exec_queue.c | 43 +++++++++++++++++------
>>> drivers/gpu/drm/xe/xe_execlist.c | 2 +-
>>> drivers/gpu/drm/xe/xe_gt_sriov_vf.c | 39 +++++++++++++++++++-
>>> drivers/gpu/drm/xe/xe_gt_sriov_vf.h | 2 ++
>>> drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h | 5 +++
>>> drivers/gpu/drm/xe/xe_guc_submit.c | 2 +-
>>> drivers/gpu/drm/xe/xe_lrc.h | 10 ++++++
>>> 7 files changed, 90 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
>>> index 81f707d2c388..3db8e64d9d13 100644
>>> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
>>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
>>> @@ -15,6 +15,7 @@
>>> #include "xe_dep_scheduler.h"
>>> #include "xe_device.h"
>>> #include "xe_gt.h"
>>> +#include "xe_gt_sriov_vf.h"
>>> #include "xe_hw_engine_class_sysfs.h"
>>> #include "xe_hw_engine_group.h"
>>> #include "xe_hw_fence.h"
>>> @@ -179,17 +180,32 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
>>> flags |= XE_LRC_CREATE_RUNALONE;
>>> }
>>> + err = q->ops->init(q);
>>> + if (err)
>>> + return err;
>>> +
>>> + /*
>>> + * This must occur after q->ops->init to avoid race conditions during VF
>>> + * post-migration recovery, as the fixups for the LRC GGTT addresses
>>> + * depend on the queue being present in the backend tracking structure.
>>> + *
>>> + * In addition to above, we must wait on inflight GGTT changes to
>>> + * avoid writing out stale values here.
>>> + */
>>> + xe_gt_sriov_vf_wait_valid_ggtt(q->gt);
>> So to avoid locks, we rely on the VF knowing it got migrated from the first
>> moment after the vCPU starts.
>>
>> On `qemu`, we do have it this way - when the vCPU starts, the 'MIGRATED'
>> memirq is already filled.
>>
>> But what about other VM managers? What about future support of platforms
>> without memirq?
>>
>> I don't think any specification guarantees that the information that we
>> got migrated is available from the first vCPU cycle.
>>
> It is guaranteed by the design of Xe.
You mean by the PF part? Because generally, Xe cannot make guarantees
about how a VMM works.
When state is restored, one of the chunks is the GuC state. After the GuC
state is restored, the GuC is expected to send the MIGRATED irq to the VM.
It sends the interrupt around the same time it answers to the PF that the
state restore completed. The vCPU is not started at that point. When it
finally starts, on qemu we see the interrupt set from the start. And this
should always be the case for memory-based IRQs, because the interrupt
data is stored in a VRAM buffer. However, in general, this depends on how
qemu implements the interrupt model. I wonder if there are situations
where that would become a problem.
>
>>> for (i = 0; i < q->width; ++i) {
>>> - q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K, q->msix_vec, flags);
>>> - if (IS_ERR(q->lrc[i])) {
>>> - err = PTR_ERR(q->lrc[i]);
>>> + struct xe_lrc *lrc;
>>> +
>>> + lrc = xe_lrc_create(q->hwe, q->vm, xe_lrc_ring_size(),
>>> + q->msix_vec, flags);
>> If migration happened at this point, it would still be possible to create
>> a context with wrong GGTT references in the one LRC which was already
>> filled but not yet integrated into the queue.
>>
>> I don't think we can avoid races without a lock.
>>
>>
> There might be a small race here, let me think about this. I will say
> this change fixes xe_exec_threads --r threads-many-queues, though. Locking
> is definitely not the way to solve this: reclaim rules are in play here,
> which make locking difficult, and convoluted cross-layer locks will
> always get nacked by myself and others.
Ok, if you can find a lockless solution again, that would be beneficial.
-Tomasz
>
> Matt
>
>>> + if (IS_ERR(lrc)) {
>>> + err = PTR_ERR(lrc);
>>> goto err_lrc;
>>> }
>>> - }
>>> - err = q->ops->init(q);
>>> - if (err)
>>> - goto err_lrc;
>>> + /* Pairs with READ_ONCE to xe_exec_queue_contexts_hwsp_rebase */
>>> + WRITE_ONCE(q->lrc[i], lrc);
>>> + }
>>> return 0;
>>> @@ -1095,9 +1111,16 @@ int xe_exec_queue_contexts_hwsp_rebase(struct xe_exec_queue *q, void *scratch)
>>> int err = 0;
>>> for (i = 0; i < q->width; ++i) {
>>> - xe_lrc_update_memirq_regs_with_address(q->lrc[i], q->hwe, scratch);
>>> - xe_lrc_update_hwctx_regs_with_address(q->lrc[i]);
>>> - err = xe_lrc_setup_wa_bb_with_scratch(q->lrc[i], q->hwe, scratch);
>>> + struct xe_lrc *lrc;
>>> +
>>> + /* Pairs with WRITE_ONCE in __xe_exec_queue_init */
>>> + lrc = READ_ONCE(q->lrc[i]);
>>> + if (!lrc)
>>> + continue;
>>> +
>>> + xe_lrc_update_memirq_regs_with_address(lrc, q->hwe, scratch);
>>> + xe_lrc_update_hwctx_regs_with_address(lrc);
>>> + err = xe_lrc_setup_wa_bb_with_scratch(lrc, q->hwe, scratch);
>>> if (err)
>>> break;
>>> }
>>> diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
>>> index f83d421ac9d3..769d05517f93 100644
>>> --- a/drivers/gpu/drm/xe/xe_execlist.c
>>> +++ b/drivers/gpu/drm/xe/xe_execlist.c
>>> @@ -339,7 +339,7 @@ static int execlist_exec_queue_init(struct xe_exec_queue *q)
>>> const struct drm_sched_init_args args = {
>>> .ops = &drm_sched_ops,
>>> .num_rqs = 1,
>>> - .credit_limit = q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES,
>>> + .credit_limit = xe_lrc_ring_size() / MAX_JOB_SIZE_BYTES,
>>> .hang_limit = XE_SCHED_HANG_LIMIT,
>>> .timeout = XE_SCHED_JOB_TIMEOUT,
>>> .name = q->hwe->name,
>>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
>>> index 0d94867dce8e..42f9fd43b436 100644
>>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
>>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
>>> @@ -482,6 +482,11 @@ static int vf_get_ggtt_info(struct xe_gt *gt, bool recovery)
>>> shift, config->ggtt_base);
>>> xe_tile_sriov_vf_fixup_ggtt_nodes(gt_to_tile(gt), shift);
>>> }
>>> +
>>> + WRITE_ONCE(gt->sriov.vf.migration.ggtt_need_fixes, false);
>>> + smp_wmb(); /* Ensure above write visible before wake */
>>> + wake_up_all(&gt->sriov.vf.migration.wq);
>>> +
>>> out:
>>> up_write(config->lock);
>>> return err;
>>> @@ -820,7 +825,8 @@ static void vf_start_migration_recovery(struct xe_gt *gt)
>>> !gt->sriov.vf.migration.recovery_teardown) {
>>> gt->sriov.vf.migration.recovery_queued = true;
>>> WRITE_ONCE(gt->sriov.vf.migration.recovery_inprogress, true);
>>> - smp_wmb(); /* Ensure above write visable before wake */
>>> + WRITE_ONCE(gt->sriov.vf.migration.ggtt_need_fixes, true);
>>> + smp_wmb(); /* Ensure above writes visable before wake */
>> typo in patch "Wakeup in GuC backend on VF post migration recovery"
>>> wake_up_all(&gt->uc.guc.ct.wq);
>>> @@ -1344,6 +1350,7 @@ int xe_gt_sriov_vf_init_early(struct xe_gt *gt)
>>> &tile->primary_gt->sriov.vf.self_config.__lock;
>>> spin_lock_init(&gt->sriov.vf.migration.lock);
>>> INIT_WORK(&gt->sriov.vf.migration.worker, migration_worker_func);
>>> + init_waitqueue_head(&gt->sriov.vf.migration.wq);
>>> return 0;
>>> }
>>> @@ -1387,3 +1394,33 @@ bool xe_gt_sriov_vf_recovery_inprogress(struct xe_gt *gt)
>>> return (xe_memirq_sw_int_0_irq_pending(memirq, &gt->uc.guc) ||
>>> READ_ONCE(gt->sriov.vf.migration.recovery_inprogress));
>>> }
>>> +
>>> +static bool vf_valid_ggtt(struct xe_gt *gt)
>>> +{
>>> + struct xe_memirq *memirq = &gt_to_tile(gt)->memirq;
>>> +
>>> + xe_gt_assert(gt, IS_SRIOV_VF(gt_to_xe(gt)));
>>> +
>>> + if (xe_memirq_sw_int_0_irq_pending(memirq, &gt->uc.guc) ||
>>> + READ_ONCE(gt->sriov.vf.migration.ggtt_need_fixes))
>>> + return false;
>>> +
>>> + return true;
>>> +}
>>> +
>>> +/**
>>> + * xe_gt_sriov_vf_wait_valid_ggtt() - VF wait for valid GGTT addresses
>>> + * @gt: the &xe_gt
>>> + */
>>> +void xe_gt_sriov_vf_wait_valid_ggtt(struct xe_gt *gt)
>>> +{
>>> + int ret;
>>> +
>>> + if (!IS_SRIOV_VF(gt_to_xe(gt)))
>>> + return;
>>> +
>>> + ret = wait_event_interruptible_timeout(gt->sriov.vf.migration.wq,
>>> + vf_valid_ggtt(gt),
>>> + HZ * 5);
>>> + XE_WARN_ON(!ret);
>>> +}
>>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
>>> index 71e1d566da81..20cc0c4c32e3 100644
>>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
>>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
>>> @@ -40,4 +40,6 @@ void xe_gt_sriov_vf_print_config(struct xe_gt *gt, struct drm_printer *p);
>>> void xe_gt_sriov_vf_print_runtime(struct xe_gt *gt, struct drm_printer *p);
>>> void xe_gt_sriov_vf_print_version(struct xe_gt *gt, struct drm_printer *p);
>>> +void xe_gt_sriov_vf_wait_valid_ggtt(struct xe_gt *gt);
>>> +
>>> #endif
>>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h b/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
>>> index e135018cba1e..3c3e415199d1 100644
>>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
>>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
>>> @@ -8,6 +8,7 @@
>>> #include <linux/rwsem.h>
>>> #include <linux/types.h>
>>> +#include <linux/wait.h>
>>> #include <linux/workqueue.h>
>>> #include "xe_uc_fw_types.h"
>>> @@ -61,6 +62,8 @@ struct xe_gt_sriov_vf_migration {
>>> struct work_struct worker;
>>> /** @lock: Protects recovery_queued, teardown */
>>> spinlock_t lock;
>>> + /** @wq: wait queue for migration fixes */
>>> + wait_queue_head_t wq;
>>> /** @scratch: Scratch memory for VF recovery */
>>> void *scratch;
>>> /** @recovery_teardown: VF post migration recovery is being torn down */
>>> @@ -69,6 +72,8 @@ struct xe_gt_sriov_vf_migration {
>>> bool recovery_queued;
>>> /** @recovery_inprogress: VF post migration recovery in progress */
>>> bool recovery_inprogress;
>>> + /** @ggtt_need_fixes: VF GGTT needs fixes */
>>> + bool ggtt_need_fixes;
>>> };
>>> /**
>>> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
>>> index 497a736c23c3..7fe3fb07e35e 100644
>>> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
>>> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
>>> @@ -1943,7 +1943,7 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
>>> timeout = (q->vm && xe_vm_in_lr_mode(q->vm)) ? MAX_SCHEDULE_TIMEOUT :
>>> msecs_to_jiffies(q->sched_props.job_timeout_ms);
>>> err = xe_sched_init(&ge->sched, &drm_sched_ops, &xe_sched_ops,
>>> - NULL, q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES, 64,
>>> + NULL, xe_lrc_ring_size() / MAX_JOB_SIZE_BYTES, 64,
>>> timeout, guc_to_gt(guc)->ordered_wq, NULL,
>>> q->name, gt_to_xe(q->gt)->drm.dev);
>>> if (err)
>>> diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
>>> index 188565465779..5fb6c74bdab5 100644
>>> --- a/drivers/gpu/drm/xe/xe_lrc.h
>>> +++ b/drivers/gpu/drm/xe/xe_lrc.h
>>> @@ -74,6 +74,16 @@ static inline void xe_lrc_put(struct xe_lrc *lrc)
>>> kref_put(&lrc->refcount, xe_lrc_destroy);
>>> }
>>> +/**
>>> + * xe_lrc_ring_size() - Xe LRC ring size
>>> + *
>>> + * Return: Size of LRC size
>> Size of LRC ring buffer
>> -Tomasz
>>
>>> + */
>>> +static inline size_t xe_lrc_ring_size(void)
>>> +{
>>> + return SZ_16K;
>>> +}
>>> +
>>> size_t xe_gt_lrc_size(struct xe_gt *gt, enum xe_engine_class class);
>>> u32 xe_lrc_pphwsp_offset(struct xe_lrc *lrc);
>>> u32 xe_lrc_regs_offset(struct xe_lrc *lrc);