public inbox for intel-xe@lists.freedesktop.org
From: "Summers, Stuart" <stuart.summers@intel.com>
To: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"Brost,  Matthew" <matthew.brost@intel.com>
Cc: "Ghimiray, Himal Prasad" <himal.prasad.ghimiray@intel.com>,
	"Yadav, Arvind" <arvind.yadav@intel.com>,
	"thomas.hellstrom@linux.intel.com"
	<thomas.hellstrom@linux.intel.com>,
	"Dugast, Francois" <francois.dugast@intel.com>
Subject: Re: [PATCH v3 20/25] drm/xe: Add ULLS migration job support to migration layer
Date: Thu, 5 Mar 2026 23:34:36 +0000	[thread overview]
Message-ID: <7be318280fc180267ce14a299de7315cb237137a.camel@intel.com> (raw)
In-Reply-To: <20260228013501.106680-21-matthew.brost@intel.com>

On Fri, 2026-02-27 at 17:34 -0800, Matthew Brost wrote:
> Add a function to enter ULLS mode for migration jobs, and a delayed
> worker to exit it (power saving). ULLS mode is expected to be entered
> upon a page fault or SVM prefetch. The ULLS mode exit delay is
> currently set to 20ms (HZ / 50).
> 
> ULLS mode is only supported on DGFX and USM platforms, where a
> hardware engine is reserved for migration jobs. When in ULLS mode,
> several flags are set on migration jobs so the submission backend /
> ring ops can properly submit in ULLS mode.
> 
> Upon ULLS mode enter, send a job that triggers waiting on a semaphore,
> pipelining the initial GuC / HW context switch.
> 
> Upon ULLS mode exit, send a job that triggers the current ULLS
> semaphore so the ring can be taken off the hardware.

Assuming we do go down the ULLS-in-the-KMD route, can you add a little
documentation for how this is being managed? Just in terms of how the
KMD interacts with the GuC and HW to manage this, how you might
configure it, etc. Not specific to this patch, but maybe more for the
ULLS portion of the series generally...

> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_exec_queue.c      |   5 +-
>  drivers/gpu/drm/xe/xe_exec_queue.h      |   4 +-
>  drivers/gpu/drm/xe/xe_migrate.c         | 180 ++++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_migrate.h         |   2 +
>  drivers/gpu/drm/xe/xe_pt.c              |   2 +-
>  drivers/gpu/drm/xe/xe_sched_job_types.h |   6 +
>  drivers/gpu/drm/xe/xe_vm.c              |   2 +-
>  7 files changed, 195 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index ee2119cf45c1..4fa99f12c566 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -1348,6 +1348,7 @@ bool xe_exec_queue_is_lr(struct xe_exec_queue *q)
>  /**
>   * xe_exec_queue_is_idle() - Whether an exec_queue is idle.
>   * @q: The exec_queue
> + * @extra_jobs: Number of extra jobs on the queue to discount when
> + * checking idleness
>   *
>   * FIXME: Need to determine what to use as the short-lived
>   * timeline lock for the exec_queues, so that the return value
> @@ -1359,9 +1360,9 @@ bool xe_exec_queue_is_lr(struct xe_exec_queue *q)
>   *
>   * Return: True if the exec_queue is idle, false otherwise.
>   */
> -bool xe_exec_queue_is_idle(struct xe_exec_queue *q)
> +bool xe_exec_queue_is_idle(struct xe_exec_queue *q, int extra_jobs)
>  {
> -       return !atomic_read(&q->job_cnt);
> +       return !(atomic_read(&q->job_cnt) - extra_jobs);
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
> index b5aabab388c1..a11648b62a98 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
> @@ -116,7 +116,7 @@ static inline struct xe_exec_queue *xe_exec_queue_multi_queue_primary(struct xe_
>  
>  bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
>  
> -bool xe_exec_queue_is_idle(struct xe_exec_queue *q);
> +bool xe_exec_queue_is_idle(struct xe_exec_queue *q, int extra_jobs);

Is this extra_jobs bit something coming in a future patch? I might have
missed it, but I'm not seeing any non-zero usage here.

>  
>  void xe_exec_queue_kill(struct xe_exec_queue *q);
>  
> @@ -176,7 +176,7 @@ struct xe_lrc *xe_exec_queue_get_lrc(struct xe_exec_queue *q, u16 idx);
>   */
>  static inline bool xe_exec_queue_idle_skip_suspend(struct xe_exec_queue *q)
>  {
> -       return !xe_exec_queue_is_parallel(q) && xe_exec_queue_is_idle(q);
> +       return !xe_exec_queue_is_parallel(q) && xe_exec_queue_is_idle(q, 0);
>  }
>  
>  #endif
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index c9ee6325ec9d..62f27868f56b 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -8,6 +8,7 @@
>  #include <linux/bitfield.h>
>  #include <linux/sizes.h>
>  
> +#include <drm/drm_drv.h>
>  #include <drm/drm_managed.h>
>  #include <drm/drm_pagemap.h>
>  #include <drm/ttm/ttm_tt.h>
> @@ -23,6 +24,7 @@
>  #include "xe_bb.h"
>  #include "xe_bo.h"
>  #include "xe_exec_queue.h"
> +#include "xe_force_wake.h"
>  #include "xe_ggtt.h"
>  #include "xe_gt.h"
>  #include "xe_gt_printk.h"
> @@ -30,6 +32,7 @@
>  #include "xe_lrc.h"
>  #include "xe_map.h"
>  #include "xe_mocs.h"
> +#include "xe_pm.h"
>  #include "xe_printk.h"
>  #include "xe_pt.h"
>  #include "xe_res_cursor.h"
> @@ -75,6 +78,14 @@ struct xe_migrate {
>         struct dma_fence *fence;
>         /** @min_chunk_size: For dgfx, Minimum chunk size */
>         u64 min_chunk_size;
> +       /** @ulls: ULLS support */
> +       struct {
> +               /** @ulls.enabled: ULLS is enabled */
> +               bool enabled;
> +#define ULLS_EXIT_JIFFIES      (HZ / 50)

It might be nice to make this configurable through sysfs or even
debugfs...

> +               /** @ulls.exit_work: ULLS exit worker */
> +               struct delayed_work exit_work;
> +       } ulls;
>  };
>  
>  #define MAX_PREEMPTDISABLE_TRANSFER SZ_8M /* Around 1ms. */
> @@ -96,6 +107,16 @@ struct xe_migrate {
>  static void xe_migrate_fini(void *arg)
>  {
>         struct xe_migrate *m = arg;
> +       struct xe_device *xe = tile_to_xe(m->tile);
> +
> +       disable_delayed_work_sync(&m->ulls.exit_work);
> +       mutex_lock(&m->job_mutex);
> +       if (m->ulls.enabled) {
> +               xe_force_wake_put(gt_to_fw(m->q->hwe->gt), m->q->hwe->domain);
> +               xe_pm_runtime_put(xe);
> +               m->ulls.enabled = false;
> +       }
> +       mutex_unlock(&m->job_mutex);
>  
>         xe_vm_lock(m->q->vm, false);
>         xe_bo_unpin(m->pt_bo);
> @@ -410,6 +431,140 @@ static int xe_migrate_lock_prepare_vm(struct xe_tile *tile, struct xe_migrate *m
>         return err;
>  }
>  
> +/**
> + * xe_migrate_ulls_enter() - Enter ULLS mode
> + * @m: The migration context.
> + *
> + * If DGFX and not a VF, enter ULLS mode bypassing GuC / HW context
> + * switches by utilizing semaphore and continuously running batches.
> + */
> +void xe_migrate_ulls_enter(struct xe_migrate *m)
> +{
> +       struct xe_device *xe = tile_to_xe(m->tile);
> +       struct xe_sched_job *job = NULL;
> +       u64 batch_addr[2] = { 0, 0 };
> +       bool alloc = false;
> +
> +       xe_assert(xe, xe->info.has_usm);
> +
> +       if (!IS_DGFX(xe) || IS_SRIOV_VF(xe))
> +               return;
> +
> +job_alloc:
> +       if (alloc) {
> +               /*
> +                * Must be done outside job_mutex as that lock is tainted with
> +                * reclaim.

Where is the reclaim happening for this? It seems ugly jumping back and
forth like this to avoid the lock.

> +                */
> +               job = xe_sched_job_create(m->q, batch_addr);
> +               if (WARN_ON_ONCE(IS_ERR(job)))
> +                       return;         /* Not fatal */
> +       }
> +
> +       mutex_lock(&m->job_mutex);
> +       if (!m->ulls.enabled) {
> +               unsigned int fw_ref;
> +
> +               if (!job) {
> +                       alloc = true;
> +                       mutex_unlock(&m->job_mutex);
> +                       goto job_alloc;

Why are you jumping through this alloc/!job hoop here? Can we just do
this in one place instead of jumping back and forth?

> +               }
> +
> +               /* Pairs with FW put on ULLS exit */
> +               fw_ref = xe_force_wake_get(gt_to_fw(m->q->hwe->gt),
> +                                          m->q->hwe->domain);
> +               if (fw_ref) {
> +                       struct xe_device *xe = tile_to_xe(m->tile);
> +                       struct dma_fence *fence;
> +
> +                       /* Pairs with PM put on ULLS exit */
> +                       xe_pm_runtime_get_noresume(xe);
> +
> +                       xe_sched_job_get(job);
> +                       xe_sched_job_arm(job);
> +                       job->is_ulls = true;
> +                       job->is_ulls_first = true;
> +                       fence = dma_fence_get(&job->drm.s_fence->finished);
> +                       xe_sched_job_push(job);
> +
> +                       dma_fence_put(fence);
> +
> +                       xe_dbg(xe, "Migrate ULLS mode enter");
> +                       m->ulls.enabled = true;
> +               }
> +       }
> +       if (job)
> +               xe_sched_job_put(job);
> +       if (m->ulls.enabled)
> +               mod_delayed_work(system_percpu_wq, &m->ulls.exit_work,
> +                                ULLS_EXIT_JIFFIES);
> +       mutex_unlock(&m->job_mutex);
> +}
> +
> +static void xe_migrate_ulls_exit(struct work_struct *work)
> +{
> +       struct xe_migrate *m = container_of(work, struct xe_migrate,
> +                                           ulls.exit_work.work);
> +       struct xe_device *xe = tile_to_xe(m->tile);
> +       struct xe_sched_job *job = NULL;
> +       struct dma_fence *fence;
> +       u64 batch_addr[2] = { 0, 0 };
> +       int idx;
> +
> +       xe_assert(xe, m->ulls.enabled);
> +
> +       if (!drm_dev_enter(&xe->drm, &idx))
> +               return;
> +
> +       /*
> +        * Must be done outside job_mutex as that lock is tainted with
> +        * reclaim and must be done holding a pm ref.
> +        */
> +       job = xe_sched_job_create(m->q, batch_addr);
> +       if (WARN_ON_ONCE(IS_ERR(job))) {
> +               drm_dev_exit(idx);
> +               mod_delayed_work(system_percpu_wq, &m->ulls.exit_work,
> +                                ULLS_EXIT_JIFFIES);
> +               return;         /* Not fatal */
> +       }
> +
> +       mutex_lock(&m->job_mutex);
> +
> +       if (!xe_exec_queue_is_idle(m->q, 1))
> +               goto unlock_exit;
> +
> +       xe_sched_job_get(job);
> +       xe_sched_job_arm(job);
> +       job->is_ulls = true;
> +       job->is_ulls_last = true;
> +       fence = dma_fence_get(&job->drm.s_fence->finished);
> +       xe_sched_job_push(job);
> +
> +       /* Serialize force wake put */
> +       dma_fence_wait(fence, false);
> +       dma_fence_put(fence);
> +
> +       m->ulls.enabled = false;
> +unlock_exit:
> +       if (job)
> +               xe_sched_job_put(job);
> +       if (!m->ulls.enabled) {
> +               /* Pairs with PM gets on enter */
> +               xe_force_wake_put(gt_to_fw(m->q->hwe->gt), m->q->hwe->domain);
> +               xe_pm_runtime_put(xe);

Maybe reverse these to match the gets above.

> +
> +               cancel_delayed_work(&m->ulls.exit_work);
> +               xe_dbg(xe, "Migrate ULLS mode exit");
> +       } else {
> +               mod_delayed_work(system_percpu_wq, &m->ulls.exit_work,
> +                                ULLS_EXIT_JIFFIES);
> +       }
> +
> +       drm_dev_exit(idx);
> +       mutex_unlock(&m->job_mutex);
> +}
> +
>  /**
>   * xe_migrate_init() - Initialize a migrate context
>   * @m: The migration context
> @@ -473,6 +628,8 @@ int xe_migrate_init(struct xe_migrate *m)
>         might_lock(&m->job_mutex);
>         fs_reclaim_release(GFP_KERNEL);
>  
> +       INIT_DELAYED_WORK(&m->ulls.exit_work, xe_migrate_ulls_exit);
> +
>         err = devm_add_action_or_reset(xe->drm.dev, xe_migrate_fini, m);
>         if (err)
>                 return err;
> @@ -818,6 +975,26 @@ static u32 xe_migrate_ccs_copy(struct xe_migrate *m,
>         return flush_flags;
>  }
>  
> +static bool xe_migrate_is_ulls(struct xe_migrate *m)
> +{
> +       lockdep_assert_held(&m->job_mutex);
> +
> +       return m->ulls.enabled;
> +}
> +
> +static void xe_migrate_job_set_ulls_flags(struct xe_migrate *m,
> +                                         struct xe_sched_job *job)
> +{
> +       lockdep_assert_held(&m->job_mutex);
> +       xe_tile_assert(m->tile, m->q == job->q);

Nit: Should we have a helper here like you have for the bind queue?

> +
> +       if (xe_migrate_is_ulls(m)) {
> +               job->is_ulls = true;
> +               mod_delayed_work(system_percpu_wq, &m->ulls.exit_work,
> +                                ULLS_EXIT_JIFFIES);
> +       }
> +}
> +
>  /**
>   * xe_migrate_copy() - Copy content of TTM resources.
>   * @m: The migration context.
> @@ -992,6 +1169,7 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
>  
>                 mutex_lock(&m->job_mutex);
>                 xe_sched_job_arm(job);
> +               xe_migrate_job_set_ulls_flags(m, job);
>                 dma_fence_put(fence);
>                 fence = dma_fence_get(&job->drm.s_fence->finished);
>                 xe_sched_job_push(job);
> @@ -1602,6 +1780,7 @@ struct dma_fence *xe_migrate_clear(struct xe_migrate *m,
>  
>                 mutex_lock(&m->job_mutex);
>                 xe_sched_job_arm(job);
> +               xe_migrate_job_set_ulls_flags(m, job);
>                 dma_fence_put(fence);
>                 fence = dma_fence_get(&job->drm.s_fence->finished);
>                 xe_sched_job_push(job);
> @@ -1881,6 +2060,7 @@ static struct dma_fence *xe_migrate_vram(struct xe_migrate *m,
>  
>         mutex_lock(&m->job_mutex);
>         xe_sched_job_arm(job);
> +       xe_migrate_job_set_ulls_flags(m, job);
>         fence = dma_fence_get(&job->drm.s_fence->finished);
>         xe_sched_job_push(job);
>  
> diff --git a/drivers/gpu/drm/xe/xe_migrate.h b/drivers/gpu/drm/xe/xe_migrate.h
> index f6fa23c6c4fb..71606fb4fad0 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.h
> +++ b/drivers/gpu/drm/xe/xe_migrate.h
> @@ -85,4 +85,6 @@ struct xe_vm *xe_migrate_get_vm(struct xe_migrate *m);
>  
>  void xe_migrate_wait(struct xe_migrate *m);
>  
> +void xe_migrate_ulls_enter(struct xe_migrate *m);
> +
>  #endif
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index ef34fbfc14f0..2c0f9a99d7a9 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -1317,7 +1317,7 @@ static int xe_pt_vm_dependencies(struct xe_sched_job *job,
>         if (!job && !no_in_syncs(vops->syncs, vops->num_syncs))
>                 return -ETIME;
>  
> -       if (!job && !xe_exec_queue_is_idle(vops->q))
> +       if (!job && !xe_exec_queue_is_idle(vops->q, 0))
>                 return -ETIME;
>  
>         if (vops->flags & (XE_VMA_OPS_FLAG_WAIT_VM_BOOKKEEP |
> diff --git a/drivers/gpu/drm/xe/xe_sched_job_types.h b/drivers/gpu/drm/xe/xe_sched_job_types.h
> index 3a797de746ad..fe2d2ee12efc 100644
> --- a/drivers/gpu/drm/xe/xe_sched_job_types.h
> +++ b/drivers/gpu/drm/xe/xe_sched_job_types.h
> @@ -89,6 +89,12 @@ struct xe_sched_job {
>         bool last_replay;
>         /** @is_pt_job: is a PT job */
>         bool is_pt_job;
> +       /** @is_ulls: is ULLS job */
> +       bool is_ulls;
> +       /** @is_ulls_first: is first ULLS job */

I'm not fully understanding this flag. Why do we need to separate it
from is_ulls?

Thanks,
Stuart

> +       bool is_ulls_first;
> +       /** @is_ulls_last: is last ULLS job */
> +       bool is_ulls_last;
>         union {
>                 /** @ptrs: per instance pointers. */
>                 DECLARE_FLEX_ARRAY(struct xe_job_ptrs, ptrs);
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index d4629e953b01..931d46696811 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -146,7 +146,7 @@ static bool xe_vm_is_idle(struct xe_vm *vm)
>  
>         xe_vm_assert_held(vm);
>         list_for_each_entry(q, &vm->preempt.exec_queues, lr.link) {
> -               if (!xe_exec_queue_is_idle(q))
> +               if (!xe_exec_queue_is_idle(q, 0))
>                         return false;
>         }
>  


Thread overview: 63+ messages
2026-02-28  1:34 [PATCH v3 00/25] CPU binds and ULLS on migration queue Matthew Brost
2026-02-28  1:34 ` [PATCH v3 01/25] drm/xe: Drop struct xe_migrate_pt_update argument from populate/clear vfuns Matthew Brost
2026-03-05 14:17   ` Francois Dugast
2026-02-28  1:34 ` [PATCH v3 02/25] drm/xe: Add xe_migrate_update_pgtables_cpu_execute helper Matthew Brost
2026-03-05 14:39   ` Francois Dugast
2026-02-28  1:34 ` [PATCH v3 03/25] drm/xe: Decouple exec queue idle check from LRC Matthew Brost
2026-03-02 20:50   ` Summers, Stuart
2026-03-02 21:02     ` Matthew Brost
2026-03-03 21:26       ` Summers, Stuart
2026-03-03 22:42         ` Matthew Brost
2026-03-03 22:54           ` Summers, Stuart
2026-02-28  1:34 ` [PATCH v3 04/25] drm/xe: Add job count to GuC exec queue snapshot Matthew Brost
2026-03-02 20:50   ` Summers, Stuart
2026-02-28  1:34 ` [PATCH v3 05/25] drm/xe: Update xe_bo_put_deferred arguments to include writeback flag Matthew Brost
2026-04-01 12:20   ` Francois Dugast
2026-04-01 22:39     ` Matthew Brost
2026-02-28  1:34 ` [PATCH v3 06/25] drm/xe: Add XE_BO_FLAG_PUT_VM_ASYNC Matthew Brost
2026-04-01 12:22   ` Francois Dugast
2026-04-01 22:38     ` Matthew Brost
2026-02-28  1:34 ` [PATCH v3 07/25] drm/xe: Update scheduler job layer to support PT jobs Matthew Brost
2026-03-03 22:50   ` Summers, Stuart
2026-03-03 23:00     ` Matthew Brost
2026-02-28  1:34 ` [PATCH v3 08/25] drm/xe: Add helpers to access PT ops Matthew Brost
2026-04-07 15:22   ` Francois Dugast
2026-02-28  1:34 ` [PATCH v3 09/25] drm/xe: Add struct xe_pt_job_ops Matthew Brost
2026-03-03 23:26   ` Summers, Stuart
2026-03-03 23:28     ` Matthew Brost
2026-02-28  1:34 ` [PATCH v3 10/25] drm/xe: Update GuC submission backend to run PT jobs Matthew Brost
2026-03-03 23:28   ` Summers, Stuart
2026-03-04  0:26     ` Matthew Brost
2026-03-04 20:43       ` Summers, Stuart
2026-03-04 21:53         ` Matthew Brost
2026-03-05 20:24           ` Summers, Stuart
2026-02-28  1:34 ` [PATCH v3 11/25] drm/xe: Store level in struct xe_vm_pgtable_update Matthew Brost
2026-03-03 23:44   ` Summers, Stuart
2026-02-28  1:34 ` [PATCH v3 12/25] drm/xe: Don't use migrate exec queue for page fault binds Matthew Brost
2026-02-28  1:34 ` [PATCH v3 13/25] drm/xe: Enable CPU binds for jobs Matthew Brost
2026-02-28  1:34 ` [PATCH v3 14/25] drm/xe: Remove unused arguments from xe_migrate_pt_update_ops Matthew Brost
2026-02-28  1:34 ` [PATCH v3 15/25] drm/xe: Make bind queues operate cross-tile Matthew Brost
2026-02-28  1:34 ` [PATCH v3 16/25] drm/xe: Add CPU bind layer Matthew Brost
2026-02-28  1:34 ` [PATCH v3 17/25] drm/xe: Add device flag to enable PT mirroring across tiles Matthew Brost
2026-02-28  1:34 ` [PATCH v3 18/25] drm/xe: Add xe_hw_engine_write_ring_tail Matthew Brost
2026-02-28  1:34 ` [PATCH v3 19/25] drm/xe: Add ULLS support to LRC Matthew Brost
2026-03-05 20:21   ` Francois Dugast
2026-02-28  1:34 ` [PATCH v3 20/25] drm/xe: Add ULLS migration job support to migration layer Matthew Brost
2026-03-05 23:34   ` Summers, Stuart [this message]
2026-03-09 23:11     ` Matthew Brost
2026-02-28  1:34 ` [PATCH v3 21/25] drm/xe: Add MI_SEMAPHORE_WAIT instruction defs Matthew Brost
2026-02-28  1:34 ` [PATCH v3 22/25] drm/xe: Add ULLS migration job support to ring ops Matthew Brost
2026-02-28  1:34 ` [PATCH v3 23/25] drm/xe: Add ULLS migration job support to GuC submission Matthew Brost
2026-02-28  1:35 ` [PATCH v3 24/25] drm/xe: Enter ULLS for migration jobs upon page fault or SVM prefetch Matthew Brost
2026-02-28  1:35 ` [PATCH v3 25/25] drm/xe: Add modparam to enable / disable ULLS on migrate queue Matthew Brost
2026-03-05 22:59   ` Summers, Stuart
2026-04-01 22:44     ` Matthew Brost
2026-02-28  1:43 ` ✗ CI.checkpatch: warning for CPU binds and ULLS on migration queue (rev3) Patchwork
2026-02-28  1:44 ` ✓ CI.KUnit: success " Patchwork
2026-02-28  2:32 ` ✓ Xe.CI.BAT: " Patchwork
2026-02-28 13:59 ` ✗ Xe.CI.FULL: failure " Patchwork
2026-03-02 17:54   ` Summers, Stuart
2026-03-02 18:13     ` Matthew Brost
2026-03-05 22:56 ` [PATCH v3 00/25] CPU binds and ULLS on migration queue Summers, Stuart
2026-03-10 22:17   ` Matthew Brost
2026-03-20 15:31 ` Thomas Hellström
