* [PATCH] drm/sched: Use struct for drm_sched_init() params
@ 2025-01-22 14:08 Philipp Stanner
2025-01-22 14:30 ` Danilo Krummrich
` (12 more replies)
0 siblings, 13 replies; 35+ messages in thread
From: Philipp Stanner @ 2025-01-22 14:08 UTC (permalink / raw)
To: Alex Deucher, Christian König, Xinhui Pan, David Airlie,
Simona Vetter, Lucas Stach, Russell King, Christian Gmeiner,
Frank Binns, Matt Coster, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, Karol Herbst,
Lyude Paul, Danilo Krummrich, Boris Brezillon, Rob Herring,
Steven Price, Liviu Dudau, Luben Tuikov, Matthew Brost,
Philipp Stanner, Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li
Cc: amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe, Philipp Stanner
drm_sched_init() has a great many parameters and upcoming new
functionality for the scheduler might add even more. Generally, the
great number of parameters reduces readability and has already caused
one misnaming in:
commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in nouveau_sched_init()").
Introduce a new struct for the scheduler init parameters and port all
users.
Signed-off-by: Philipp Stanner <phasta@kernel.org>
---
Howdy,
I have a patch-series in the pipe that will add a `flags` argument to
drm_sched_init(). I thought it would be wise to first rework the API as
detailed in this patch. It's really a lot of parameters by now, and I
would expect that it might get more and more over the years for special
use cases etc.
Regards,
P.
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +++-
drivers/gpu/drm/etnaviv/etnaviv_sched.c | 20 ++-
drivers/gpu/drm/imagination/pvr_queue.c | 21 +++-
drivers/gpu/drm/lima/lima_sched.c | 21 +++-
drivers/gpu/drm/msm/msm_ringbuffer.c | 22 ++--
drivers/gpu/drm/nouveau/nouveau_sched.c | 20 ++-
drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++--
drivers/gpu/drm/panthor/panthor_mmu.c | 18 ++-
drivers/gpu/drm/panthor/panthor_sched.c | 23 ++--
drivers/gpu/drm/scheduler/sched_main.c | 53 +++-----
drivers/gpu/drm/v3d/v3d_sched.c | 135 +++++++++++++++------
drivers/gpu/drm/xe/xe_execlist.c | 20 ++-
drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++-
include/drm/gpu_scheduler.h | 35 +++++-
14 files changed, 311 insertions(+), 139 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index cd4fac120834..c1f03eb5f5ea 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2821,6 +2821,9 @@ static int amdgpu_device_init_schedulers(struct amdgpu_device *adev)
{
long timeout;
int r, i;
+ struct drm_sched_init_params params;
+
+ memset(&params, 0, sizeof(struct drm_sched_init_params));
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
struct amdgpu_ring *ring = adev->rings[i];
@@ -2844,12 +2847,18 @@ static int amdgpu_device_init_schedulers(struct amdgpu_device *adev)
break;
}
- r = drm_sched_init(&ring->sched, &amdgpu_sched_ops, NULL,
- DRM_SCHED_PRIORITY_COUNT,
- ring->num_hw_submission, 0,
- timeout, adev->reset_domain->wq,
- ring->sched_score, ring->name,
- adev->dev);
+ params.ops = &amdgpu_sched_ops;
+ params.submit_wq = NULL; /* NULL: an ordered wq is allocated. */
+ params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
+ params.credit_limit = ring->num_hw_submission;
+ params.hang_limit = 0;
+ params.timeout = timeout;
+ params.timeout_wq = adev->reset_domain->wq;
+ params.score = ring->sched_score;
+ params.name = ring->name;
+ params.dev = adev->dev;
+
+ r = drm_sched_init(&ring->sched, &params);
if (r) {
DRM_ERROR("Failed to create scheduler on ring %s.\n",
ring->name);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index 5b67eda122db..7d8517f1963e 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -145,12 +145,22 @@ int etnaviv_sched_push_job(struct etnaviv_gem_submit *submit)
int etnaviv_sched_init(struct etnaviv_gpu *gpu)
{
int ret;
+ struct drm_sched_init_params params;
- ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops, NULL,
- DRM_SCHED_PRIORITY_COUNT,
- etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
- msecs_to_jiffies(500), NULL, NULL,
- dev_name(gpu->dev), gpu->dev);
+ memset(&params, 0, sizeof(struct drm_sched_init_params));
+
+ params.ops = &etnaviv_sched_ops;
+ params.submit_wq = NULL; /* NULL: an ordered wq is allocated. */
+ params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
+ params.credit_limit = etnaviv_hw_jobs_limit;
+ params.hang_limit = etnaviv_job_hang_limit;
+ params.timeout = msecs_to_jiffies(500);
+ params.timeout_wq = NULL; /* Use the system_wq. */
+ params.score = NULL;
+ params.name = dev_name(gpu->dev);
+ params.dev = gpu->dev;
+
+ ret = drm_sched_init(&gpu->sched, &params);
if (ret)
return ret;
diff --git a/drivers/gpu/drm/imagination/pvr_queue.c b/drivers/gpu/drm/imagination/pvr_queue.c
index c4f08432882b..03a2ce1a88e7 100644
--- a/drivers/gpu/drm/imagination/pvr_queue.c
+++ b/drivers/gpu/drm/imagination/pvr_queue.c
@@ -1211,10 +1211,13 @@ struct pvr_queue *pvr_queue_create(struct pvr_context *ctx,
};
struct pvr_device *pvr_dev = ctx->pvr_dev;
struct drm_gpu_scheduler *sched;
+ struct drm_sched_init_params sched_params;
struct pvr_queue *queue;
int ctx_state_size, err;
void *cpu_map;
+ memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
+
if (WARN_ON(type >= sizeof(props)))
return ERR_PTR(-EINVAL);
@@ -1282,12 +1285,18 @@ struct pvr_queue *pvr_queue_create(struct pvr_context *ctx,
queue->timeline_ufo.value = cpu_map;
- err = drm_sched_init(&queue->scheduler,
- &pvr_queue_sched_ops,
- pvr_dev->sched_wq, 1, 64 * 1024, 1,
- msecs_to_jiffies(500),
- pvr_dev->sched_wq, NULL, "pvr-queue",
- pvr_dev->base.dev);
+ sched_params.ops = &pvr_queue_sched_ops;
+ sched_params.submit_wq = pvr_dev->sched_wq;
+ sched_params.num_rqs = 1;
+ sched_params.credit_limit = 64 * 1024;
+ sched_params.hang_limit = 1;
+ sched_params.timeout = msecs_to_jiffies(500);
+ sched_params.timeout_wq = pvr_dev->sched_wq;
+ sched_params.score = NULL;
+ sched_params.name = "pvr-queue";
+ sched_params.dev = pvr_dev->base.dev;
+
+ err = drm_sched_init(&queue->scheduler, &sched_params);
if (err)
goto err_release_ufo;
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index b40c90e97d7e..a64c50fb6d1e 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -513,20 +513,29 @@ static void lima_sched_recover_work(struct work_struct *work)
int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name)
{
+ struct drm_sched_init_params params;
unsigned int timeout = lima_sched_timeout_ms > 0 ?
lima_sched_timeout_ms : 10000;
+ memset(&params, 0, sizeof(struct drm_sched_init_params));
+
pipe->fence_context = dma_fence_context_alloc(1);
spin_lock_init(&pipe->fence_lock);
INIT_WORK(&pipe->recover_work, lima_sched_recover_work);
- return drm_sched_init(&pipe->base, &lima_sched_ops, NULL,
- DRM_SCHED_PRIORITY_COUNT,
- 1,
- lima_job_hang_limit,
- msecs_to_jiffies(timeout), NULL,
- NULL, name, pipe->ldev->dev);
+ params.ops = &lima_sched_ops;
+ params.submit_wq = NULL; /* NULL: an ordered wq is allocated. */
+ params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
+ params.credit_limit = 1;
+ params.hang_limit = lima_job_hang_limit;
+ params.timeout = msecs_to_jiffies(timeout);
+ params.timeout_wq = NULL; /* Use the system_wq. */
+ params.score = NULL;
+ params.name = name;
+ params.dev = pipe->ldev->dev;
+
+ return drm_sched_init(&pipe->base, &params);
}
void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
index c803556a8f64..49a2c7422dc6 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -59,11 +59,13 @@ static const struct drm_sched_backend_ops msm_sched_ops = {
struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
void *memptrs, uint64_t memptrs_iova)
{
+ struct drm_sched_init_params params;
struct msm_ringbuffer *ring;
- long sched_timeout;
char name[32];
int ret;
+ memset(&params, 0, sizeof(struct drm_sched_init_params));
+
/* We assume everywhere that MSM_GPU_RINGBUFFER_SZ is a power of 2 */
BUILD_BUG_ON(!is_power_of_2(MSM_GPU_RINGBUFFER_SZ));
@@ -95,13 +97,19 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
ring->memptrs = memptrs;
ring->memptrs_iova = memptrs_iova;
- /* currently managing hangcheck ourselves: */
- sched_timeout = MAX_SCHEDULE_TIMEOUT;
+ params.ops = &msm_sched_ops;
+ params.submit_wq = NULL; /* NULL: an ordered wq is allocated. */
+ params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
+ params.credit_limit = num_hw_submissions;
+ params.hang_limit = 0;
+ /* currently managing hangcheck ourselves: */
+ params.timeout = MAX_SCHEDULE_TIMEOUT;
+ params.timeout_wq = NULL; /* Use the system_wq. */
+ params.score = NULL;
+ params.name = to_msm_bo(ring->bo)->name;
+ params.dev = gpu->dev->dev;
- ret = drm_sched_init(&ring->sched, &msm_sched_ops, NULL,
- DRM_SCHED_PRIORITY_COUNT,
- num_hw_submissions, 0, sched_timeout,
- NULL, NULL, to_msm_bo(ring->bo)->name, gpu->dev->dev);
+ ret = drm_sched_init(&ring->sched, &params);
if (ret) {
goto fail;
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c b/drivers/gpu/drm/nouveau/nouveau_sched.c
index 4412f2711fb5..f20c2e612750 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sched.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
@@ -404,9 +404,11 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm,
{
struct drm_gpu_scheduler *drm_sched = &sched->base;
struct drm_sched_entity *entity = &sched->entity;
- const long timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
+ struct drm_sched_init_params params;
int ret;
+ memset(&params, 0, sizeof(struct drm_sched_init_params));
+
if (!wq) {
wq = alloc_workqueue("nouveau_sched_wq_%d", 0, WQ_MAX_ACTIVE,
current->pid);
@@ -416,10 +418,18 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm,
sched->wq = wq;
}
- ret = drm_sched_init(drm_sched, &nouveau_sched_ops, wq,
- NOUVEAU_SCHED_PRIORITY_COUNT,
- credit_limit, 0, timeout,
- NULL, NULL, "nouveau_sched", drm->dev->dev);
+ params.ops = &nouveau_sched_ops;
+ params.submit_wq = wq;
+ params.num_rqs = NOUVEAU_SCHED_PRIORITY_COUNT;
+ params.credit_limit = credit_limit;
+ params.hang_limit = 0;
+ params.timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
+ params.timeout_wq = NULL; /* Use the system_wq. */
+ params.score = NULL;
+ params.name = "nouveau_sched";
+ params.dev = drm->dev->dev;
+
+ ret = drm_sched_init(drm_sched, &params);
if (ret)
goto fail_wq;
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 9b8e82fb8bc4..6b509ff446b5 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -836,10 +836,13 @@ static irqreturn_t panfrost_job_irq_handler(int irq, void *data)
int panfrost_job_init(struct panfrost_device *pfdev)
{
+ struct drm_sched_init_params params;
struct panfrost_job_slot *js;
unsigned int nentries = 2;
int ret, j;
+ memset(&params, 0, sizeof(struct drm_sched_init_params));
+
/* All GPUs have two entries per queue, but without jobchain
* disambiguation stopping the right job in the close path is tricky,
* so let's just advertise one entry in that case.
@@ -872,16 +875,21 @@ int panfrost_job_init(struct panfrost_device *pfdev)
if (!pfdev->reset.wq)
return -ENOMEM;
+ params.ops = &panfrost_sched_ops;
+ params.submit_wq = NULL; /* NULL: an ordered wq is allocated. */
+ params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
+ params.credit_limit = nentries;
+ params.hang_limit = 0;
+ params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
+ params.timeout_wq = pfdev->reset.wq;
+ params.score = NULL;
+ params.name = "pan_js";
+ params.dev = pfdev->dev;
+
for (j = 0; j < NUM_JOB_SLOTS; j++) {
js->queue[j].fence_context = dma_fence_context_alloc(1);
- ret = drm_sched_init(&js->queue[j].sched,
- &panfrost_sched_ops, NULL,
- DRM_SCHED_PRIORITY_COUNT,
- nentries, 0,
- msecs_to_jiffies(JOB_TIMEOUT_MS),
- pfdev->reset.wq,
- NULL, "pan_js", pfdev->dev);
+ ret = drm_sched_init(&js->queue[j].sched, &params);
if (ret) {
dev_err(pfdev->dev, "Failed to create scheduler: %d.", ret);
goto err_sched;
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index a49132f3778b..4362442cbfd8 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -2268,6 +2268,7 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
u64 full_va_range = 1ull << va_bits;
struct drm_gem_object *dummy_gem;
struct drm_gpu_scheduler *sched;
+ struct drm_sched_init_params sched_params;
struct io_pgtable_cfg pgtbl_cfg;
u64 mair, min_va, va_range;
struct panthor_vm *vm;
@@ -2284,6 +2285,8 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
goto err_free_vm;
}
+ memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
+
mutex_init(&vm->heaps.lock);
vm->for_mcu = for_mcu;
vm->ptdev = ptdev;
@@ -2325,11 +2328,18 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
goto err_mm_takedown;
}
+ sched_params.ops = &panthor_vm_bind_ops;
+ sched_params.submit_wq = ptdev->mmu->vm.wq;
+ sched_params.num_rqs = 1;
+ sched_params.credit_limit = 1;
+ sched_params.hang_limit = 0;
/* Bind operations are synchronous for now, no timeout needed. */
- ret = drm_sched_init(&vm->sched, &panthor_vm_bind_ops, ptdev->mmu->vm.wq,
- 1, 1, 0,
- MAX_SCHEDULE_TIMEOUT, NULL, NULL,
- "panthor-vm-bind", ptdev->base.dev);
+ sched_params.timeout = MAX_SCHEDULE_TIMEOUT;
+ sched_params.timeout_wq = NULL; /* Use the system_wq. */
+ sched_params.score = NULL;
+ sched_params.name = "panthor-vm-bind";
+ sched_params.dev = ptdev->base.dev;
+ ret = drm_sched_init(&vm->sched, &sched_params);
if (ret)
goto err_free_io_pgtable;
diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
index ef4bec7ff9c7..a324346d302f 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.c
+++ b/drivers/gpu/drm/panthor/panthor_sched.c
@@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group *group,
const struct drm_panthor_queue_create *args)
{
struct drm_gpu_scheduler *drm_sched;
+ struct drm_sched_init_params sched_params;
struct panthor_queue *queue;
int ret;
@@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group *group,
if (!queue)
return ERR_PTR(-ENOMEM);
+ memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
+
queue->fence_ctx.id = dma_fence_context_alloc(1);
spin_lock_init(&queue->fence_ctx.lock);
INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
@@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group *group,
if (ret)
goto err_free_queue;
+ sched_params.ops = &panthor_queue_sched_ops;
+ sched_params.submit_wq = group->ptdev->scheduler->wq;
+ sched_params.num_rqs = 1;
/*
- * Credit limit argument tells us the total number of instructions
+ * The credit limit argument tells us the total number of instructions
* across all CS slots in the ringbuffer, with some jobs requiring
* twice as many as others, depending on their profiling status.
*/
- ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
- group->ptdev->scheduler->wq, 1,
- args->ringbuf_size / sizeof(u64),
- 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
- group->ptdev->reset.wq,
- NULL, "panthor-queue", group->ptdev->base.dev);
+ sched_params.credit_limit = args->ringbuf_size / sizeof(u64);
+ sched_params.hang_limit = 0;
+ sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
+ sched_params.timeout_wq = group->ptdev->reset.wq;
+ sched_params.score = NULL;
+ sched_params.name = "panthor-queue";
+ sched_params.dev = group->ptdev->base.dev;
+
+ ret = drm_sched_init(&queue->scheduler, &sched_params);
if (ret)
goto err_free_queue;
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 57da84908752..27db748a5269 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1240,40 +1240,25 @@ static void drm_sched_run_job_work(struct work_struct *w)
* drm_sched_init - Init a gpu scheduler instance
*
* @sched: scheduler instance
- * @ops: backend operations for this scheduler
- * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
- * allocated and used
- * @num_rqs: number of runqueues, one for each priority, up to DRM_SCHED_PRIORITY_COUNT
- * @credit_limit: the number of credits this scheduler can hold from all jobs
- * @hang_limit: number of times to allow a job to hang before dropping it
- * @timeout: timeout value in jiffies for the scheduler
- * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
- * used
- * @score: optional score atomic shared with other schedulers
- * @name: name used for debugging
- * @dev: target &struct device
+ * @params: scheduler initialization parameters
*
* Return 0 on success, otherwise error code.
*/
int drm_sched_init(struct drm_gpu_scheduler *sched,
- const struct drm_sched_backend_ops *ops,
- struct workqueue_struct *submit_wq,
- u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
- long timeout, struct workqueue_struct *timeout_wq,
- atomic_t *score, const char *name, struct device *dev)
+ const struct drm_sched_init_params *params)
{
int i;
- sched->ops = ops;
- sched->credit_limit = credit_limit;
- sched->name = name;
- sched->timeout = timeout;
- sched->timeout_wq = timeout_wq ? : system_wq;
- sched->hang_limit = hang_limit;
- sched->score = score ? score : &sched->_score;
- sched->dev = dev;
+ sched->ops = params->ops;
+ sched->credit_limit = params->credit_limit;
+ sched->name = params->name;
+ sched->timeout = params->timeout;
+ sched->timeout_wq = params->timeout_wq ? : system_wq;
+ sched->hang_limit = params->hang_limit;
+ sched->score = params->score ? params->score : &sched->_score;
+ sched->dev = params->dev;
- if (num_rqs > DRM_SCHED_PRIORITY_COUNT) {
+ if (params->num_rqs > DRM_SCHED_PRIORITY_COUNT) {
/* This is a gross violation--tell drivers what the problem is.
*/
drm_err(sched, "%s: num_rqs cannot be greater than DRM_SCHED_PRIORITY_COUNT\n",
@@ -1288,16 +1273,16 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
return 0;
}
- if (submit_wq) {
- sched->submit_wq = submit_wq;
+ if (params->submit_wq) {
+ sched->submit_wq = params->submit_wq;
sched->own_submit_wq = false;
} else {
#ifdef CONFIG_LOCKDEP
- sched->submit_wq = alloc_ordered_workqueue_lockdep_map(name,
- WQ_MEM_RECLAIM,
- &drm_sched_lockdep_map);
+ sched->submit_wq = alloc_ordered_workqueue_lockdep_map(
+ params->name, WQ_MEM_RECLAIM,
+ &drm_sched_lockdep_map);
#else
- sched->submit_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
+ sched->submit_wq = alloc_ordered_workqueue(params->name, WQ_MEM_RECLAIM);
#endif
if (!sched->submit_wq)
return -ENOMEM;
@@ -1305,11 +1290,11 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
sched->own_submit_wq = true;
}
- sched->sched_rq = kmalloc_array(num_rqs, sizeof(*sched->sched_rq),
+ sched->sched_rq = kmalloc_array(params->num_rqs, sizeof(*sched->sched_rq),
GFP_KERNEL | __GFP_ZERO);
if (!sched->sched_rq)
goto Out_check_own;
- sched->num_rqs = num_rqs;
+ sched->num_rqs = params->num_rqs;
for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) {
sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL);
if (!sched->sched_rq[i])
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index 99ac4995b5a1..716e6d074d87 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops v3d_cpu_sched_ops = {
.free_job = v3d_cpu_job_free
};
+/*
+ * v3d's scheduler instances are all identical, except for ops and name.
+ */
+static void
+v3d_common_sched_init(struct drm_sched_init_params *params, struct device *dev)
+{
+ memset(params, 0, sizeof(struct drm_sched_init_params));
+
+ params->submit_wq = NULL; /* NULL: an ordered wq is allocated. */
+ params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
+ params->credit_limit = 1;
+ params->hang_limit = 0;
+ params->timeout = msecs_to_jiffies(500);
+ params->timeout_wq = NULL; /* Use the system_wq. */
+ params->score = NULL;
+ params->dev = dev;
+}
+
+static int
+v3d_bin_sched_init(struct v3d_dev *v3d)
+{
+ struct drm_sched_init_params params;
+
+ v3d_common_sched_init(&params, v3d->drm.dev);
+ params.ops = &v3d_bin_sched_ops;
+ params.name = "v3d_bin";
+
+ return drm_sched_init(&v3d->queue[V3D_BIN].sched, &params);
+}
+
+static int
+v3d_render_sched_init(struct v3d_dev *v3d)
+{
+ struct drm_sched_init_params params;
+
+ v3d_common_sched_init(&params, v3d->drm.dev);
+ params.ops = &v3d_render_sched_ops;
+ params.name = "v3d_render";
+
+ return drm_sched_init(&v3d->queue[V3D_RENDER].sched, &params);
+}
+
+static int
+v3d_tfu_sched_init(struct v3d_dev *v3d)
+{
+ struct drm_sched_init_params params;
+
+ v3d_common_sched_init(&params, v3d->drm.dev);
+ params.ops = &v3d_tfu_sched_ops;
+ params.name = "v3d_tfu";
+
+ return drm_sched_init(&v3d->queue[V3D_TFU].sched, &params);
+}
+
+static int
+v3d_csd_sched_init(struct v3d_dev *v3d)
+{
+ struct drm_sched_init_params params;
+
+ v3d_common_sched_init(&params, v3d->drm.dev);
+ params.ops = &v3d_csd_sched_ops;
+ params.name = "v3d_csd";
+
+ return drm_sched_init(&v3d->queue[V3D_CSD].sched, &params);
+}
+
+static int
+v3d_cache_sched_init(struct v3d_dev *v3d)
+{
+ struct drm_sched_init_params params;
+
+ v3d_common_sched_init(&params, v3d->drm.dev);
+ params.ops = &v3d_cache_clean_sched_ops;
+ params.name = "v3d_cache_clean";
+
+ return drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched, &params);
+}
+
+static int
+v3d_cpu_sched_init(struct v3d_dev *v3d)
+{
+ struct drm_sched_init_params params;
+
+ v3d_common_sched_init(&params, v3d->drm.dev);
+ params.ops = &v3d_cpu_sched_ops;
+ params.name = "v3d_cpu";
+
+ return drm_sched_init(&v3d->queue[V3D_CPU].sched, &params);
+}
+
int
v3d_sched_init(struct v3d_dev *v3d)
{
- int hw_jobs_limit = 1;
- int job_hang_limit = 0;
- int hang_limit_ms = 500;
int ret;
- ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
- &v3d_bin_sched_ops, NULL,
- DRM_SCHED_PRIORITY_COUNT,
- hw_jobs_limit, job_hang_limit,
- msecs_to_jiffies(hang_limit_ms), NULL,
- NULL, "v3d_bin", v3d->drm.dev);
+ ret = v3d_bin_sched_init(v3d);
if (ret)
return ret;
- ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
- &v3d_render_sched_ops, NULL,
- DRM_SCHED_PRIORITY_COUNT,
- hw_jobs_limit, job_hang_limit,
- msecs_to_jiffies(hang_limit_ms), NULL,
- NULL, "v3d_render", v3d->drm.dev);
+ ret = v3d_render_sched_init(v3d);
if (ret)
goto fail;
- ret = drm_sched_init(&v3d->queue[V3D_TFU].sched,
- &v3d_tfu_sched_ops, NULL,
- DRM_SCHED_PRIORITY_COUNT,
- hw_jobs_limit, job_hang_limit,
- msecs_to_jiffies(hang_limit_ms), NULL,
- NULL, "v3d_tfu", v3d->drm.dev);
+ ret = v3d_tfu_sched_init(v3d);
if (ret)
goto fail;
if (v3d_has_csd(v3d)) {
- ret = drm_sched_init(&v3d->queue[V3D_CSD].sched,
- &v3d_csd_sched_ops, NULL,
- DRM_SCHED_PRIORITY_COUNT,
- hw_jobs_limit, job_hang_limit,
- msecs_to_jiffies(hang_limit_ms), NULL,
- NULL, "v3d_csd", v3d->drm.dev);
+ ret = v3d_csd_sched_init(v3d);
if (ret)
goto fail;
- ret = drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
- &v3d_cache_clean_sched_ops, NULL,
- DRM_SCHED_PRIORITY_COUNT,
- hw_jobs_limit, job_hang_limit,
- msecs_to_jiffies(hang_limit_ms), NULL,
- NULL, "v3d_cache_clean", v3d->drm.dev);
+ ret = v3d_cache_sched_init(v3d);
if (ret)
goto fail;
}
- ret = drm_sched_init(&v3d->queue[V3D_CPU].sched,
- &v3d_cpu_sched_ops, NULL,
- DRM_SCHED_PRIORITY_COUNT,
- 1, job_hang_limit,
- msecs_to_jiffies(hang_limit_ms), NULL,
- NULL, "v3d_cpu", v3d->drm.dev);
+ ret = v3d_cpu_sched_init(v3d);
if (ret)
goto fail;
diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
index a8c416a48812..7f29b7f04af4 100644
--- a/drivers/gpu/drm/xe/xe_execlist.c
+++ b/drivers/gpu/drm/xe/xe_execlist.c
@@ -332,10 +332,13 @@ static const struct drm_sched_backend_ops drm_sched_ops = {
static int execlist_exec_queue_init(struct xe_exec_queue *q)
{
struct drm_gpu_scheduler *sched;
+ struct drm_sched_init_params params;
struct xe_execlist_exec_queue *exl;
struct xe_device *xe = gt_to_xe(q->gt);
int err;
+ memset(&params, 0, sizeof(struct drm_sched_init_params));
+
xe_assert(xe, !xe_device_uc_enabled(xe));
drm_info(&xe->drm, "Enabling execlist submission (GuC submission disabled)\n");
@@ -346,11 +349,18 @@ static int execlist_exec_queue_init(struct xe_exec_queue *q)
exl->q = q;
- err = drm_sched_init(&exl->sched, &drm_sched_ops, NULL, 1,
- q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES,
- XE_SCHED_HANG_LIMIT, XE_SCHED_JOB_TIMEOUT,
- NULL, NULL, q->hwe->name,
- gt_to_xe(q->gt)->drm.dev);
+ params.ops = &drm_sched_ops;
+ params.submit_wq = NULL; /* NULL: an ordered wq is allocated. */
+ params.num_rqs = 1;
+ params.credit_limit = q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES;
+ params.hang_limit = XE_SCHED_HANG_LIMIT;
+ params.timeout = XE_SCHED_JOB_TIMEOUT;
+ params.timeout_wq = NULL; /* Use the system_wq. */
+ params.score = NULL;
+ params.name = q->hwe->name;
+ params.dev = gt_to_xe(q->gt)->drm.dev;
+
+ err = drm_sched_init(&exl->sched, &params);
if (err)
goto err_free;
diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
index 50361b4638f9..2129fee83f25 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
@@ -63,13 +63,26 @@ int xe_sched_init(struct xe_gpu_scheduler *sched,
atomic_t *score, const char *name,
struct device *dev)
{
+ struct drm_sched_init_params params;
+
sched->ops = xe_ops;
INIT_LIST_HEAD(&sched->msgs);
INIT_WORK(&sched->work_process_msg, xe_sched_process_msg_work);
- return drm_sched_init(&sched->base, ops, submit_wq, 1, hw_submission,
- hang_limit, timeout, timeout_wq, score, name,
- dev);
+ memset(&params, 0, sizeof(struct drm_sched_init_params));
+
+ params.ops = ops;
+ params.submit_wq = submit_wq;
+ params.num_rqs = 1;
+ params.credit_limit = hw_submission;
+ params.hang_limit = hang_limit;
+ params.timeout = timeout;
+ params.timeout_wq = timeout_wq;
+ params.score = score;
+ params.name = name;
+ params.dev = dev;
+
+ return drm_sched_init(&sched->base, &params);
}
void xe_sched_fini(struct xe_gpu_scheduler *sched)
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 95e17504e46a..1a834ef43862 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -553,12 +553,37 @@ struct drm_gpu_scheduler {
struct device *dev;
};
+/**
+ * struct drm_sched_init_params - parameters for initializing a DRM GPU scheduler
+ *
+ * @ops: backend operations provided by the driver
+ * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
+ * allocated and used
+ * @num_rqs: Number of run-queues. This is at most DRM_SCHED_PRIORITY_COUNT,
+ * as there's usually one run-queue per priority, but could be less.
+ * @credit_limit: the number of credits this scheduler can hold from all jobs
+ * @hang_limit: number of times to allow a job to hang before dropping it
+ * @timeout: timeout value in jiffies for the scheduler
+ * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
+ * used
+ * @score: optional score atomic shared with other schedulers
+ * @name: name used for debugging
+ * @dev: associated device. Used for debugging
+ */
+struct drm_sched_init_params {
+ const struct drm_sched_backend_ops *ops;
+ struct workqueue_struct *submit_wq;
+ struct workqueue_struct *timeout_wq;
+ u32 num_rqs, credit_limit;
+ unsigned int hang_limit;
+ long timeout;
+ atomic_t *score;
+ const char *name;
+ struct device *dev;
+};
+
int drm_sched_init(struct drm_gpu_scheduler *sched,
- const struct drm_sched_backend_ops *ops,
- struct workqueue_struct *submit_wq,
- u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
- long timeout, struct workqueue_struct *timeout_wq,
- atomic_t *score, const char *name, struct device *dev);
+ const struct drm_sched_init_params *params);
void drm_sched_fini(struct drm_gpu_scheduler *sched);
int drm_sched_job_init(struct drm_sched_job *job,
--
2.47.1
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
@ 2025-01-22 14:30 ` Danilo Krummrich
2025-01-22 14:34 ` Christian König
` (11 subsequent siblings)
12 siblings, 0 replies; 35+ messages in thread
From: Danilo Krummrich @ 2025-01-22 14:30 UTC (permalink / raw)
To: Philipp Stanner
Cc: Alex Deucher, Christian König, Xinhui Pan, David Airlie,
Simona Vetter, Lucas Stach, Russell King, Christian Gmeiner,
Frank Binns, Matt Coster, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, Karol Herbst,
Lyude Paul, Boris Brezillon, Rob Herring, Steven Price,
Liviu Dudau, Luben Tuikov, Matthew Brost, Philipp Stanner,
Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li, amd-gfx, dri-devel,
linux-kernel, etnaviv, lima, linux-arm-msm, freedreno, nouveau,
intel-xe
On Wed, Jan 22, 2025 at 03:08:20PM +0100, Philipp Stanner wrote:
> drm_sched_init() has a great many parameters and upcoming new
> functionality for the scheduler might add even more. Generally, the
> great number of parameters reduces readability and has already caused
> one misnaming in:
>
> commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in nouveau_sched_init()").
>
> Introduce a new struct for the scheduler init parameters and port all
> users.
>
> Signed-off-by: Philipp Stanner <phasta@kernel.org>
> ---
> Howdy,
>
> I have a patch-series in the pipe that will add a `flags` argument to
> drm_sched_init(). I thought it would be wise to first rework the API as
> detailed in this patch. It's really a lot of parameters by now, and I
> would expect that it might get more and more over the years for special
> use cases etc.
>
> Regards,
> P.
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +++-
> drivers/gpu/drm/etnaviv/etnaviv_sched.c | 20 ++-
> drivers/gpu/drm/imagination/pvr_queue.c | 21 +++-
> drivers/gpu/drm/lima/lima_sched.c | 21 +++-
> drivers/gpu/drm/msm/msm_ringbuffer.c | 22 ++--
> drivers/gpu/drm/nouveau/nouveau_sched.c | 20 ++-
> drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++--
> drivers/gpu/drm/panthor/panthor_mmu.c | 18 ++-
> drivers/gpu/drm/panthor/panthor_sched.c | 23 ++--
> drivers/gpu/drm/scheduler/sched_main.c | 53 +++-----
> drivers/gpu/drm/v3d/v3d_sched.c | 135 +++++++++++++++------
> drivers/gpu/drm/xe/xe_execlist.c | 20 ++-
> drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++-
> include/drm/gpu_scheduler.h | 35 +++++-
> 14 files changed, 311 insertions(+), 139 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index cd4fac120834..c1f03eb5f5ea 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -2821,6 +2821,9 @@ static int amdgpu_device_init_schedulers(struct amdgpu_device *adev)
> {
> long timeout;
> int r, i;
> + struct drm_sched_init_params params;
> +
> + memset(&params, 0, sizeof(struct drm_sched_init_params));
I think we should drop the memset() and just write it as:
struct drm_sched_init_params params = {};
<snip>
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 95e17504e46a..1a834ef43862 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -553,12 +553,37 @@ struct drm_gpu_scheduler {
> struct device *dev;
> };
>
> +/**
> + * struct drm_sched_init_params - parameters for initializing a DRM GPU scheduler
Since this is a separate structure now, I think we should point out which fields
are mandatory to set and which of those have a valid default to zero.
> + *
> + * @ops: backend operations provided by the driver
> + * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
> + * allocated and used
> + * @num_rqs: Number of run-queues. This is at most DRM_SCHED_PRIORITY_COUNT,
> + * as there's usually one run-queue per priority, but could be less.
> + * @credit_limit: the number of credits this scheduler can hold from all jobs
> + * @hang_limit: number of times to allow a job to hang before dropping it
> + * @timeout: timeout value in jiffies for the scheduler
> + * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
> + * used
> + * @score: optional score atomic shared with other schedulers
> + * @name: name used for debugging
> + * @dev: associated device. Used for debugging
> + */
> +struct drm_sched_init_params {
> + const struct drm_sched_backend_ops *ops;
> + struct workqueue_struct *submit_wq;
> + struct workqueue_struct *timeout_wq;
> + u32 num_rqs, credit_limit;
> + unsigned int hang_limit;
> + long timeout;
> + atomic_t *score;
> + const char *name;
> + struct device *dev;
> +};
> +
> int drm_sched_init(struct drm_gpu_scheduler *sched,
> - const struct drm_sched_backend_ops *ops,
> - struct workqueue_struct *submit_wq,
> - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> - long timeout, struct workqueue_struct *timeout_wq,
> - atomic_t *score, const char *name, struct device *dev);
> + const struct drm_sched_init_params *params);
>
> void drm_sched_fini(struct drm_gpu_scheduler *sched);
> int drm_sched_job_init(struct drm_sched_job *job,
> --
> 2.47.1
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
2025-01-22 14:30 ` Danilo Krummrich
@ 2025-01-22 14:34 ` Christian König
2025-01-22 14:48 ` Philipp Stanner
2025-01-22 15:51 ` Boris Brezillon
` (10 subsequent siblings)
12 siblings, 1 reply; 35+ messages in thread
From: Christian König @ 2025-01-22 14:34 UTC (permalink / raw)
To: Philipp Stanner, Alex Deucher, Xinhui Pan, David Airlie,
Simona Vetter, Lucas Stach, Russell King, Christian Gmeiner,
Frank Binns, Matt Coster, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, Karol Herbst,
Lyude Paul, Danilo Krummrich, Boris Brezillon, Rob Herring,
Steven Price, Liviu Dudau, Luben Tuikov, Matthew Brost,
Philipp Stanner, Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li
Cc: amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
Am 22.01.25 um 15:08 schrieb Philipp Stanner:
> drm_sched_init() has a great many parameters and upcoming new
> functionality for the scheduler might add even more. Generally, the
> great number of parameters reduces readability and has already caused
> one misnaming in:
>
> commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in nouveau_sched_init()").
>
> Introduce a new struct for the scheduler init parameters and port all
> users.
>
> Signed-off-by: Philipp Stanner <phasta@kernel.org>
> ---
> Howdy,
>
> I have a patch-series in the pipe that will add a `flags` argument to
> drm_sched_init(). I thought it would be wise to first rework the API as
> detailed in this patch. It's really a lot of parameters by now, and I
> would expect that it might get more and more over the years for special
> use cases etc.
>
> Regards,
> P.
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +++-
> drivers/gpu/drm/etnaviv/etnaviv_sched.c | 20 ++-
> drivers/gpu/drm/imagination/pvr_queue.c | 21 +++-
> drivers/gpu/drm/lima/lima_sched.c | 21 +++-
> drivers/gpu/drm/msm/msm_ringbuffer.c | 22 ++--
> drivers/gpu/drm/nouveau/nouveau_sched.c | 20 ++-
> drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++--
> drivers/gpu/drm/panthor/panthor_mmu.c | 18 ++-
> drivers/gpu/drm/panthor/panthor_sched.c | 23 ++--
> drivers/gpu/drm/scheduler/sched_main.c | 53 +++-----
> drivers/gpu/drm/v3d/v3d_sched.c | 135 +++++++++++++++------
> drivers/gpu/drm/xe/xe_execlist.c | 20 ++-
> drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++-
> include/drm/gpu_scheduler.h | 35 +++++-
> 14 files changed, 311 insertions(+), 139 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index cd4fac120834..c1f03eb5f5ea 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -2821,6 +2821,9 @@ static int amdgpu_device_init_schedulers(struct amdgpu_device *adev)
> {
> long timeout;
> int r, i;
> + struct drm_sched_init_params params;
Please keep the reverse xmas tree ordering for variable declarations,
i.e. long lines first and short variables like "i" and "r" last.
Apart from that looks like a good idea to me.
> +
> + memset(&params, 0, sizeof(struct drm_sched_init_params));
>
> for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
> struct amdgpu_ring *ring = adev->rings[i];
> @@ -2844,12 +2847,18 @@ static int amdgpu_device_init_schedulers(struct amdgpu_device *adev)
> break;
> }
>
> - r = drm_sched_init(&ring->sched, &amdgpu_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - ring->num_hw_submission, 0,
> - timeout, adev->reset_domain->wq,
> - ring->sched_score, ring->name,
> - adev->dev);
> + params.ops = &amdgpu_sched_ops;
> + params.submit_wq = NULL; /* An ordered wq gets allocated. */
> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> + params.credit_limit = ring->num_hw_submission;
> + params.hang_limit = 0;
Could we please remove the hang limit as a first step towards getting this
awful feature deprecated?
Thanks,
Christian.
> + params.timeout = timeout;
> + params.timeout_wq = adev->reset_domain->wq;
> + params.score = ring->sched_score;
> + params.name = ring->name;
> + params.dev = adev->dev;
> +
> + r = drm_sched_init(&ring->sched, &params);
> if (r) {
> DRM_ERROR("Failed to create scheduler on ring %s.\n",
> ring->name);
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> index 5b67eda122db..7d8517f1963e 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> @@ -145,12 +145,22 @@ int etnaviv_sched_push_job(struct etnaviv_gem_submit *submit)
> int etnaviv_sched_init(struct etnaviv_gpu *gpu)
> {
> int ret;
> + struct drm_sched_init_params params;
>
> - ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
> - msecs_to_jiffies(500), NULL, NULL,
> - dev_name(gpu->dev), gpu->dev);
> + memset(&params, 0, sizeof(struct drm_sched_init_params));
> +
> + params.ops = &etnaviv_sched_ops;
> + params.submit_wq = NULL; /* An ordered wq gets allocated. */
> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> + params.credit_limit = etnaviv_hw_jobs_limit;
> + params.hang_limit = etnaviv_job_hang_limit;
> + params.timeout = msecs_to_jiffies(500);
> + params.timeout_wq = NULL; /* Use the system_wq. */
> + params.score = NULL;
> + params.name = dev_name(gpu->dev);
> + params.dev = gpu->dev;
> +
> + ret = drm_sched_init(&gpu->sched, &params);
> if (ret)
> return ret;
>
> diff --git a/drivers/gpu/drm/imagination/pvr_queue.c b/drivers/gpu/drm/imagination/pvr_queue.c
> index c4f08432882b..03a2ce1a88e7 100644
> --- a/drivers/gpu/drm/imagination/pvr_queue.c
> +++ b/drivers/gpu/drm/imagination/pvr_queue.c
> @@ -1211,10 +1211,13 @@ struct pvr_queue *pvr_queue_create(struct pvr_context *ctx,
> };
> struct pvr_device *pvr_dev = ctx->pvr_dev;
> struct drm_gpu_scheduler *sched;
> + struct drm_sched_init_params sched_params;
> struct pvr_queue *queue;
> int ctx_state_size, err;
> void *cpu_map;
>
> + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> +
> if (WARN_ON(type >= sizeof(props)))
> return ERR_PTR(-EINVAL);
>
> @@ -1282,12 +1285,18 @@ struct pvr_queue *pvr_queue_create(struct pvr_context *ctx,
>
> queue->timeline_ufo.value = cpu_map;
>
> - err = drm_sched_init(&queue->scheduler,
> - &pvr_queue_sched_ops,
> - pvr_dev->sched_wq, 1, 64 * 1024, 1,
> - msecs_to_jiffies(500),
> - pvr_dev->sched_wq, NULL, "pvr-queue",
> - pvr_dev->base.dev);
> + sched_params.ops = &pvr_queue_sched_ops;
> + sched_params.submit_wq = pvr_dev->sched_wq;
> + sched_params.num_rqs = 1;
> + sched_params.credit_limit = 64 * 1024;
> + sched_params.hang_limit = 1;
> + sched_params.timeout = msecs_to_jiffies(500);
> + sched_params.timeout_wq = pvr_dev->sched_wq;
> + sched_params.score = NULL;
> + sched_params.name = "pvr-queue";
> + sched_params.dev = pvr_dev->base.dev;
> +
> + err = drm_sched_init(&queue->scheduler, &sched_params);
> if (err)
> goto err_release_ufo;
>
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index b40c90e97d7e..a64c50fb6d1e 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -513,20 +513,29 @@ static void lima_sched_recover_work(struct work_struct *work)
>
> int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name)
> {
> + struct drm_sched_init_params params;
> unsigned int timeout = lima_sched_timeout_ms > 0 ?
> lima_sched_timeout_ms : 10000;
>
> + memset(&params, 0, sizeof(struct drm_sched_init_params));
> +
> pipe->fence_context = dma_fence_context_alloc(1);
> spin_lock_init(&pipe->fence_lock);
>
> INIT_WORK(&pipe->recover_work, lima_sched_recover_work);
>
> - return drm_sched_init(&pipe->base, &lima_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - 1,
> - lima_job_hang_limit,
> - msecs_to_jiffies(timeout), NULL,
> - NULL, name, pipe->ldev->dev);
> + params.ops = &lima_sched_ops;
> + params.submit_wq = NULL; /* An ordered wq gets allocated. */
> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> + params.credit_limit = 1;
> + params.hang_limit = lima_job_hang_limit;
> + params.timeout = msecs_to_jiffies(timeout);
> + params.timeout_wq = NULL; /* Use the system_wq. */
> + params.score = NULL;
> + params.name = name;
> + params.dev = pipe->ldev->dev;
> +
> + return drm_sched_init(&pipe->base, &params);
> }
>
> void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
> diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
> index c803556a8f64..49a2c7422dc6 100644
> --- a/drivers/gpu/drm/msm/msm_ringbuffer.c
> +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
> @@ -59,11 +59,13 @@ static const struct drm_sched_backend_ops msm_sched_ops = {
> struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
> void *memptrs, uint64_t memptrs_iova)
> {
> + struct drm_sched_init_params params;
> struct msm_ringbuffer *ring;
> - long sched_timeout;
> char name[32];
> int ret;
>
> + memset(&params, 0, sizeof(struct drm_sched_init_params));
> +
> /* We assume everywhere that MSM_GPU_RINGBUFFER_SZ is a power of 2 */
> BUILD_BUG_ON(!is_power_of_2(MSM_GPU_RINGBUFFER_SZ));
>
> @@ -95,13 +97,19 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
> ring->memptrs = memptrs;
> ring->memptrs_iova = memptrs_iova;
>
> - /* currently managing hangcheck ourselves: */
> - sched_timeout = MAX_SCHEDULE_TIMEOUT;
> + params.ops = &msm_sched_ops;
> + params.submit_wq = NULL; /* An ordered wq gets allocated. */
> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> + params.credit_limit = num_hw_submissions;
> + params.hang_limit = 0;
> + /* currently managing hangcheck ourselves: */
> + params.timeout = MAX_SCHEDULE_TIMEOUT;
> + params.timeout_wq = NULL; /* Use the system_wq. */
> + params.score = NULL;
> + params.name = to_msm_bo(ring->bo)->name;
> + params.dev = gpu->dev->dev;
>
> - ret = drm_sched_init(&ring->sched, &msm_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - num_hw_submissions, 0, sched_timeout,
> - NULL, NULL, to_msm_bo(ring->bo)->name, gpu->dev->dev);
> + ret = drm_sched_init(&ring->sched, &params);
> if (ret) {
> goto fail;
> }
> diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c b/drivers/gpu/drm/nouveau/nouveau_sched.c
> index 4412f2711fb5..f20c2e612750 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_sched.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
> @@ -404,9 +404,11 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm,
> {
> struct drm_gpu_scheduler *drm_sched = &sched->base;
> struct drm_sched_entity *entity = &sched->entity;
> - const long timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
> + struct drm_sched_init_params params;
> int ret;
>
> + memset(&params, 0, sizeof(struct drm_sched_init_params));
> +
> if (!wq) {
> wq = alloc_workqueue("nouveau_sched_wq_%d", 0, WQ_MAX_ACTIVE,
> current->pid);
> @@ -416,10 +418,18 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm,
> sched->wq = wq;
> }
>
> - ret = drm_sched_init(drm_sched, &nouveau_sched_ops, wq,
> - NOUVEAU_SCHED_PRIORITY_COUNT,
> - credit_limit, 0, timeout,
> - NULL, NULL, "nouveau_sched", drm->dev->dev);
> + params.ops = &nouveau_sched_ops;
> + params.submit_wq = wq;
> + params.num_rqs = NOUVEAU_SCHED_PRIORITY_COUNT;
> + params.credit_limit = credit_limit;
> + params.hang_limit = 0;
> + params.timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
> + params.timeout_wq = NULL; /* Use the system_wq. */
> + params.score = NULL;
> + params.name = "nouveau_sched";
> + params.dev = drm->dev->dev;
> +
> + ret = drm_sched_init(drm_sched, &params);
> if (ret)
> goto fail_wq;
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 9b8e82fb8bc4..6b509ff446b5 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -836,10 +836,13 @@ static irqreturn_t panfrost_job_irq_handler(int irq, void *data)
>
> int panfrost_job_init(struct panfrost_device *pfdev)
> {
> + struct drm_sched_init_params params;
> struct panfrost_job_slot *js;
> unsigned int nentries = 2;
> int ret, j;
>
> + memset(&params, 0, sizeof(struct drm_sched_init_params));
> +
> /* All GPUs have two entries per queue, but without jobchain
> * disambiguation stopping the right job in the close path is tricky,
> * so let's just advertise one entry in that case.
> @@ -872,16 +875,21 @@ int panfrost_job_init(struct panfrost_device *pfdev)
> if (!pfdev->reset.wq)
> return -ENOMEM;
>
> + params.ops = &panfrost_sched_ops;
> + params.submit_wq = NULL; /* An ordered wq gets allocated. */
> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> + params.credit_limit = nentries;
> + params.hang_limit = 0;
> + params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> + params.timeout_wq = pfdev->reset.wq;
> + params.score = NULL;
> + params.name = "pan_js";
> + params.dev = pfdev->dev;
> +
> for (j = 0; j < NUM_JOB_SLOTS; j++) {
> js->queue[j].fence_context = dma_fence_context_alloc(1);
>
> - ret = drm_sched_init(&js->queue[j].sched,
> - &panfrost_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - nentries, 0,
> - msecs_to_jiffies(JOB_TIMEOUT_MS),
> - pfdev->reset.wq,
> - NULL, "pan_js", pfdev->dev);
> + ret = drm_sched_init(&js->queue[j].sched, &params);
> if (ret) {
> dev_err(pfdev->dev, "Failed to create scheduler: %d.", ret);
> goto err_sched;
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index a49132f3778b..4362442cbfd8 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -2268,6 +2268,7 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
> u64 full_va_range = 1ull << va_bits;
> struct drm_gem_object *dummy_gem;
> struct drm_gpu_scheduler *sched;
> + struct drm_sched_init_params sched_params;
> struct io_pgtable_cfg pgtbl_cfg;
> u64 mair, min_va, va_range;
> struct panthor_vm *vm;
> @@ -2284,6 +2285,8 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
> goto err_free_vm;
> }
>
> + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> +
> mutex_init(&vm->heaps.lock);
> vm->for_mcu = for_mcu;
> vm->ptdev = ptdev;
> @@ -2325,11 +2328,18 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
> goto err_mm_takedown;
> }
>
> + sched_params.ops = &panthor_vm_bind_ops;
> + sched_params.submit_wq = ptdev->mmu->vm.wq;
> + sched_params.num_rqs = 1;
> + sched_params.credit_limit = 1;
> + sched_params.hang_limit = 0;
> /* Bind operations are synchronous for now, no timeout needed. */
> - ret = drm_sched_init(&vm->sched, &panthor_vm_bind_ops, ptdev->mmu->vm.wq,
> - 1, 1, 0,
> - MAX_SCHEDULE_TIMEOUT, NULL, NULL,
> - "panthor-vm-bind", ptdev->base.dev);
> + sched_params.timeout = MAX_SCHEDULE_TIMEOUT;
> + sched_params.timeout_wq = NULL; /* Use the system_wq. */
> + sched_params.score = NULL;
> + sched_params.name = "panthor-vm-bind";
> + sched_params.dev = ptdev->base.dev;
> + ret = drm_sched_init(&vm->sched, &sched_params);
> if (ret)
> goto err_free_io_pgtable;
>
> diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
> index ef4bec7ff9c7..a324346d302f 100644
> --- a/drivers/gpu/drm/panthor/panthor_sched.c
> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group *group,
> const struct drm_panthor_queue_create *args)
> {
> struct drm_gpu_scheduler *drm_sched;
> + struct drm_sched_init_params sched_params;
> struct panthor_queue *queue;
> int ret;
>
> @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group *group,
> if (!queue)
> return ERR_PTR(-ENOMEM);
>
> + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> +
> queue->fence_ctx.id = dma_fence_context_alloc(1);
> spin_lock_init(&queue->fence_ctx.lock);
> INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
> @@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group *group,
> if (ret)
> goto err_free_queue;
>
> + sched_params.ops = &panthor_queue_sched_ops;
> + sched_params.submit_wq = group->ptdev->scheduler->wq;
> + sched_params.num_rqs = 1;
> /*
> - * Credit limit argument tells us the total number of instructions
> + * The credit limit argument tells us the total number of instructions
> * across all CS slots in the ringbuffer, with some jobs requiring
> * twice as many as others, depending on their profiling status.
> */
> - ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
> - group->ptdev->scheduler->wq, 1,
> - args->ringbuf_size / sizeof(u64),
> - 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
> - group->ptdev->reset.wq,
> - NULL, "panthor-queue", group->ptdev->base.dev);
> + sched_params.credit_limit = args->ringbuf_size / sizeof(u64);
> + sched_params.hang_limit = 0;
> + sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> + sched_params.timeout_wq = group->ptdev->reset.wq;
> + sched_params.score = NULL;
> + sched_params.name = "panthor-queue";
> + sched_params.dev = group->ptdev->base.dev;
> +
> + ret = drm_sched_init(&queue->scheduler, &sched_params);
> if (ret)
> goto err_free_queue;
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 57da84908752..27db748a5269 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -1240,40 +1240,25 @@ static void drm_sched_run_job_work(struct work_struct *w)
> * drm_sched_init - Init a gpu scheduler instance
> *
> * @sched: scheduler instance
> - * @ops: backend operations for this scheduler
> - * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
> - * allocated and used
> - * @num_rqs: number of runqueues, one for each priority, up to DRM_SCHED_PRIORITY_COUNT
> - * @credit_limit: the number of credits this scheduler can hold from all jobs
> - * @hang_limit: number of times to allow a job to hang before dropping it
> - * @timeout: timeout value in jiffies for the scheduler
> - * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
> - * used
> - * @score: optional score atomic shared with other schedulers
> - * @name: name used for debugging
> - * @dev: target &struct device
> + * @params: scheduler initialization parameters
> *
> * Return 0 on success, otherwise error code.
> */
> int drm_sched_init(struct drm_gpu_scheduler *sched,
> - const struct drm_sched_backend_ops *ops,
> - struct workqueue_struct *submit_wq,
> - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> - long timeout, struct workqueue_struct *timeout_wq,
> - atomic_t *score, const char *name, struct device *dev)
> + const struct drm_sched_init_params *params)
> {
> int i;
>
> - sched->ops = ops;
> - sched->credit_limit = credit_limit;
> - sched->name = name;
> - sched->timeout = timeout;
> - sched->timeout_wq = timeout_wq ? : system_wq;
> - sched->hang_limit = hang_limit;
> - sched->score = score ? score : &sched->_score;
> - sched->dev = dev;
> + sched->ops = params->ops;
> + sched->credit_limit = params->credit_limit;
> + sched->name = params->name;
> + sched->timeout = params->timeout;
> + sched->timeout_wq = params->timeout_wq ? : system_wq;
> + sched->hang_limit = params->hang_limit;
> + sched->score = params->score ? params->score : &sched->_score;
> + sched->dev = params->dev;
>
> - if (num_rqs > DRM_SCHED_PRIORITY_COUNT) {
> + if (params->num_rqs > DRM_SCHED_PRIORITY_COUNT) {
> /* This is a gross violation--tell drivers what the problem is.
> */
> drm_err(sched, "%s: num_rqs cannot be greater than DRM_SCHED_PRIORITY_COUNT\n",
> @@ -1288,16 +1273,16 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
> return 0;
> }
>
> - if (submit_wq) {
> - sched->submit_wq = submit_wq;
> + if (params->submit_wq) {
> + sched->submit_wq = params->submit_wq;
> sched->own_submit_wq = false;
> } else {
> #ifdef CONFIG_LOCKDEP
> - sched->submit_wq = alloc_ordered_workqueue_lockdep_map(name,
> - WQ_MEM_RECLAIM,
> - &drm_sched_lockdep_map);
> + sched->submit_wq = alloc_ordered_workqueue_lockdep_map(
> + params->name, WQ_MEM_RECLAIM,
> + &drm_sched_lockdep_map);
> #else
> - sched->submit_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
> + sched->submit_wq = alloc_ordered_workqueue(params->name, WQ_MEM_RECLAIM);
> #endif
> if (!sched->submit_wq)
> return -ENOMEM;
> @@ -1305,11 +1290,11 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
> sched->own_submit_wq = true;
> }
>
> - sched->sched_rq = kmalloc_array(num_rqs, sizeof(*sched->sched_rq),
> + sched->sched_rq = kmalloc_array(params->num_rqs, sizeof(*sched->sched_rq),
> GFP_KERNEL | __GFP_ZERO);
> if (!sched->sched_rq)
> goto Out_check_own;
> - sched->num_rqs = num_rqs;
> + sched->num_rqs = params->num_rqs;
> for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) {
> sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL);
> if (!sched->sched_rq[i])
> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
> index 99ac4995b5a1..716e6d074d87 100644
> --- a/drivers/gpu/drm/v3d/v3d_sched.c
> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> @@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops v3d_cpu_sched_ops = {
> .free_job = v3d_cpu_job_free
> };
>
> +/*
> + * v3d's scheduler instances are all identical, except for ops and name.
> + */
> +static void
> +v3d_common_sched_init(struct drm_sched_init_params *params, struct device *dev)
> +{
> + memset(params, 0, sizeof(struct drm_sched_init_params));
> +
> + params->submit_wq = NULL; /* An ordered wq gets allocated. */
> + params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
> + params->credit_limit = 1;
> + params->hang_limit = 0;
> + params->timeout = msecs_to_jiffies(500);
> + params->timeout_wq = NULL; /* Use the system_wq. */
> + params->score = NULL;
> + params->dev = dev;
> +}
> +
> +static int
> +v3d_bin_sched_init(struct v3d_dev *v3d)
> +{
> + struct drm_sched_init_params params;
> +
> + v3d_common_sched_init(&params, v3d->drm.dev);
> + params.ops = &v3d_bin_sched_ops;
> + params.name = "v3d_bin";
> +
> + return drm_sched_init(&v3d->queue[V3D_BIN].sched, &params);
> +}
> +
> +static int
> +v3d_render_sched_init(struct v3d_dev *v3d)
> +{
> + struct drm_sched_init_params params;
> +
> + v3d_common_sched_init(&params, v3d->drm.dev);
> + params.ops = &v3d_render_sched_ops;
> + params.name = "v3d_render";
> +
> + return drm_sched_init(&v3d->queue[V3D_RENDER].sched, &params);
> +}
> +
> +static int
> +v3d_tfu_sched_init(struct v3d_dev *v3d)
> +{
> + struct drm_sched_init_params params;
> +
> + v3d_common_sched_init(&params, v3d->drm.dev);
> + params.ops = &v3d_tfu_sched_ops;
> + params.name = "v3d_tfu";
> +
> + return drm_sched_init(&v3d->queue[V3D_TFU].sched, &params);
> +}
> +
> +static int
> +v3d_csd_sched_init(struct v3d_dev *v3d)
> +{
> + struct drm_sched_init_params params;
> +
> + v3d_common_sched_init(&params, v3d->drm.dev);
> + params.ops = &v3d_csd_sched_ops;
> + params.name = "v3d_csd";
> +
> + return drm_sched_init(&v3d->queue[V3D_CSD].sched, &params);
> +}
> +
> +static int
> +v3d_cache_sched_init(struct v3d_dev *v3d)
> +{
> + struct drm_sched_init_params params;
> +
> + v3d_common_sched_init(&params, v3d->drm.dev);
> + params.ops = &v3d_cache_clean_sched_ops;
> + params.name = "v3d_cache_clean";
> +
> + return drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched, &params);
> +}
> +
> +static int
> +v3d_cpu_sched_init(struct v3d_dev *v3d)
> +{
> + struct drm_sched_init_params params;
> +
> + v3d_common_sched_init(&params, v3d->drm.dev);
> + params.ops = &v3d_cpu_sched_ops;
> + params.name = "v3d_cpu";
> +
> + return drm_sched_init(&v3d->queue[V3D_CPU].sched, &params);
> +}
> +
> int
> v3d_sched_init(struct v3d_dev *v3d)
> {
> - int hw_jobs_limit = 1;
> - int job_hang_limit = 0;
> - int hang_limit_ms = 500;
> int ret;
>
> - ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
> - &v3d_bin_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - hw_jobs_limit, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_bin", v3d->drm.dev);
> + ret = v3d_bin_sched_init(v3d);
> if (ret)
> return ret;
>
> - ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
> - &v3d_render_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - hw_jobs_limit, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_render", v3d->drm.dev);
> + ret = v3d_render_sched_init(v3d);
> if (ret)
> goto fail;
>
> - ret = drm_sched_init(&v3d->queue[V3D_TFU].sched,
> - &v3d_tfu_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - hw_jobs_limit, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_tfu", v3d->drm.dev);
> + ret = v3d_tfu_sched_init(v3d);
> if (ret)
> goto fail;
>
> if (v3d_has_csd(v3d)) {
> - ret = drm_sched_init(&v3d->queue[V3D_CSD].sched,
> - &v3d_csd_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - hw_jobs_limit, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_csd", v3d->drm.dev);
> + ret = v3d_csd_sched_init(v3d);
> if (ret)
> goto fail;
>
> - ret = drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
> - &v3d_cache_clean_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - hw_jobs_limit, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_cache_clean", v3d->drm.dev);
> + ret = v3d_cache_sched_init(v3d);
> if (ret)
> goto fail;
> }
>
> - ret = drm_sched_init(&v3d->queue[V3D_CPU].sched,
> - &v3d_cpu_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - 1, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_cpu", v3d->drm.dev);
> + ret = v3d_cpu_sched_init(v3d);
> if (ret)
> goto fail;
>
> diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
> index a8c416a48812..7f29b7f04af4 100644
> --- a/drivers/gpu/drm/xe/xe_execlist.c
> +++ b/drivers/gpu/drm/xe/xe_execlist.c
> @@ -332,10 +332,13 @@ static const struct drm_sched_backend_ops drm_sched_ops = {
> static int execlist_exec_queue_init(struct xe_exec_queue *q)
> {
> struct drm_gpu_scheduler *sched;
> + struct drm_sched_init_params params;
> struct xe_execlist_exec_queue *exl;
> struct xe_device *xe = gt_to_xe(q->gt);
> int err;
>
> + memset(&params, 0, sizeof(struct drm_sched_init_params));
> +
> xe_assert(xe, !xe_device_uc_enabled(xe));
>
> drm_info(&xe->drm, "Enabling execlist submission (GuC submission disabled)\n");
> @@ -346,11 +349,18 @@ static int execlist_exec_queue_init(struct xe_exec_queue *q)
>
> exl->q = q;
>
> - err = drm_sched_init(&exl->sched, &drm_sched_ops, NULL, 1,
> - q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES,
> - XE_SCHED_HANG_LIMIT, XE_SCHED_JOB_TIMEOUT,
> - NULL, NULL, q->hwe->name,
> - gt_to_xe(q->gt)->drm.dev);
> + params.ops = &drm_sched_ops;
> + params.submit_wq = NULL; /* An ordered wq gets allocated. */
> + params.num_rqs = 1;
> + params.credit_limit = q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES;
> + params.hang_limit = XE_SCHED_HANG_LIMIT;
> + params.timeout = XE_SCHED_JOB_TIMEOUT;
> + params.timeout_wq = NULL; /* Use the system_wq. */
> + params.score = NULL;
> + params.name = q->hwe->name;
> + params.dev = gt_to_xe(q->gt)->drm.dev;
> +
> + err = drm_sched_init(&exl->sched, &params);
> if (err)
> goto err_free;
>
> diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> index 50361b4638f9..2129fee83f25 100644
> --- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> +++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> @@ -63,13 +63,26 @@ int xe_sched_init(struct xe_gpu_scheduler *sched,
> atomic_t *score, const char *name,
> struct device *dev)
> {
> + struct drm_sched_init_params params;
> +
> sched->ops = xe_ops;
> INIT_LIST_HEAD(&sched->msgs);
> INIT_WORK(&sched->work_process_msg, xe_sched_process_msg_work);
>
> - return drm_sched_init(&sched->base, ops, submit_wq, 1, hw_submission,
> - hang_limit, timeout, timeout_wq, score, name,
> - dev);
> + memset(&params, 0, sizeof(struct drm_sched_init_params));
> +
> + params.ops = ops;
> + params.submit_wq = submit_wq;
> + params.num_rqs = 1;
> + params.credit_limit = hw_submission;
> + params.hang_limit = hang_limit;
> + params.timeout = timeout;
> + params.timeout_wq = timeout_wq;
> + params.score = score;
> + params.name = name;
> + params.dev = dev;
> +
> + return drm_sched_init(&sched->base, &params);
> }
>
> void xe_sched_fini(struct xe_gpu_scheduler *sched)
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 95e17504e46a..1a834ef43862 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -553,12 +553,37 @@ struct drm_gpu_scheduler {
> struct device *dev;
> };
>
> +/**
> + * struct drm_sched_init_params - parameters for initializing a DRM GPU scheduler
> + *
> + * @ops: backend operations provided by the driver
> + * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
> + * allocated and used
> + * @num_rqs: Number of run-queues. This is at most DRM_SCHED_PRIORITY_COUNT,
> + * as there's usually one run-queue per priority, but could be less.
> + * @credit_limit: the number of credits this scheduler can hold from all jobs
> + * @hang_limit: number of times to allow a job to hang before dropping it
> + * @timeout: timeout value in jiffies for the scheduler
> + * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
> + * used
> + * @score: optional score atomic shared with other schedulers
> + * @name: name used for debugging
> + * @dev: associated device. Used for debugging
> + */
> +struct drm_sched_init_params {
> + const struct drm_sched_backend_ops *ops;
> + struct workqueue_struct *submit_wq;
> + struct workqueue_struct *timeout_wq;
> + u32 num_rqs, credit_limit;
> + unsigned int hang_limit;
> + long timeout;
> + atomic_t *score;
> + const char *name;
> + struct device *dev;
> +};
> +
> int drm_sched_init(struct drm_gpu_scheduler *sched,
> - const struct drm_sched_backend_ops *ops,
> - struct workqueue_struct *submit_wq,
> - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> - long timeout, struct workqueue_struct *timeout_wq,
> - atomic_t *score, const char *name, struct device *dev);
> + const struct drm_sched_init_params *params);
>
> void drm_sched_fini(struct drm_gpu_scheduler *sched);
> int drm_sched_job_init(struct drm_sched_job *job,
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:34 ` Christian König
@ 2025-01-22 14:48 ` Philipp Stanner
2025-01-22 15:02 ` Matthew Brost
2025-01-22 15:06 ` Christian König
0 siblings, 2 replies; 35+ messages in thread
From: Philipp Stanner @ 2025-01-22 14:48 UTC (permalink / raw)
To: Christian König, Philipp Stanner, Alex Deucher, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Danilo Krummrich, Boris Brezillon,
Rob Herring, Steven Price, Liviu Dudau, Luben Tuikov,
Matthew Brost, Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li
Cc: amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
On Wed, 2025-01-22 at 15:34 +0100, Christian König wrote:
> Am 22.01.25 um 15:08 schrieb Philipp Stanner:
> > drm_sched_init() has a great many parameters and upcoming new
> > functionality for the scheduler might add even more. Generally, the
> > great number of parameters reduces readability and has already
> > caused
> > one misnaming in:
> >
> > commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in
> > nouveau_sched_init()").
> >
> > Introduce a new struct for the scheduler init parameters and port
> > all
> > users.
> >
> > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > ---
> > Howdy,
> >
> > I have a patch-series in the pipe that will add a `flags` argument
> > to
> > drm_sched_init(). I thought it would be wise to first rework the
> > API as
> > detailed in this patch. It's really a lot of parameters by now, and
> > I
> > would expect that it might get more and more over the years for
> > special
> > use cases etc.
> >
> > Regards,
> > P.
> > ---
> > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +++-
> > drivers/gpu/drm/etnaviv/etnaviv_sched.c | 20 ++-
> > drivers/gpu/drm/imagination/pvr_queue.c | 21 +++-
> > drivers/gpu/drm/lima/lima_sched.c | 21 +++-
> > drivers/gpu/drm/msm/msm_ringbuffer.c | 22 ++--
> > drivers/gpu/drm/nouveau/nouveau_sched.c | 20 ++-
> > drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++--
> > drivers/gpu/drm/panthor/panthor_mmu.c | 18 ++-
> > drivers/gpu/drm/panthor/panthor_sched.c | 23 ++--
> > drivers/gpu/drm/scheduler/sched_main.c | 53 +++-----
> > drivers/gpu/drm/v3d/v3d_sched.c | 135 +++++++++++++++-
> > -----
> > drivers/gpu/drm/xe/xe_execlist.c | 20 ++-
> > drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++-
> > include/drm/gpu_scheduler.h | 35 +++++-
> > 14 files changed, 311 insertions(+), 139 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > index cd4fac120834..c1f03eb5f5ea 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > @@ -2821,6 +2821,9 @@ static int
> > amdgpu_device_init_schedulers(struct amdgpu_device *adev)
> > {
> > long timeout;
> > int r, i;
> > + struct drm_sched_init_params params;
>
> Please keep the reverse xmas tree ordering for variable declaration.
> E.g. long lines first and variables like "i" and "r" last.
Okay dokay
>
> Apart from that looks like a good idea to me.
>
>
> > +
> > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> >
> > for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
> > struct amdgpu_ring *ring = adev->rings[i];
> > @@ -2844,12 +2847,18 @@ static int
> > amdgpu_device_init_schedulers(struct amdgpu_device *adev)
> > break;
> > }
> >
> > -		r = drm_sched_init(&ring->sched, &amdgpu_sched_ops, NULL,
> > -				   DRM_SCHED_PRIORITY_COUNT,
> > -				   ring->num_hw_submission, 0,
> > -				   timeout, adev->reset_domain->wq,
> > -				   ring->sched_score, ring->name,
> > -				   adev->dev);
> > + params.ops = &amdgpu_sched_ops;
> > + params.submit_wq = NULL; /* An ordered wq gets allocated. */
> > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > + params.credit_limit = ring->num_hw_submission;
> > + params.hang_limit = 0;
>
> Could we please remove the hang limit as first step to get this awful
> feature deprecated?
Remove it? From the struct you mean?
We can mark it as deprecated in the docstring of the new struct. That's
what you mean, don't you?
P.
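[Editor's aside: one way such a deprecation note could be expressed in the new struct's kernel-doc. This is a sketch only; the wording and the claim that new drivers should pass 0 are assumptions, not what was merged:]

```c
/**
 * ...
 * @hang_limit: number of times to allow a job to hang before dropping it.
 *              DEPRECATED: new drivers should leave this at 0; the field is
 *              kept only for existing users and is intended to be removed.
 * ...
 */
```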
>
> Thanks,
> Christian.
>
> > + params.timeout = timeout;
> > + params.timeout_wq = adev->reset_domain->wq;
> > + params.score = ring->sched_score;
> > + params.name = ring->name;
> > + params.dev = adev->dev;
> > +
> > + r = drm_sched_init(&ring->sched, &params);
> > if (r) {
> > 			DRM_ERROR("Failed to create scheduler on ring %s.\n",
> > 				  ring->name);
> > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > index 5b67eda122db..7d8517f1963e 100644
> > --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > @@ -145,12 +145,22 @@ int etnaviv_sched_push_job(struct
> > etnaviv_gem_submit *submit)
> > int etnaviv_sched_init(struct etnaviv_gpu *gpu)
> > {
> > int ret;
> > + struct drm_sched_init_params params;
> >
> > -	ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops, NULL,
> > -			     DRM_SCHED_PRIORITY_COUNT,
> > -			     etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
> > - msecs_to_jiffies(500), NULL, NULL,
> > - dev_name(gpu->dev), gpu->dev);
> > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > +
> > + params.ops = &etnaviv_sched_ops;
> > + params.submit_wq = NULL; /* An ordered wq gets allocated. */
> > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > + params.credit_limit = etnaviv_hw_jobs_limit;
> > + params.hang_limit = etnaviv_job_hang_limit;
> > + params.timeout = msecs_to_jiffies(500);
> > + params.timeout_wq = NULL; /* Use the system_wq. */
> > + params.score = NULL;
> > + params.name = dev_name(gpu->dev);
> > + params.dev = gpu->dev;
> > +
> > + ret = drm_sched_init(&gpu->sched, &params);
> > if (ret)
> > return ret;
> >
> > diff --git a/drivers/gpu/drm/imagination/pvr_queue.c
> > b/drivers/gpu/drm/imagination/pvr_queue.c
> > index c4f08432882b..03a2ce1a88e7 100644
> > --- a/drivers/gpu/drm/imagination/pvr_queue.c
> > +++ b/drivers/gpu/drm/imagination/pvr_queue.c
> > @@ -1211,10 +1211,13 @@ struct pvr_queue *pvr_queue_create(struct
> > pvr_context *ctx,
> > };
> > struct pvr_device *pvr_dev = ctx->pvr_dev;
> > struct drm_gpu_scheduler *sched;
> > + struct drm_sched_init_params sched_params;
> > struct pvr_queue *queue;
> > int ctx_state_size, err;
> > void *cpu_map;
> >
> > + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> > +
> > if (WARN_ON(type >= sizeof(props)))
> > return ERR_PTR(-EINVAL);
> >
> > @@ -1282,12 +1285,18 @@ struct pvr_queue *pvr_queue_create(struct
> > pvr_context *ctx,
> >
> > queue->timeline_ufo.value = cpu_map;
> >
> > - err = drm_sched_init(&queue->scheduler,
> > - &pvr_queue_sched_ops,
> > - pvr_dev->sched_wq, 1, 64 * 1024, 1,
> > - msecs_to_jiffies(500),
> > - pvr_dev->sched_wq, NULL, "pvr-queue",
> > - pvr_dev->base.dev);
> > + sched_params.ops = &pvr_queue_sched_ops;
> > + sched_params.submit_wq = pvr_dev->sched_wq;
> > + sched_params.num_rqs = 1;
> > + sched_params.credit_limit = 64 * 1024;
> > + sched_params.hang_limit = 1;
> > + sched_params.timeout = msecs_to_jiffies(500);
> > + sched_params.timeout_wq = pvr_dev->sched_wq;
> > + sched_params.score = NULL;
> > + sched_params.name = "pvr-queue";
> > + sched_params.dev = pvr_dev->base.dev;
> > +
> > + err = drm_sched_init(&queue->scheduler, &sched_params);
> > if (err)
> > goto err_release_ufo;
> >
> > diff --git a/drivers/gpu/drm/lima/lima_sched.c
> > b/drivers/gpu/drm/lima/lima_sched.c
> > index b40c90e97d7e..a64c50fb6d1e 100644
> > --- a/drivers/gpu/drm/lima/lima_sched.c
> > +++ b/drivers/gpu/drm/lima/lima_sched.c
> > @@ -513,20 +513,29 @@ static void lima_sched_recover_work(struct
> > work_struct *work)
> >
> > int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char
> > *name)
> > {
> > + struct drm_sched_init_params params;
> > unsigned int timeout = lima_sched_timeout_ms > 0 ?
> > lima_sched_timeout_ms : 10000;
> >
> > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > +
> > pipe->fence_context = dma_fence_context_alloc(1);
> > spin_lock_init(&pipe->fence_lock);
> >
> > INIT_WORK(&pipe->recover_work, lima_sched_recover_work);
> >
> > - return drm_sched_init(&pipe->base, &lima_sched_ops, NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > - 1,
> > - lima_job_hang_limit,
> > - msecs_to_jiffies(timeout), NULL,
> > - NULL, name, pipe->ldev->dev);
> > + params.ops = &lima_sched_ops;
> > + params.submit_wq = NULL; /* An ordered wq gets allocated. */
> > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > + params.credit_limit = 1;
> > + params.hang_limit = lima_job_hang_limit;
> > + params.timeout = msecs_to_jiffies(timeout);
> > + params.timeout_wq = NULL; /* Use the system_wq. */
> > + params.score = NULL;
> > + params.name = name;
> > + params.dev = pipe->ldev->dev;
> > +
> > + return drm_sched_init(&pipe->base, &params);
> > }
> >
> > void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
> > diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c
> > b/drivers/gpu/drm/msm/msm_ringbuffer.c
> > index c803556a8f64..49a2c7422dc6 100644
> > --- a/drivers/gpu/drm/msm/msm_ringbuffer.c
> > +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
> > @@ -59,11 +59,13 @@ static const struct drm_sched_backend_ops
> > msm_sched_ops = {
> > struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu,
> > int id,
> > void *memptrs, uint64_t memptrs_iova)
> > {
> > + struct drm_sched_init_params params;
> > struct msm_ringbuffer *ring;
> > - long sched_timeout;
> > char name[32];
> > int ret;
> >
> > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > +
> > 	/* We assume everywhere that MSM_GPU_RINGBUFFER_SZ is a power of 2 */
> > BUILD_BUG_ON(!is_power_of_2(MSM_GPU_RINGBUFFER_SZ));
> >
> > @@ -95,13 +97,19 @@ struct msm_ringbuffer
> > *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
> > ring->memptrs = memptrs;
> > ring->memptrs_iova = memptrs_iova;
> >
> > - /* currently managing hangcheck ourselves: */
> > - sched_timeout = MAX_SCHEDULE_TIMEOUT;
> > + params.ops = &msm_sched_ops;
> > + params.submit_wq = NULL; /* An ordered wq gets allocated. */
> > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > + params.credit_limit = num_hw_submissions;
> > + params.hang_limit = 0;
> > + /* currently managing hangcheck ourselves: */
> > + params.timeout = MAX_SCHEDULE_TIMEOUT;
> > + params.timeout_wq = NULL; /* Use the system_wq. */
> > + params.score = NULL;
> > + params.name = to_msm_bo(ring->bo)->name;
> > + params.dev = gpu->dev->dev;
> >
> > - ret = drm_sched_init(&ring->sched, &msm_sched_ops, NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > - num_hw_submissions, 0, sched_timeout,
> > -			     NULL, NULL, to_msm_bo(ring->bo)->name, gpu->dev->dev);
> > + ret = drm_sched_init(&ring->sched, &params);
> > if (ret) {
> > goto fail;
> > }
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c
> > b/drivers/gpu/drm/nouveau/nouveau_sched.c
> > index 4412f2711fb5..f20c2e612750 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_sched.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
> > @@ -404,9 +404,11 @@ nouveau_sched_init(struct nouveau_sched
> > *sched, struct nouveau_drm *drm,
> > {
> > struct drm_gpu_scheduler *drm_sched = &sched->base;
> > struct drm_sched_entity *entity = &sched->entity;
> > -	const long timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
> > + struct drm_sched_init_params params;
> > int ret;
> >
> > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > +
> > if (!wq) {
> > 		wq = alloc_workqueue("nouveau_sched_wq_%d", 0, WQ_MAX_ACTIVE,
> > 				     current->pid);
> > @@ -416,10 +418,18 @@ nouveau_sched_init(struct nouveau_sched
> > *sched, struct nouveau_drm *drm,
> > sched->wq = wq;
> > }
> >
> > - ret = drm_sched_init(drm_sched, &nouveau_sched_ops, wq,
> > - NOUVEAU_SCHED_PRIORITY_COUNT,
> > - credit_limit, 0, timeout,
> > -			     NULL, NULL, "nouveau_sched", drm->dev->dev);
> > + params.ops = &nouveau_sched_ops;
> > + params.submit_wq = wq;
> > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > + params.credit_limit = credit_limit;
> > + params.hang_limit = 0;
> > + params.timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
> > + params.timeout_wq = NULL; /* Use the system_wq. */
> > + params.score = NULL;
> > + params.name = "nouveau_sched";
> > + params.dev = drm->dev->dev;
> > +
> > + ret = drm_sched_init(drm_sched, &params);
> > if (ret)
> > goto fail_wq;
> >
> > diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
> > b/drivers/gpu/drm/panfrost/panfrost_job.c
> > index 9b8e82fb8bc4..6b509ff446b5 100644
> > --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> > +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> > @@ -836,10 +836,13 @@ static irqreturn_t
> > panfrost_job_irq_handler(int irq, void *data)
> >
> > int panfrost_job_init(struct panfrost_device *pfdev)
> > {
> > + struct drm_sched_init_params params;
> > struct panfrost_job_slot *js;
> > unsigned int nentries = 2;
> > int ret, j;
> >
> > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > +
> > 	/* All GPUs have two entries per queue, but without jobchain
> > 	 * disambiguation stopping the right job in the close path is tricky,
> > * so let's just advertise one entry in that case.
> > @@ -872,16 +875,21 @@ int panfrost_job_init(struct panfrost_device
> > *pfdev)
> > if (!pfdev->reset.wq)
> > return -ENOMEM;
> >
> > + params.ops = &panfrost_sched_ops;
> > + params.submit_wq = NULL; /* An ordered wq gets allocated. */
> > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > + params.credit_limit = nentries;
> > + params.hang_limit = 0;
> > + params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> > + params.timeout_wq = pfdev->reset.wq;
> > + params.score = NULL;
> > + params.name = "pan_js";
> > + params.dev = pfdev->dev;
> > +
> > for (j = 0; j < NUM_JOB_SLOTS; j++) {
> > js->queue[j].fence_context =
> > dma_fence_context_alloc(1);
> >
> > - ret = drm_sched_init(&js->queue[j].sched,
> > - &panfrost_sched_ops, NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > -				     nentries, 0,
> > -				     msecs_to_jiffies(JOB_TIMEOUT_MS),
> > - pfdev->reset.wq,
> > - NULL, "pan_js", pfdev->dev);
> > + ret = drm_sched_init(&js->queue[j].sched, &params);
> > if (ret) {
> > 			dev_err(pfdev->dev, "Failed to create scheduler: %d.", ret);
> > goto err_sched;
> > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c
> > b/drivers/gpu/drm/panthor/panthor_mmu.c
> > index a49132f3778b..4362442cbfd8 100644
> > --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> > @@ -2268,6 +2268,7 @@ panthor_vm_create(struct panthor_device
> > *ptdev, bool for_mcu,
> > u64 full_va_range = 1ull << va_bits;
> > struct drm_gem_object *dummy_gem;
> > struct drm_gpu_scheduler *sched;
> > + struct drm_sched_init_params sched_params;
> > struct io_pgtable_cfg pgtbl_cfg;
> > u64 mair, min_va, va_range;
> > struct panthor_vm *vm;
> > @@ -2284,6 +2285,8 @@ panthor_vm_create(struct panthor_device
> > *ptdev, bool for_mcu,
> > goto err_free_vm;
> > }
> >
> > + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> > +
> > mutex_init(&vm->heaps.lock);
> > vm->for_mcu = for_mcu;
> > vm->ptdev = ptdev;
> > @@ -2325,11 +2328,18 @@ panthor_vm_create(struct panthor_device
> > *ptdev, bool for_mcu,
> > goto err_mm_takedown;
> > }
> >
> > + sched_params.ops = &panthor_vm_bind_ops;
> > + sched_params.submit_wq = ptdev->mmu->vm.wq;
> > + sched_params.num_rqs = 1;
> > + sched_params.credit_limit = 1;
> > + sched_params.hang_limit = 0;
> > 	/* Bind operations are synchronous for now, no timeout needed. */
> > -	ret = drm_sched_init(&vm->sched, &panthor_vm_bind_ops, ptdev->mmu->vm.wq,
> > -			     1, 1, 0,
> > -			     MAX_SCHEDULE_TIMEOUT, NULL, NULL,
> > -			     "panthor-vm-bind", ptdev->base.dev);
> > + sched_params.timeout = MAX_SCHEDULE_TIMEOUT;
> > + sched_params.timeout_wq = NULL; /* Use the system_wq. */
> > + sched_params.score = NULL;
> > + sched_params.name = "panthor-vm-bind";
> > + sched_params.dev = ptdev->base.dev;
> > + ret = drm_sched_init(&vm->sched, &sched_params);
> > if (ret)
> > goto err_free_io_pgtable;
> >
> > diff --git a/drivers/gpu/drm/panthor/panthor_sched.c
> > b/drivers/gpu/drm/panthor/panthor_sched.c
> > index ef4bec7ff9c7..a324346d302f 100644
> > --- a/drivers/gpu/drm/panthor/panthor_sched.c
> > +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> > @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group
> > *group,
> > const struct drm_panthor_queue_create *args)
> > {
> > struct drm_gpu_scheduler *drm_sched;
> > + struct drm_sched_init_params sched_params;
> > struct panthor_queue *queue;
> > int ret;
> >
> > @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group
> > *group,
> > if (!queue)
> > return ERR_PTR(-ENOMEM);
> >
> > + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> > +
> > queue->fence_ctx.id = dma_fence_context_alloc(1);
> > spin_lock_init(&queue->fence_ctx.lock);
> > INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
> > @@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group
> > *group,
> > if (ret)
> > goto err_free_queue;
> >
> > + sched_params.ops = &panthor_queue_sched_ops;
> > + sched_params.submit_wq = group->ptdev->scheduler->wq;
> > + sched_params.num_rqs = 1;
> > /*
> > -	 * Credit limit argument tells us the total number of instructions
> > +	 * The credit limit argument tells us the total number of instructions
> > 	 * across all CS slots in the ringbuffer, with some jobs requiring
> > 	 * twice as many as others, depending on their profiling status.
> > */
> > -	ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
> > -			     group->ptdev->scheduler->wq, 1,
> > -			     args->ringbuf_size / sizeof(u64),
> > -			     0, msecs_to_jiffies(JOB_TIMEOUT_MS),
> > -			     group->ptdev->reset.wq,
> > -			     NULL, "panthor-queue", group->ptdev->base.dev);
> > + sched_params.credit_limit = args->ringbuf_size / sizeof(u64);
> > + sched_params.hang_limit = 0;
> > + sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> > + sched_params.timeout_wq = group->ptdev->reset.wq;
> > + sched_params.score = NULL;
> > + sched_params.name = "panthor-queue";
> > + sched_params.dev = group->ptdev->base.dev;
> > +
> > + ret = drm_sched_init(&queue->scheduler, &sched_params);
> > if (ret)
> > goto err_free_queue;
> >
> > diff --git a/drivers/gpu/drm/scheduler/sched_main.c
> > b/drivers/gpu/drm/scheduler/sched_main.c
> > index 57da84908752..27db748a5269 100644
> > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > @@ -1240,40 +1240,25 @@ static void drm_sched_run_job_work(struct
> > work_struct *w)
> > * drm_sched_init - Init a gpu scheduler instance
> > *
> > * @sched: scheduler instance
> > - * @ops: backend operations for this scheduler
> > - * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
> > - *	       allocated and used
> > - * @num_rqs: number of runqueues, one for each priority, up to DRM_SCHED_PRIORITY_COUNT
> > - * @credit_limit: the number of credits this scheduler can hold from all jobs
> > - * @hang_limit: number of times to allow a job to hang before dropping it
> > - * @timeout: timeout value in jiffies for the scheduler
> > - * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
> > - *		used
> > - * @score: optional score atomic shared with other schedulers
> > - * @name: name used for debugging
> > - * @dev: target &struct device
> > + * @params: scheduler initialization parameters
> > *
> > * Return 0 on success, otherwise error code.
> > */
> > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > - const struct drm_sched_backend_ops *ops,
> > - struct workqueue_struct *submit_wq,
> > -		   u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> > -		   long timeout, struct workqueue_struct *timeout_wq,
> > -		   atomic_t *score, const char *name, struct device *dev)
> > + const struct drm_sched_init_params *params)
> > {
> > int i;
> >
> > - sched->ops = ops;
> > - sched->credit_limit = credit_limit;
> > - sched->name = name;
> > - sched->timeout = timeout;
> > - sched->timeout_wq = timeout_wq ? : system_wq;
> > - sched->hang_limit = hang_limit;
> > - sched->score = score ? score : &sched->_score;
> > - sched->dev = dev;
> > + sched->ops = params->ops;
> > + sched->credit_limit = params->credit_limit;
> > + sched->name = params->name;
> > + sched->timeout = params->timeout;
> > + sched->timeout_wq = params->timeout_wq ? : system_wq;
> > + sched->hang_limit = params->hang_limit;
> > +	sched->score = params->score ? params->score : &sched->_score;
> > + sched->dev = params->dev;
> >
> > - if (num_rqs > DRM_SCHED_PRIORITY_COUNT) {
> > + if (params->num_rqs > DRM_SCHED_PRIORITY_COUNT) {
> > 		/* This is a gross violation--tell drivers what the problem is.
> > 		 */
> > 		drm_err(sched, "%s: num_rqs cannot be greater than DRM_SCHED_PRIORITY_COUNT\n",
> > @@ -1288,16 +1273,16 @@ int drm_sched_init(struct drm_gpu_scheduler
> > *sched,
> > return 0;
> > }
> >
> > - if (submit_wq) {
> > - sched->submit_wq = submit_wq;
> > + if (params->submit_wq) {
> > + sched->submit_wq = params->submit_wq;
> > sched->own_submit_wq = false;
> > } else {
> > #ifdef CONFIG_LOCKDEP
> > -		sched->submit_wq = alloc_ordered_workqueue_lockdep_map(name,
> > -								       WQ_MEM_RECLAIM,
> > -								       &drm_sched_lockdep_map);
> > +		sched->submit_wq = alloc_ordered_workqueue_lockdep_map(
> > +				params->name, WQ_MEM_RECLAIM,
> > +				&drm_sched_lockdep_map);
> > #else
> > -		sched->submit_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
> > +		sched->submit_wq = alloc_ordered_workqueue(params->name, WQ_MEM_RECLAIM);
> > #endif
> > if (!sched->submit_wq)
> > return -ENOMEM;
> > @@ -1305,11 +1290,11 @@ int drm_sched_init(struct drm_gpu_scheduler
> > *sched,
> > sched->own_submit_wq = true;
> > }
> >
> > -	sched->sched_rq = kmalloc_array(num_rqs, sizeof(*sched->sched_rq),
> > +	sched->sched_rq = kmalloc_array(params->num_rqs, sizeof(*sched->sched_rq),
> > GFP_KERNEL | __GFP_ZERO);
> > if (!sched->sched_rq)
> > goto Out_check_own;
> > - sched->num_rqs = num_rqs;
> > + sched->num_rqs = params->num_rqs;
> > 	for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) {
> > 		sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL);
> > if (!sched->sched_rq[i])
> > diff --git a/drivers/gpu/drm/v3d/v3d_sched.c
> > b/drivers/gpu/drm/v3d/v3d_sched.c
> > index 99ac4995b5a1..716e6d074d87 100644
> > --- a/drivers/gpu/drm/v3d/v3d_sched.c
> > +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> > @@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops
> > v3d_cpu_sched_ops = {
> > .free_job = v3d_cpu_job_free
> > };
> >
> > +/*
> > + * v3d's scheduler instances are all identical, except for ops and
> > name.
> > + */
> > +static void
> > +v3d_common_sched_init(struct drm_sched_init_params *params, struct device *dev)
> > +{
> > + memset(params, 0, sizeof(struct drm_sched_init_params));
> > +
> > + params->submit_wq = NULL; /* An ordered wq gets allocated. */
> > + params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > + params->credit_limit = 1;
> > + params->hang_limit = 0;
> > + params->timeout = msecs_to_jiffies(500);
> > + params->timeout_wq = NULL; /* Use the system_wq. */
> > + params->score = NULL;
> > + params->dev = dev;
> > +}
> > +
> > +static int
> > +v3d_bin_sched_init(struct v3d_dev *v3d)
> > +{
> > + struct drm_sched_init_params params;
> > +
> > + v3d_common_sched_init(&params, v3d->drm.dev);
> > + params.ops = &v3d_bin_sched_ops;
> > + params.name = "v3d_bin";
> > +
> > + return drm_sched_init(&v3d->queue[V3D_BIN].sched, &params);
> > +}
> > +
> > +static int
> > +v3d_render_sched_init(struct v3d_dev *v3d)
> > +{
> > + struct drm_sched_init_params params;
> > +
> > + v3d_common_sched_init(&params, v3d->drm.dev);
> > + params.ops = &v3d_render_sched_ops;
> > + params.name = "v3d_render";
> > +
> > + return drm_sched_init(&v3d->queue[V3D_RENDER].sched, &params);
> > +}
> > +
> > +static int
> > +v3d_tfu_sched_init(struct v3d_dev *v3d)
> > +{
> > + struct drm_sched_init_params params;
> > +
> > + v3d_common_sched_init(&params, v3d->drm.dev);
> > + params.ops = &v3d_tfu_sched_ops;
> > + params.name = "v3d_tfu";
> > +
> > + return drm_sched_init(&v3d->queue[V3D_TFU].sched, &params);
> > +}
> > +
> > +static int
> > +v3d_csd_sched_init(struct v3d_dev *v3d)
> > +{
> > + struct drm_sched_init_params params;
> > +
> > + v3d_common_sched_init(&params, v3d->drm.dev);
> > + params.ops = &v3d_csd_sched_ops;
> > + params.name = "v3d_csd";
> > +
> > + return drm_sched_init(&v3d->queue[V3D_CSD].sched, &params);
> > +}
> > +
> > +static int
> > +v3d_cache_sched_init(struct v3d_dev *v3d)
> > +{
> > + struct drm_sched_init_params params;
> > +
> > + v3d_common_sched_init(&params, v3d->drm.dev);
> > + params.ops = &v3d_cache_clean_sched_ops;
> > + params.name = "v3d_cache_clean";
> > +
> > + return drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched, &params);
> > +}
> > +
> > +static int
> > +v3d_cpu_sched_init(struct v3d_dev *v3d)
> > +{
> > + struct drm_sched_init_params params;
> > +
> > + v3d_common_sched_init(&params, v3d->drm.dev);
> > + params.ops = &v3d_cpu_sched_ops;
> > + params.name = "v3d_cpu";
> > +
> > + return drm_sched_init(&v3d->queue[V3D_CPU].sched, &params);
> > +}
> > +
> > int
> > v3d_sched_init(struct v3d_dev *v3d)
> > {
> > - int hw_jobs_limit = 1;
> > - int job_hang_limit = 0;
> > - int hang_limit_ms = 500;
> > int ret;
> >
> > - ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
> > - &v3d_bin_sched_ops, NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > - hw_jobs_limit, job_hang_limit,
> > -			     msecs_to_jiffies(hang_limit_ms), NULL,
> > - NULL, "v3d_bin", v3d->drm.dev);
> > + ret = v3d_bin_sched_init(v3d);
> > if (ret)
> > return ret;
> >
> > - ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
> > - &v3d_render_sched_ops, NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > - hw_jobs_limit, job_hang_limit,
> > -			     msecs_to_jiffies(hang_limit_ms), NULL,
> > - NULL, "v3d_render", v3d->drm.dev);
> > + ret = v3d_render_sched_init(v3d);
> > if (ret)
> > goto fail;
> >
> > - ret = drm_sched_init(&v3d->queue[V3D_TFU].sched,
> > - &v3d_tfu_sched_ops, NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > - hw_jobs_limit, job_hang_limit,
> > -			     msecs_to_jiffies(hang_limit_ms), NULL,
> > - NULL, "v3d_tfu", v3d->drm.dev);
> > + ret = v3d_tfu_sched_init(v3d);
> > if (ret)
> > goto fail;
> >
> > if (v3d_has_csd(v3d)) {
> > -		ret = drm_sched_init(&v3d->queue[V3D_CSD].sched,
> > -				     &v3d_csd_sched_ops, NULL,
> > -				     DRM_SCHED_PRIORITY_COUNT,
> > -				     hw_jobs_limit, job_hang_limit,
> > -				     msecs_to_jiffies(hang_limit_ms), NULL,
> > -				     NULL, "v3d_csd", v3d->drm.dev);
> > + ret = v3d_csd_sched_init(v3d);
> > if (ret)
> > goto fail;
> >
> > -		ret = drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
> > -				     &v3d_cache_clean_sched_ops, NULL,
> > -				     DRM_SCHED_PRIORITY_COUNT,
> > -				     hw_jobs_limit, job_hang_limit,
> > -				     msecs_to_jiffies(hang_limit_ms), NULL,
> > -				     NULL, "v3d_cache_clean", v3d->drm.dev);
> > + ret = v3d_cache_sched_init(v3d);
> > if (ret)
> > goto fail;
> > }
> >
> > - ret = drm_sched_init(&v3d->queue[V3D_CPU].sched,
> > - &v3d_cpu_sched_ops, NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > - 1, job_hang_limit,
> > -			     msecs_to_jiffies(hang_limit_ms), NULL,
> > - NULL, "v3d_cpu", v3d->drm.dev);
> > + ret = v3d_cpu_sched_init(v3d);
> > if (ret)
> > goto fail;
> >
> > diff --git a/drivers/gpu/drm/xe/xe_execlist.c
> > b/drivers/gpu/drm/xe/xe_execlist.c
> > index a8c416a48812..7f29b7f04af4 100644
> > --- a/drivers/gpu/drm/xe/xe_execlist.c
> > +++ b/drivers/gpu/drm/xe/xe_execlist.c
> > @@ -332,10 +332,13 @@ static const struct drm_sched_backend_ops
> > drm_sched_ops = {
> > static int execlist_exec_queue_init(struct xe_exec_queue *q)
> > {
> > struct drm_gpu_scheduler *sched;
> > + struct drm_sched_init_params params;
> > struct xe_execlist_exec_queue *exl;
> > struct xe_device *xe = gt_to_xe(q->gt);
> > int err;
> >
> > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > +
> > xe_assert(xe, !xe_device_uc_enabled(xe));
> >
> > 	drm_info(&xe->drm, "Enabling execlist submission (GuC submission disabled)\n");
> > @@ -346,11 +349,18 @@ static int execlist_exec_queue_init(struct
> > xe_exec_queue *q)
> >
> > exl->q = q;
> >
> > - err = drm_sched_init(&exl->sched, &drm_sched_ops, NULL, 1,
> > -			     q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES,
> > -			     XE_SCHED_HANG_LIMIT, XE_SCHED_JOB_TIMEOUT,
> > - NULL, NULL, q->hwe->name,
> > - gt_to_xe(q->gt)->drm.dev);
> > + params.ops = &drm_sched_ops;
> > + params.submit_wq = NULL; /* An ordered wq gets allocated. */
> > + params.num_rqs = 1;
> > + params.credit_limit = q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES;
> > + params.hang_limit = XE_SCHED_HANG_LIMIT;
> > + params.timeout = XE_SCHED_JOB_TIMEOUT;
> > + params.timeout_wq = NULL; /* Use the system_wq. */
> > + params.score = NULL;
> > + params.name = q->hwe->name;
> > + params.dev = gt_to_xe(q->gt)->drm.dev;
> > +
> > + err = drm_sched_init(&exl->sched, &params);
> > if (err)
> > goto err_free;
> >
> > diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > index 50361b4638f9..2129fee83f25 100644
> > --- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > +++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > @@ -63,13 +63,26 @@ int xe_sched_init(struct xe_gpu_scheduler
> > *sched,
> > atomic_t *score, const char *name,
> > struct device *dev)
> > {
> > + struct drm_sched_init_params params;
> > +
> > sched->ops = xe_ops;
> > INIT_LIST_HEAD(&sched->msgs);
> > INIT_WORK(&sched->work_process_msg,
> > xe_sched_process_msg_work);
> >
> > -	return drm_sched_init(&sched->base, ops, submit_wq, 1, hw_submission,
> > -			      hang_limit, timeout, timeout_wq, score, name,
> > -			      dev);
> > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > +
> > + params.ops = ops;
> > + params.submit_wq = submit_wq;
> > + params.num_rqs = 1;
> > + params.credit_limit = hw_submission;
> > + params.hang_limit = hang_limit;
> > + params.timeout = timeout;
> > + params.timeout_wq = timeout_wq;
> > + params.score = score;
> > + params.name = name;
> > + params.dev = dev;
> > +
> > + return drm_sched_init(&sched->base, &params);
> > }
> >
> > void xe_sched_fini(struct xe_gpu_scheduler *sched)
> > diff --git a/include/drm/gpu_scheduler.h
> > b/include/drm/gpu_scheduler.h
> > index 95e17504e46a..1a834ef43862 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -553,12 +553,37 @@ struct drm_gpu_scheduler {
> > struct device *dev;
> > };
> >
> > +/**
> > + * struct drm_sched_init_params - parameters for initializing a DRM GPU scheduler
> > + *
> > + * @ops: backend operations provided by the driver
> > + * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
> > + *	       allocated and used
> > + * @num_rqs: Number of run-queues. This is at most DRM_SCHED_PRIORITY_COUNT,
> > + *	     as there's usually one run-queue per priority, but could be less.
> > + * @credit_limit: the number of credits this scheduler can hold from all jobs
> > + * @hang_limit: number of times to allow a job to hang before dropping it
> > + * @timeout: timeout value in jiffies for the scheduler
> > + * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
> > + *		used
> > + * @score: optional score atomic shared with other schedulers
> > + * @name: name used for debugging
> > + * @dev: associated device. Used for debugging
> > + */
> > +struct drm_sched_init_params {
> > + const struct drm_sched_backend_ops *ops;
> > + struct workqueue_struct *submit_wq;
> > + struct workqueue_struct *timeout_wq;
> > + u32 num_rqs, credit_limit;
> > + unsigned int hang_limit;
> > + long timeout;
> > + atomic_t *score;
> > + const char *name;
> > + struct device *dev;
> > +};
> > +
> > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > - const struct drm_sched_backend_ops *ops,
> > - struct workqueue_struct *submit_wq,
> > -		   u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> > -		   long timeout, struct workqueue_struct *timeout_wq,
> > -		   atomic_t *score, const char *name, struct device *dev);
> > + const struct drm_sched_init_params *params);
> >
> > void drm_sched_fini(struct drm_gpu_scheduler *sched);
> > int drm_sched_job_init(struct drm_sched_job *job,
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:48 ` Philipp Stanner
@ 2025-01-22 15:02 ` Matthew Brost
2025-01-22 15:06 ` Christian König
1 sibling, 0 replies; 35+ messages in thread
From: Matthew Brost @ 2025-01-22 15:02 UTC (permalink / raw)
To: Philipp Stanner
Cc: Christian König, Philipp Stanner, Alex Deucher, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Danilo Krummrich, Boris Brezillon,
Rob Herring, Steven Price, Liviu Dudau, Luben Tuikov, Melissa Wen,
Maíra Canal, Lucas De Marchi, Thomas Hellström,
Rodrigo Vivi, Sunil Khatri, Lijo Lazar, Mario Limonciello, Ma Jun,
Yunxiang Li, amd-gfx, dri-devel, linux-kernel, etnaviv, lima,
linux-arm-msm, freedreno, nouveau, intel-xe
On Wed, Jan 22, 2025 at 03:48:54PM +0100, Philipp Stanner wrote:
> On Wed, 2025-01-22 at 15:34 +0100, Christian König wrote:
> > Am 22.01.25 um 15:08 schrieb Philipp Stanner:
> > > drm_sched_init() has a great many parameters and upcoming new
> > > functionality for the scheduler might add even more. Generally, the
> > > great number of parameters reduces readability and has already
> > > caused
> > > one misnaming in:
> > >
> > > commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in
> > > nouveau_sched_init()").
> > >
> > > Introduce a new struct for the scheduler init parameters and port
> > > all
> > > users.
> > >
> > > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > > ---
> > > Howdy,
> > >
> > > I have a patch-series in the pipe that will add a `flags` argument
> > > to
> > > drm_sched_init(). I thought it would be wise to first rework the
> > > API as
> > > detailed in this patch. It's really a lot of parameters by now, and
> > > I
> > > would expect that it might get more and more over the years for
> > > special
> > > use cases etc.
> > >
> > > Regards,
> > > P.
> > > ---
> > > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +++-
> > > drivers/gpu/drm/etnaviv/etnaviv_sched.c | 20 ++-
> > > drivers/gpu/drm/imagination/pvr_queue.c | 21 +++-
> > > drivers/gpu/drm/lima/lima_sched.c | 21 +++-
> > > drivers/gpu/drm/msm/msm_ringbuffer.c | 22 ++--
> > > drivers/gpu/drm/nouveau/nouveau_sched.c | 20 ++-
> > > drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++--
> > > drivers/gpu/drm/panthor/panthor_mmu.c | 18 ++-
> > > drivers/gpu/drm/panthor/panthor_sched.c | 23 ++--
> > > drivers/gpu/drm/scheduler/sched_main.c | 53 +++-----
> > > drivers/gpu/drm/v3d/v3d_sched.c | 135 +++++++++++++++-
> > > -----
> > > drivers/gpu/drm/xe/xe_execlist.c | 20 ++-
> > > drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++-
> > > include/drm/gpu_scheduler.h | 35 +++++-
> > > 14 files changed, 311 insertions(+), 139 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > > index cd4fac120834..c1f03eb5f5ea 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > > @@ -2821,6 +2821,9 @@ static int
> > > amdgpu_device_init_schedulers(struct amdgpu_device *adev)
> > > {
> > > long timeout;
> > > int r, i;
> > > + struct drm_sched_init_params params;
> >
> > Please keep the reverse xmas tree ordering for variable declaration.
> > E.g. long lines first and variables like "i" and "r" last.
>
> Okay dokay
>
> >
> > Apart from that looks like a good idea to me.
> >
+1. Looks like a good idea to me. I'm quite sure I have transposed
arguments in the past and broken things; this would be a way to avoid
that.
One bikeshed. s/drm_sched_init_params/drm_sched_init_args? No strong
preference though.
Matt
> >
> > > +
> > > +	memset(&params, 0, sizeof(struct drm_sched_init_params));
> > >
> > > for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
> > > struct amdgpu_ring *ring = adev->rings[i];
> > > @@ -2844,12 +2847,18 @@ static int
> > > amdgpu_device_init_schedulers(struct amdgpu_device *adev)
> > > break;
> > > }
> > >
> > > - r = drm_sched_init(&ring->sched,
> > > &amdgpu_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - ring->num_hw_submission, 0,
> > > - timeout, adev->reset_domain-
> > > >wq,
> > > - ring->sched_score, ring->name,
> > > - adev->dev);
> > > + params.ops = &amdgpu_sched_ops;
> > > +	params.submit_wq = NULL; /* Use an allocated ordered wq. */
> > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > + params.credit_limit = ring->num_hw_submission;
> > > + params.hang_limit = 0;
> >
> > Could we please remove the hang limit as first step to get this awful
> > feature deprecated?
>
> Remove it? From the struct you mean?
>
> We can mark it as deprecated in the docstring of the new struct. That's
> what you mean, isn't it?
>
> P.
>
> >
> > Thanks,
> > Christian.
> >
> > > + params.timeout = timeout;
> > > + params.timeout_wq = adev->reset_domain->wq;
> > > + params.score = ring->sched_score;
> > > + params.name = ring->name;
> > > + params.dev = adev->dev;
> > > +
> > > +	r = drm_sched_init(&ring->sched, &params);
> > > if (r) {
> > > DRM_ERROR("Failed to create scheduler on
> > > ring %s.\n",
> > > ring->name);
> > > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > > b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > > index 5b67eda122db..7d8517f1963e 100644
> > > --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > > +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > > @@ -145,12 +145,22 @@ int etnaviv_sched_push_job(struct
> > > etnaviv_gem_submit *submit)
> > > int etnaviv_sched_init(struct etnaviv_gpu *gpu)
> > > {
> > > int ret;
> > > + struct drm_sched_init_params params;
> > >
> > > - ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
> > > NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - etnaviv_hw_jobs_limit,
> > > etnaviv_job_hang_limit,
> > > - msecs_to_jiffies(500), NULL, NULL,
> > > - dev_name(gpu->dev), gpu->dev);
> > > +	memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > +
> > > + params.ops = &etnaviv_sched_ops;
> > > +	params.submit_wq = NULL; /* Use an allocated ordered wq. */
> > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > + params.credit_limit = etnaviv_hw_jobs_limit;
> > > + params.hang_limit = etnaviv_job_hang_limit;
> > > + params.timeout = msecs_to_jiffies(500);
> > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > + params.score = NULL;
> > > + params.name = dev_name(gpu->dev);
> > > + params.dev = gpu->dev;
> > > +
> > > +	ret = drm_sched_init(&gpu->sched, &params);
> > > if (ret)
> > > return ret;
> > >
> > > diff --git a/drivers/gpu/drm/imagination/pvr_queue.c
> > > b/drivers/gpu/drm/imagination/pvr_queue.c
> > > index c4f08432882b..03a2ce1a88e7 100644
> > > --- a/drivers/gpu/drm/imagination/pvr_queue.c
> > > +++ b/drivers/gpu/drm/imagination/pvr_queue.c
> > > @@ -1211,10 +1211,13 @@ struct pvr_queue *pvr_queue_create(struct
> > > pvr_context *ctx,
> > > };
> > > struct pvr_device *pvr_dev = ctx->pvr_dev;
> > > struct drm_gpu_scheduler *sched;
> > > + struct drm_sched_init_params sched_params;
> > > struct pvr_queue *queue;
> > > int ctx_state_size, err;
> > > void *cpu_map;
> > >
> > > + memset(&sched_params, 0, sizeof(struct
> > > drm_sched_init_params));
> > > +
> > > if (WARN_ON(type >= sizeof(props)))
> > > return ERR_PTR(-EINVAL);
> > >
> > > @@ -1282,12 +1285,18 @@ struct pvr_queue *pvr_queue_create(struct
> > > pvr_context *ctx,
> > >
> > > queue->timeline_ufo.value = cpu_map;
> > >
> > > - err = drm_sched_init(&queue->scheduler,
> > > - &pvr_queue_sched_ops,
> > > - pvr_dev->sched_wq, 1, 64 * 1024, 1,
> > > - msecs_to_jiffies(500),
> > > - pvr_dev->sched_wq, NULL, "pvr-queue",
> > > - pvr_dev->base.dev);
> > > + sched_params.ops = &pvr_queue_sched_ops;
> > > + sched_params.submit_wq = pvr_dev->sched_wq;
> > > + sched_params.num_rqs = 1;
> > > + sched_params.credit_limit = 64 * 1024;
> > > + sched_params.hang_limit = 1;
> > > + sched_params.timeout = msecs_to_jiffies(500);
> > > + sched_params.timeout_wq = pvr_dev->sched_wq;
> > > + sched_params.score = NULL;
> > > + sched_params.name = "pvr-queue";
> > > + sched_params.dev = pvr_dev->base.dev;
> > > +
> > > + err = drm_sched_init(&queue->scheduler, &sched_params);
> > > if (err)
> > > goto err_release_ufo;
> > >
> > > diff --git a/drivers/gpu/drm/lima/lima_sched.c
> > > b/drivers/gpu/drm/lima/lima_sched.c
> > > index b40c90e97d7e..a64c50fb6d1e 100644
> > > --- a/drivers/gpu/drm/lima/lima_sched.c
> > > +++ b/drivers/gpu/drm/lima/lima_sched.c
> > > @@ -513,20 +513,29 @@ static void lima_sched_recover_work(struct
> > > work_struct *work)
> > >
> > > int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char
> > > *name)
> > > {
> > > + struct drm_sched_init_params params;
> > > unsigned int timeout = lima_sched_timeout_ms > 0 ?
> > > lima_sched_timeout_ms : 10000;
> > >
> > > +	memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > +
> > > pipe->fence_context = dma_fence_context_alloc(1);
> > > spin_lock_init(&pipe->fence_lock);
> > >
> > > INIT_WORK(&pipe->recover_work, lima_sched_recover_work);
> > >
> > > - return drm_sched_init(&pipe->base, &lima_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - 1,
> > > - lima_job_hang_limit,
> > > - msecs_to_jiffies(timeout), NULL,
> > > - NULL, name, pipe->ldev->dev);
> > > + params.ops = &lima_sched_ops;
> > > +	params.submit_wq = NULL; /* Use an allocated ordered wq. */
> > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > + params.credit_limit = 1;
> > > + params.hang_limit = lima_job_hang_limit;
> > > + params.timeout = msecs_to_jiffies(timeout);
> > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > + params.score = NULL;
> > > + params.name = name;
> > > + params.dev = pipe->ldev->dev;
> > > +
> > > +	return drm_sched_init(&pipe->base, &params);
> > > }
> > >
> > > void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
> > > diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c
> > > b/drivers/gpu/drm/msm/msm_ringbuffer.c
> > > index c803556a8f64..49a2c7422dc6 100644
> > > --- a/drivers/gpu/drm/msm/msm_ringbuffer.c
> > > +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
> > > @@ -59,11 +59,13 @@ static const struct drm_sched_backend_ops
> > > msm_sched_ops = {
> > > struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu,
> > > int id,
> > > void *memptrs, uint64_t memptrs_iova)
> > > {
> > > + struct drm_sched_init_params params;
> > > struct msm_ringbuffer *ring;
> > > - long sched_timeout;
> > > char name[32];
> > > int ret;
> > >
> > > +	memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > +
> > > /* We assume everywhere that MSM_GPU_RINGBUFFER_SZ is a
> > > power of 2 */
> > > BUILD_BUG_ON(!is_power_of_2(MSM_GPU_RINGBUFFER_SZ));
> > >
> > > @@ -95,13 +97,19 @@ struct msm_ringbuffer
> > > *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
> > > ring->memptrs = memptrs;
> > > ring->memptrs_iova = memptrs_iova;
> > >
> > > - /* currently managing hangcheck ourselves: */
> > > - sched_timeout = MAX_SCHEDULE_TIMEOUT;
> > > + params.ops = &msm_sched_ops;
> > > +	params.submit_wq = NULL; /* Use an allocated ordered wq. */
> > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > + params.credit_limit = num_hw_submissions;
> > > + params.hang_limit = 0;
> > > + /* currently managing hangcheck ourselves: */
> > > + params.timeout = MAX_SCHEDULE_TIMEOUT;
> > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > + params.score = NULL;
> > > + params.name = to_msm_bo(ring->bo)->name;
> > > + params.dev = gpu->dev->dev;
> > >
> > > - ret = drm_sched_init(&ring->sched, &msm_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - num_hw_submissions, 0, sched_timeout,
> > > - NULL, NULL, to_msm_bo(ring->bo)-
> > > >name, gpu->dev->dev);
> > > +	ret = drm_sched_init(&ring->sched, &params);
> > > if (ret) {
> > > goto fail;
> > > }
> > > diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c
> > > b/drivers/gpu/drm/nouveau/nouveau_sched.c
> > > index 4412f2711fb5..f20c2e612750 100644
> > > --- a/drivers/gpu/drm/nouveau/nouveau_sched.c
> > > +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
> > > @@ -404,9 +404,11 @@ nouveau_sched_init(struct nouveau_sched
> > > *sched, struct nouveau_drm *drm,
> > > {
> > > struct drm_gpu_scheduler *drm_sched = &sched->base;
> > > struct drm_sched_entity *entity = &sched->entity;
> > > - const long timeout =
> > > msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
> > > + struct drm_sched_init_params params;
> > > int ret;
> > >
> > > +	memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > +
> > > if (!wq) {
> > > wq = alloc_workqueue("nouveau_sched_wq_%d", 0,
> > > WQ_MAX_ACTIVE,
> > > current->pid);
> > > @@ -416,10 +418,18 @@ nouveau_sched_init(struct nouveau_sched
> > > *sched, struct nouveau_drm *drm,
> > > sched->wq = wq;
> > > }
> > >
> > > - ret = drm_sched_init(drm_sched, &nouveau_sched_ops, wq,
> > > - NOUVEAU_SCHED_PRIORITY_COUNT,
> > > - credit_limit, 0, timeout,
> > > - NULL, NULL, "nouveau_sched", drm-
> > > >dev->dev);
> > > + params.ops = &nouveau_sched_ops;
> > > + params.submit_wq = wq;
> > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > + params.credit_limit = credit_limit;
> > > + params.hang_limit = 0;
> > > + params.timeout =
> > > msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
> > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > + params.score = NULL;
> > > + params.name = "nouveau_sched";
> > > + params.dev = drm->dev->dev;
> > > +
> > > +	ret = drm_sched_init(drm_sched, &params);
> > > if (ret)
> > > goto fail_wq;
> > >
> > > diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
> > > b/drivers/gpu/drm/panfrost/panfrost_job.c
> > > index 9b8e82fb8bc4..6b509ff446b5 100644
> > > --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> > > +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> > > @@ -836,10 +836,13 @@ static irqreturn_t
> > > panfrost_job_irq_handler(int irq, void *data)
> > >
> > > int panfrost_job_init(struct panfrost_device *pfdev)
> > > {
> > > + struct drm_sched_init_params params;
> > > struct panfrost_job_slot *js;
> > > unsigned int nentries = 2;
> > > int ret, j;
> > >
> > > +	memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > +
> > > /* All GPUs have two entries per queue, but without
> > > jobchain
> > > * disambiguation stopping the right job in the close path
> > > is tricky,
> > > * so let's just advertise one entry in that case.
> > > @@ -872,16 +875,21 @@ int panfrost_job_init(struct panfrost_device
> > > *pfdev)
> > > if (!pfdev->reset.wq)
> > > return -ENOMEM;
> > >
> > > + params.ops = &panfrost_sched_ops;
> > > +	params.submit_wq = NULL; /* Use an allocated ordered wq. */
> > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > + params.credit_limit = nentries;
> > > + params.hang_limit = 0;
> > > + params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> > > + params.timeout_wq = pfdev->reset.wq;
> > > + params.score = NULL;
> > > + params.name = "pan_js";
> > > + params.dev = pfdev->dev;
> > > +
> > > for (j = 0; j < NUM_JOB_SLOTS; j++) {
> > > js->queue[j].fence_context =
> > > dma_fence_context_alloc(1);
> > >
> > > - ret = drm_sched_init(&js->queue[j].sched,
> > > - &panfrost_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - nentries, 0,
> > > -
> > > msecs_to_jiffies(JOB_TIMEOUT_MS),
> > > - pfdev->reset.wq,
> > > - NULL, "pan_js", pfdev->dev);
> > > + ret = drm_sched_init(&js->queue[j].sched,
> > > &params);
> > > if (ret) {
> > > dev_err(pfdev->dev, "Failed to create
> > > scheduler: %d.", ret);
> > > goto err_sched;
> > > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c
> > > b/drivers/gpu/drm/panthor/panthor_mmu.c
> > > index a49132f3778b..4362442cbfd8 100644
> > > --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> > > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> > > @@ -2268,6 +2268,7 @@ panthor_vm_create(struct panthor_device
> > > *ptdev, bool for_mcu,
> > > u64 full_va_range = 1ull << va_bits;
> > > struct drm_gem_object *dummy_gem;
> > > struct drm_gpu_scheduler *sched;
> > > + struct drm_sched_init_params sched_params;
> > > struct io_pgtable_cfg pgtbl_cfg;
> > > u64 mair, min_va, va_range;
> > > struct panthor_vm *vm;
> > > @@ -2284,6 +2285,8 @@ panthor_vm_create(struct panthor_device
> > > *ptdev, bool for_mcu,
> > > goto err_free_vm;
> > > }
> > >
> > > + memset(&sched_params, 0, sizeof(struct
> > > drm_sched_init_params));
> > > +
> > > mutex_init(&vm->heaps.lock);
> > > vm->for_mcu = for_mcu;
> > > vm->ptdev = ptdev;
> > > @@ -2325,11 +2328,18 @@ panthor_vm_create(struct panthor_device
> > > *ptdev, bool for_mcu,
> > > goto err_mm_takedown;
> > > }
> > >
> > > + sched_params.ops = &panthor_vm_bind_ops;
> > > + sched_params.submit_wq = ptdev->mmu->vm.wq;
> > > + sched_params.num_rqs = 1;
> > > + sched_params.credit_limit = 1;
> > > + sched_params.hang_limit = 0;
> > > /* Bind operations are synchronous for now, no timeout
> > > needed. */
> > > - ret = drm_sched_init(&vm->sched, &panthor_vm_bind_ops,
> > > ptdev->mmu->vm.wq,
> > > - 1, 1, 0,
> > > - MAX_SCHEDULE_TIMEOUT, NULL, NULL,
> > > - "panthor-vm-bind", ptdev->base.dev);
> > > + sched_params.timeout = MAX_SCHEDULE_TIMEOUT;
> > > + sched_params.timeout_wq = NULL; /* Use the system_wq. */
> > > + sched_params.score = NULL;
> > > + sched_params.name = "panthor-vm-bind";
> > > + sched_params.dev = ptdev->base.dev;
> > > + ret = drm_sched_init(&vm->sched, &sched_params);
> > > if (ret)
> > > goto err_free_io_pgtable;
> > >
> > > diff --git a/drivers/gpu/drm/panthor/panthor_sched.c
> > > b/drivers/gpu/drm/panthor/panthor_sched.c
> > > index ef4bec7ff9c7..a324346d302f 100644
> > > --- a/drivers/gpu/drm/panthor/panthor_sched.c
> > > +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> > > @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group
> > > *group,
> > > const struct drm_panthor_queue_create *args)
> > > {
> > > struct drm_gpu_scheduler *drm_sched;
> > > + struct drm_sched_init_params sched_params;
> > > struct panthor_queue *queue;
> > > int ret;
> > >
> > > @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group
> > > *group,
> > > if (!queue)
> > > return ERR_PTR(-ENOMEM);
> > >
> > > + memset(&sched_params, 0, sizeof(struct
> > > drm_sched_init_params));
> > > +
> > > queue->fence_ctx.id = dma_fence_context_alloc(1);
> > > spin_lock_init(&queue->fence_ctx.lock);
> > > INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
> > > @@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group
> > > *group,
> > > if (ret)
> > > goto err_free_queue;
> > >
> > > + sched_params.ops = &panthor_queue_sched_ops;
> > > + sched_params.submit_wq = group->ptdev->scheduler->wq;
> > > + sched_params.num_rqs = 1;
> > > /*
> > > - * Credit limit argument tells us the total number of
> > > instructions
> > > + * The credit limit argument tells us the total number of
> > > instructions
> > > * across all CS slots in the ringbuffer, with some jobs
> > > requiring
> > > * twice as many as others, depending on their profiling
> > > status.
> > > */
> > > - ret = drm_sched_init(&queue->scheduler,
> > > &panthor_queue_sched_ops,
> > > - group->ptdev->scheduler->wq, 1,
> > > - args->ringbuf_size / sizeof(u64),
> > > - 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
> > > - group->ptdev->reset.wq,
> > > - NULL, "panthor-queue", group->ptdev-
> > > >base.dev);
> > > + sched_params.credit_limit = args->ringbuf_size /
> > > sizeof(u64);
> > > + sched_params.hang_limit = 0;
> > > + sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> > > + sched_params.timeout_wq = group->ptdev->reset.wq;
> > > + sched_params.score = NULL;
> > > + sched_params.name = "panthor-queue";
> > > + sched_params.dev = group->ptdev->base.dev;
> > > +
> > > + ret = drm_sched_init(&queue->scheduler, &sched_params);
> > > if (ret)
> > > goto err_free_queue;
> > >
> > > diff --git a/drivers/gpu/drm/scheduler/sched_main.c
> > > b/drivers/gpu/drm/scheduler/sched_main.c
> > > index 57da84908752..27db748a5269 100644
> > > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > > @@ -1240,40 +1240,25 @@ static void drm_sched_run_job_work(struct
> > > work_struct *w)
> > > * drm_sched_init - Init a gpu scheduler instance
> > > *
> > > * @sched: scheduler instance
> > > - * @ops: backend operations for this scheduler
> > > - * @submit_wq: workqueue to use for submission. If NULL, an
> > > ordered wq is
> > > - * allocated and used
> > > - * @num_rqs: number of runqueues, one for each priority, up to
> > > DRM_SCHED_PRIORITY_COUNT
> > > - * @credit_limit: the number of credits this scheduler can hold
> > > from all jobs
> > > - * @hang_limit: number of times to allow a job to hang before
> > > dropping it
> > > - * @timeout: timeout value in jiffies for the scheduler
> > > - * @timeout_wq: workqueue to use for timeout work. If NULL, the
> > > system_wq is
> > > - * used
> > > - * @score: optional score atomic shared with other schedulers
> > > - * @name: name used for debugging
> > > - * @dev: target &struct device
> > > + * @params: scheduler initialization parameters
> > > *
> > > * Return 0 on success, otherwise error code.
> > > */
> > > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > - const struct drm_sched_backend_ops *ops,
> > > - struct workqueue_struct *submit_wq,
> > > - u32 num_rqs, u32 credit_limit, unsigned int
> > > hang_limit,
> > > - long timeout, struct workqueue_struct
> > > *timeout_wq,
> > > - atomic_t *score, const char *name, struct
> > > device *dev)
> > > + const struct drm_sched_init_params *params)
> > > {
> > > int i;
> > >
> > > - sched->ops = ops;
> > > - sched->credit_limit = credit_limit;
> > > - sched->name = name;
> > > - sched->timeout = timeout;
> > > - sched->timeout_wq = timeout_wq ? : system_wq;
> > > - sched->hang_limit = hang_limit;
> > > - sched->score = score ? score : &sched->_score;
> > > - sched->dev = dev;
> > > + sched->ops = params->ops;
> > > + sched->credit_limit = params->credit_limit;
> > > + sched->name = params->name;
> > > + sched->timeout = params->timeout;
> > > + sched->timeout_wq = params->timeout_wq ? : system_wq;
> > > + sched->hang_limit = params->hang_limit;
> > > + sched->score = params->score ? params->score : &sched-
> > > >_score;
> > > + sched->dev = params->dev;
> > >
> > > - if (num_rqs > DRM_SCHED_PRIORITY_COUNT) {
> > > + if (params->num_rqs > DRM_SCHED_PRIORITY_COUNT) {
> > > /* This is a gross violation--tell drivers what
> > > the problem is.
> > > */
> > > drm_err(sched, "%s: num_rqs cannot be greater than
> > > DRM_SCHED_PRIORITY_COUNT\n",
> > > @@ -1288,16 +1273,16 @@ int drm_sched_init(struct drm_gpu_scheduler
> > > *sched,
> > > return 0;
> > > }
> > >
> > > - if (submit_wq) {
> > > - sched->submit_wq = submit_wq;
> > > + if (params->submit_wq) {
> > > + sched->submit_wq = params->submit_wq;
> > > sched->own_submit_wq = false;
> > > } else {
> > > #ifdef CONFIG_LOCKDEP
> > > - sched->submit_wq =
> > > alloc_ordered_workqueue_lockdep_map(name,
> > > -
> > > WQ_MEM_RECLAIM,
> > > -
> > > &drm_sched_lockdep_map);
> > > + sched->submit_wq =
> > > alloc_ordered_workqueue_lockdep_map(
> > > + params->name,
> > > WQ_MEM_RECLAIM,
> > > + &drm_sched_lockdep_map);
> > > #else
> > > - sched->submit_wq = alloc_ordered_workqueue(name,
> > > WQ_MEM_RECLAIM);
> > > + sched->submit_wq = alloc_ordered_workqueue(params-
> > > >name, WQ_MEM_RECLAIM);
> > > #endif
> > > if (!sched->submit_wq)
> > > return -ENOMEM;
> > > @@ -1305,11 +1290,11 @@ int drm_sched_init(struct drm_gpu_scheduler
> > > *sched,
> > > sched->own_submit_wq = true;
> > > }
> > >
> > > - sched->sched_rq = kmalloc_array(num_rqs, sizeof(*sched-
> > > >sched_rq),
> > > + sched->sched_rq = kmalloc_array(params->num_rqs,
> > > sizeof(*sched->sched_rq),
> > > GFP_KERNEL | __GFP_ZERO);
> > > if (!sched->sched_rq)
> > > goto Out_check_own;
> > > - sched->num_rqs = num_rqs;
> > > + sched->num_rqs = params->num_rqs;
> > > for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs;
> > > i++) {
> > > sched->sched_rq[i] = kzalloc(sizeof(*sched-
> > > >sched_rq[i]), GFP_KERNEL);
> > > if (!sched->sched_rq[i])
> > > diff --git a/drivers/gpu/drm/v3d/v3d_sched.c
> > > b/drivers/gpu/drm/v3d/v3d_sched.c
> > > index 99ac4995b5a1..716e6d074d87 100644
> > > --- a/drivers/gpu/drm/v3d/v3d_sched.c
> > > +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> > > @@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops
> > > v3d_cpu_sched_ops = {
> > > .free_job = v3d_cpu_job_free
> > > };
> > >
> > > +/*
> > > + * v3d's scheduler instances are all identical, except for ops and
> > > name.
> > > + */
> > > +static void
> > > +v3d_common_sched_init(struct drm_sched_init_params *params, struct
> > > device *dev)
> > > +{
> > > + memset(params, 0, sizeof(struct drm_sched_init_params));
> > > +
> > > +	params->submit_wq = NULL; /* Use an allocated ordered wq. */
> > > + params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > + params->credit_limit = 1;
> > > + params->hang_limit = 0;
> > > + params->timeout = msecs_to_jiffies(500);
> > > + params->timeout_wq = NULL; /* Use the system_wq. */
> > > + params->score = NULL;
> > > + params->dev = dev;
> > > +}
> > > +
> > > +static int
> > > +v3d_bin_sched_init(struct v3d_dev *v3d)
> > > +{
> > > + struct drm_sched_init_params params;
> > > +
> > > +	v3d_common_sched_init(&params, v3d->drm.dev);
> > > + params.ops = &v3d_bin_sched_ops;
> > > + params.name = "v3d_bin";
> > > +
> > > + return drm_sched_init(&v3d->queue[V3D_BIN].sched,
> > > &params);
> > > +}
> > > +
> > > +static int
> > > +v3d_render_sched_init(struct v3d_dev *v3d)
> > > +{
> > > + struct drm_sched_init_params params;
> > > +
> > > +	v3d_common_sched_init(&params, v3d->drm.dev);
> > > + params.ops = &v3d_render_sched_ops;
> > > + params.name = "v3d_render";
> > > +
> > > + return drm_sched_init(&v3d->queue[V3D_RENDER].sched,
> > > &params);
> > > +}
> > > +
> > > +static int
> > > +v3d_tfu_sched_init(struct v3d_dev *v3d)
> > > +{
> > > + struct drm_sched_init_params params;
> > > +
> > > +	v3d_common_sched_init(&params, v3d->drm.dev);
> > > + params.ops = &v3d_tfu_sched_ops;
> > > + params.name = "v3d_tfu";
> > > +
> > > + return drm_sched_init(&v3d->queue[V3D_TFU].sched,
> > > &params);
> > > +}
> > > +
> > > +static int
> > > +v3d_csd_sched_init(struct v3d_dev *v3d)
> > > +{
> > > + struct drm_sched_init_params params;
> > > +
> > > +	v3d_common_sched_init(&params, v3d->drm.dev);
> > > + params.ops = &v3d_csd_sched_ops;
> > > + params.name = "v3d_csd";
> > > +
> > > + return drm_sched_init(&v3d->queue[V3D_CSD].sched,
> > > &params);
> > > +}
> > > +
> > > +static int
> > > +v3d_cache_sched_init(struct v3d_dev *v3d)
> > > +{
> > > + struct drm_sched_init_params params;
> > > +
> > > +	v3d_common_sched_init(&params, v3d->drm.dev);
> > > + params.ops = &v3d_cache_clean_sched_ops;
> > > + params.name = "v3d_cache_clean";
> > > +
> > > + return drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
> > > &params);
> > > +}
> > > +
> > > +static int
> > > +v3d_cpu_sched_init(struct v3d_dev *v3d)
> > > +{
> > > + struct drm_sched_init_params params;
> > > +
> > > +	v3d_common_sched_init(&params, v3d->drm.dev);
> > > + params.ops = &v3d_cpu_sched_ops;
> > > + params.name = "v3d_cpu";
> > > +
> > > + return drm_sched_init(&v3d->queue[V3D_CPU].sched,
> > > &params);
> > > +}
> > > +
> > > int
> > > v3d_sched_init(struct v3d_dev *v3d)
> > > {
> > > - int hw_jobs_limit = 1;
> > > - int job_hang_limit = 0;
> > > - int hang_limit_ms = 500;
> > > int ret;
> > >
> > > - ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
> > > - &v3d_bin_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - hw_jobs_limit, job_hang_limit,
> > > - msecs_to_jiffies(hang_limit_ms),
> > > NULL,
> > > - NULL, "v3d_bin", v3d->drm.dev);
> > > + ret = v3d_bin_sched_init(v3d);
> > > if (ret)
> > > return ret;
> > >
> > > - ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
> > > - &v3d_render_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - hw_jobs_limit, job_hang_limit,
> > > - msecs_to_jiffies(hang_limit_ms),
> > > NULL,
> > > - NULL, "v3d_render", v3d->drm.dev);
> > > + ret = v3d_render_sched_init(v3d);
> > > if (ret)
> > > goto fail;
> > >
> > > - ret = drm_sched_init(&v3d->queue[V3D_TFU].sched,
> > > - &v3d_tfu_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - hw_jobs_limit, job_hang_limit,
> > > - msecs_to_jiffies(hang_limit_ms),
> > > NULL,
> > > - NULL, "v3d_tfu", v3d->drm.dev);
> > > + ret = v3d_tfu_sched_init(v3d);
> > > if (ret)
> > > goto fail;
> > >
> > > if (v3d_has_csd(v3d)) {
> > > - ret = drm_sched_init(&v3d->queue[V3D_CSD].sched,
> > > - &v3d_csd_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - hw_jobs_limit,
> > > job_hang_limit,
> > > -
> > > msecs_to_jiffies(hang_limit_ms), NULL,
> > > - NULL, "v3d_csd", v3d-
> > > >drm.dev);
> > > + ret = v3d_csd_sched_init(v3d);
> > > if (ret)
> > > goto fail;
> > >
> > > - ret = drm_sched_init(&v3d-
> > > >queue[V3D_CACHE_CLEAN].sched,
> > > - &v3d_cache_clean_sched_ops,
> > > NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - hw_jobs_limit,
> > > job_hang_limit,
> > > -
> > > msecs_to_jiffies(hang_limit_ms), NULL,
> > > - NULL, "v3d_cache_clean", v3d-
> > > >drm.dev);
> > > + ret = v3d_cache_sched_init(v3d);
> > > if (ret)
> > > goto fail;
> > > }
> > >
> > > - ret = drm_sched_init(&v3d->queue[V3D_CPU].sched,
> > > - &v3d_cpu_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - 1, job_hang_limit,
> > > - msecs_to_jiffies(hang_limit_ms),
> > > NULL,
> > > - NULL, "v3d_cpu", v3d->drm.dev);
> > > + ret = v3d_cpu_sched_init(v3d);
> > > if (ret)
> > > goto fail;
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_execlist.c
> > > b/drivers/gpu/drm/xe/xe_execlist.c
> > > index a8c416a48812..7f29b7f04af4 100644
> > > --- a/drivers/gpu/drm/xe/xe_execlist.c
> > > +++ b/drivers/gpu/drm/xe/xe_execlist.c
> > > @@ -332,10 +332,13 @@ static const struct drm_sched_backend_ops
> > > drm_sched_ops = {
> > > static int execlist_exec_queue_init(struct xe_exec_queue *q)
> > > {
> > > struct drm_gpu_scheduler *sched;
> > > + struct drm_sched_init_params params;
> > > struct xe_execlist_exec_queue *exl;
> > > struct xe_device *xe = gt_to_xe(q->gt);
> > > int err;
> > >
> > > +	memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > +
> > > xe_assert(xe, !xe_device_uc_enabled(xe));
> > >
> > > drm_info(&xe->drm, "Enabling execlist submission (GuC
> > > submission disabled)\n");
> > > @@ -346,11 +349,18 @@ static int execlist_exec_queue_init(struct
> > > xe_exec_queue *q)
> > >
> > > exl->q = q;
> > >
> > > - err = drm_sched_init(&exl->sched, &drm_sched_ops, NULL, 1,
> > > - q->lrc[0]->ring.size /
> > > MAX_JOB_SIZE_BYTES,
> > > - XE_SCHED_HANG_LIMIT,
> > > XE_SCHED_JOB_TIMEOUT,
> > > - NULL, NULL, q->hwe->name,
> > > - gt_to_xe(q->gt)->drm.dev);
> > > + params.ops = &drm_sched_ops;
> > > +	params.submit_wq = NULL; /* Use an allocated ordered wq. */
> > > + params.num_rqs = 1;
> > > + params.credit_limit = q->lrc[0]->ring.size /
> > > MAX_JOB_SIZE_BYTES;
> > > + params.hang_limit = XE_SCHED_HANG_LIMIT;
> > > + params.timeout = XE_SCHED_JOB_TIMEOUT;
> > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > + params.score = NULL;
> > > + params.name = q->hwe->name;
> > > + params.dev = gt_to_xe(q->gt)->drm.dev;
> > > +
> > > +	err = drm_sched_init(&exl->sched, &params);
> > > if (err)
> > > goto err_free;
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > > b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > > index 50361b4638f9..2129fee83f25 100644
> > > --- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > > +++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > > @@ -63,13 +63,26 @@ int xe_sched_init(struct xe_gpu_scheduler
> > > *sched,
> > > atomic_t *score, const char *name,
> > > struct device *dev)
> > > {
> > > + struct drm_sched_init_params params;
> > > +
> > > sched->ops = xe_ops;
> > > INIT_LIST_HEAD(&sched->msgs);
> > > INIT_WORK(&sched->work_process_msg,
> > > xe_sched_process_msg_work);
> > >
> > > - return drm_sched_init(&sched->base, ops, submit_wq, 1,
> > > hw_submission,
> > > - hang_limit, timeout, timeout_wq,
> > > score, name,
> > > - dev);
> > > +	memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > +
> > > + params.ops = ops;
> > > + params.submit_wq = submit_wq;
> > > + params.num_rqs = 1;
> > > + params.credit_limit = hw_submission;
> > > + params.hang_limit = hang_limit;
> > > + params.timeout = timeout;
> > > + params.timeout_wq = timeout_wq;
> > > + params.score = score;
> > > + params.name = name;
> > > + params.dev = dev;
> > > +
> > > +	return drm_sched_init(&sched->base, &params);
> > > }
> > >
> > > void xe_sched_fini(struct xe_gpu_scheduler *sched)
> > > diff --git a/include/drm/gpu_scheduler.h
> > > b/include/drm/gpu_scheduler.h
> > > index 95e17504e46a..1a834ef43862 100644
> > > --- a/include/drm/gpu_scheduler.h
> > > +++ b/include/drm/gpu_scheduler.h
> > > @@ -553,12 +553,37 @@ struct drm_gpu_scheduler {
> > > struct device *dev;
> > > };
> > >
> > > +/**
> > > + * struct drm_sched_init_params - parameters for initializing a
> > > DRM GPU scheduler
> > > + *
> > > + * @ops: backend operations provided by the driver
> > > + * @submit_wq: workqueue to use for submission. If NULL, an
> > > ordered wq is
> > > + * allocated and used
> > > + * @num_rqs: Number of run-queues. This is at most
> > > DRM_SCHED_PRIORITY_COUNT,
> > > + * as there's usually one run-queue per priority, but
> > > could be less.
> > > + * @credit_limit: the number of credits this scheduler can hold
> > > from all jobs
> > > + * @hang_limit: number of times to allow a job to hang before
> > > dropping it
> > > + * @timeout: timeout value in jiffies for the scheduler
> > > + * @timeout_wq: workqueue to use for timeout work. If NULL, the
> > > system_wq is
> > > + * used
> > > + * @score: optional score atomic shared with other schedulers
> > > + * @name: name used for debugging
> > > + * @dev: associated device. Used for debugging
> > > + */
> > > +struct drm_sched_init_params {
> > > + const struct drm_sched_backend_ops *ops;
> > > + struct workqueue_struct *submit_wq;
> > > + struct workqueue_struct *timeout_wq;
> > > + u32 num_rqs, credit_limit;
> > > + unsigned int hang_limit;
> > > + long timeout;
> > > + atomic_t *score;
> > > + const char *name;
> > > + struct device *dev;
> > > +};
> > > +
> > > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > - const struct drm_sched_backend_ops *ops,
> > > - struct workqueue_struct *submit_wq,
> > > - u32 num_rqs, u32 credit_limit, unsigned int
> > > hang_limit,
> > > - long timeout, struct workqueue_struct
> > > *timeout_wq,
> > > - atomic_t *score, const char *name, struct
> > > device *dev);
> > > + const struct drm_sched_init_params *params);
> > >
> > > void drm_sched_fini(struct drm_gpu_scheduler *sched);
> > > int drm_sched_job_init(struct drm_sched_job *job,
> >
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:48 ` Philipp Stanner
2025-01-22 15:02 ` Matthew Brost
@ 2025-01-22 15:06 ` Christian König
2025-01-22 15:23 ` Philipp Stanner
2025-01-22 15:29 ` Matthew Brost
1 sibling, 2 replies; 35+ messages in thread
From: Christian König @ 2025-01-22 15:06 UTC (permalink / raw)
To: Philipp Stanner, Philipp Stanner, Alex Deucher, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Danilo Krummrich, Boris Brezillon,
Rob Herring, Steven Price, Liviu Dudau, Luben Tuikov,
Matthew Brost, Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li
Cc: amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
On 22.01.25 at 15:48, Philipp Stanner wrote:
> On Wed, 2025-01-22 at 15:34 +0100, Christian König wrote:
>> On 22.01.25 at 15:08, Philipp Stanner wrote:
>>> drm_sched_init() has a great many parameters and upcoming new
>>> functionality for the scheduler might add even more. Generally, the
>>> great number of parameters reduces readability and has already
>>> caused
>>> one misnaming in:
>>>
>>> commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in
>>> nouveau_sched_init()").
>>>
>>> Introduce a new struct for the scheduler init parameters and port
>>> all
>>> users.
>>>
>>> Signed-off-by: Philipp Stanner <phasta@kernel.org>
>>> ---
>>> Howdy,
>>>
>>> I have a patch-series in the pipe that will add a `flags` argument
>>> to
>>> drm_sched_init(). I thought it would be wise to first rework the
>>> API as
>>> detailed in this patch. It's really a lot of parameters by now, and
>>> I
>>> would expect that it might get more and more over the years for
>>> special
>>> use cases etc.
>>>
>>> Regards,
>>> P.
>>> ---
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +++-
>>> drivers/gpu/drm/etnaviv/etnaviv_sched.c | 20 ++-
>>> drivers/gpu/drm/imagination/pvr_queue.c | 21 +++-
>>> drivers/gpu/drm/lima/lima_sched.c | 21 +++-
>>> drivers/gpu/drm/msm/msm_ringbuffer.c | 22 ++--
>>> drivers/gpu/drm/nouveau/nouveau_sched.c | 20 ++-
>>> drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++--
>>> drivers/gpu/drm/panthor/panthor_mmu.c | 18 ++-
>>> drivers/gpu/drm/panthor/panthor_sched.c | 23 ++--
>>> drivers/gpu/drm/scheduler/sched_main.c | 53 +++-----
>>> drivers/gpu/drm/v3d/v3d_sched.c | 135 +++++++++++++++-
>>> -----
>>> drivers/gpu/drm/xe/xe_execlist.c | 20 ++-
>>> drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++-
>>> include/drm/gpu_scheduler.h | 35 +++++-
>>> 14 files changed, 311 insertions(+), 139 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> index cd4fac120834..c1f03eb5f5ea 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> @@ -2821,6 +2821,9 @@ static int
>>> amdgpu_device_init_schedulers(struct amdgpu_device *adev)
>>> {
>>> long timeout;
>>> int r, i;
>>> + struct drm_sched_init_params params;
>> Please keep the reverse xmas tree ordering for variable declaration.
>> E.g. long lines first and variables like "i" and "r" last.
> Okay dokay
>
>> Apart from that looks like a good idea to me.
>>
>>
>>> +
>>> +	memset(&params, 0, sizeof(struct drm_sched_init_params));
>>>
>>> for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
>>> struct amdgpu_ring *ring = adev->rings[i];
>>> @@ -2844,12 +2847,18 @@ static int
>>> amdgpu_device_init_schedulers(struct amdgpu_device *adev)
>>> break;
>>> }
>>>
>>> - r = drm_sched_init(&ring->sched,
>>> &amdgpu_sched_ops, NULL,
>>> - DRM_SCHED_PRIORITY_COUNT,
>>> - ring->num_hw_submission, 0,
>>> -				   timeout, adev->reset_domain->wq,
>>> - ring->sched_score, ring->name,
>>> - adev->dev);
>>> + params.ops = &amdgpu_sched_ops;
>>> +		params.submit_wq = NULL; /* Scheduler allocates an ordered wq. */
>>> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>> + params.credit_limit = ring->num_hw_submission;
>>> + params.hang_limit = 0;
>> Could we please remove the hang limit as first step to get this awful
>> feature deprecated?
> Remove it? From the struct you mean?
>
> We can mark it as deprecated in the docstring of the new struct. That's
> what you mean, don't you?
No, the function using this parameter is already deprecated. What I meant
is to start removing this feature completely.
The hang_limit basically says how many times the scheduler should retry a
job before giving up.
And we already agreed that trying the same thing over and over again and
expecting different results is the definition of insanity :)
So my suggestion is to drop the parameter and drop the job as soon as it
causes a timeout.
Regards,
Christian.
>
> P.
>
>> Thanks,
>> Christian.
>>
>>> + params.timeout = timeout;
>>> + params.timeout_wq = adev->reset_domain->wq;
>>> + params.score = ring->sched_score;
>>> + params.name = ring->name;
>>> + params.dev = adev->dev;
>>> +
>>> +		r = drm_sched_init(&ring->sched, &params);
>>> if (r) {
>>> DRM_ERROR("Failed to create scheduler on
>>> ring %s.\n",
>>> ring->name);
>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>> b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>> index 5b67eda122db..7d8517f1963e 100644
>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>> @@ -145,12 +145,22 @@ int etnaviv_sched_push_job(struct
>>> etnaviv_gem_submit *submit)
>>> int etnaviv_sched_init(struct etnaviv_gpu *gpu)
>>> {
>>> int ret;
>>> + struct drm_sched_init_params params;
>>>
>>> - ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
>>> NULL,
>>> - DRM_SCHED_PRIORITY_COUNT,
>>> - etnaviv_hw_jobs_limit,
>>> etnaviv_job_hang_limit,
>>> - msecs_to_jiffies(500), NULL, NULL,
>>> - dev_name(gpu->dev), gpu->dev);
>>> +	memset(&params, 0, sizeof(struct drm_sched_init_params));
>>> +
>>> + params.ops = &etnaviv_sched_ops;
>>> +	params.submit_wq = NULL; /* Scheduler allocates an ordered wq. */
>>> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>> + params.credit_limit = etnaviv_hw_jobs_limit;
>>> + params.hang_limit = etnaviv_job_hang_limit;
>>> + params.timeout = msecs_to_jiffies(500);
>>> + params.timeout_wq = NULL; /* Use the system_wq. */
>>> + params.score = NULL;
>>> + params.name = dev_name(gpu->dev);
>>> + params.dev = gpu->dev;
>>> +
>>> +	ret = drm_sched_init(&gpu->sched, &params);
>>> if (ret)
>>> return ret;
>>>
>>> diff --git a/drivers/gpu/drm/imagination/pvr_queue.c
>>> b/drivers/gpu/drm/imagination/pvr_queue.c
>>> index c4f08432882b..03a2ce1a88e7 100644
>>> --- a/drivers/gpu/drm/imagination/pvr_queue.c
>>> +++ b/drivers/gpu/drm/imagination/pvr_queue.c
>>> @@ -1211,10 +1211,13 @@ struct pvr_queue *pvr_queue_create(struct
>>> pvr_context *ctx,
>>> };
>>> struct pvr_device *pvr_dev = ctx->pvr_dev;
>>> struct drm_gpu_scheduler *sched;
>>> + struct drm_sched_init_params sched_params;
>>> struct pvr_queue *queue;
>>> int ctx_state_size, err;
>>> void *cpu_map;
>>>
>>> +	memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
>>> +
>>> if (WARN_ON(type >= sizeof(props)))
>>> return ERR_PTR(-EINVAL);
>>>
>>> @@ -1282,12 +1285,18 @@ struct pvr_queue *pvr_queue_create(struct
>>> pvr_context *ctx,
>>>
>>> queue->timeline_ufo.value = cpu_map;
>>>
>>> - err = drm_sched_init(&queue->scheduler,
>>> - &pvr_queue_sched_ops,
>>> - pvr_dev->sched_wq, 1, 64 * 1024, 1,
>>> - msecs_to_jiffies(500),
>>> - pvr_dev->sched_wq, NULL, "pvr-queue",
>>> - pvr_dev->base.dev);
>>> + sched_params.ops = &pvr_queue_sched_ops;
>>> + sched_params.submit_wq = pvr_dev->sched_wq;
>>> + sched_params.num_rqs = 1;
>>> + sched_params.credit_limit = 64 * 1024;
>>> + sched_params.hang_limit = 1;
>>> + sched_params.timeout = msecs_to_jiffies(500);
>>> + sched_params.timeout_wq = pvr_dev->sched_wq;
>>> + sched_params.score = NULL;
>>> + sched_params.name = "pvr-queue";
>>> + sched_params.dev = pvr_dev->base.dev;
>>> +
>>> + err = drm_sched_init(&queue->scheduler, &sched_params);
>>> if (err)
>>> goto err_release_ufo;
>>>
>>> diff --git a/drivers/gpu/drm/lima/lima_sched.c
>>> b/drivers/gpu/drm/lima/lima_sched.c
>>> index b40c90e97d7e..a64c50fb6d1e 100644
>>> --- a/drivers/gpu/drm/lima/lima_sched.c
>>> +++ b/drivers/gpu/drm/lima/lima_sched.c
>>> @@ -513,20 +513,29 @@ static void lima_sched_recover_work(struct
>>> work_struct *work)
>>>
>>> int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char
>>> *name)
>>> {
>>> + struct drm_sched_init_params params;
>>> unsigned int timeout = lima_sched_timeout_ms > 0 ?
>>> lima_sched_timeout_ms : 10000;
>>>
>>> +	memset(&params, 0, sizeof(struct drm_sched_init_params));
>>> +
>>> pipe->fence_context = dma_fence_context_alloc(1);
>>> spin_lock_init(&pipe->fence_lock);
>>>
>>> INIT_WORK(&pipe->recover_work, lima_sched_recover_work);
>>>
>>> - return drm_sched_init(&pipe->base, &lima_sched_ops, NULL,
>>> - DRM_SCHED_PRIORITY_COUNT,
>>> - 1,
>>> - lima_job_hang_limit,
>>> - msecs_to_jiffies(timeout), NULL,
>>> - NULL, name, pipe->ldev->dev);
>>> + params.ops = &lima_sched_ops;
>>> +	params.submit_wq = NULL; /* Scheduler allocates an ordered wq. */
>>> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>> + params.credit_limit = 1;
>>> + params.hang_limit = lima_job_hang_limit;
>>> + params.timeout = msecs_to_jiffies(timeout);
>>> + params.timeout_wq = NULL; /* Use the system_wq. */
>>> + params.score = NULL;
>>> + params.name = name;
>>> + params.dev = pipe->ldev->dev;
>>> +
>>> +	return drm_sched_init(&pipe->base, &params);
>>> }
>>>
>>> void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
>>> diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c
>>> b/drivers/gpu/drm/msm/msm_ringbuffer.c
>>> index c803556a8f64..49a2c7422dc6 100644
>>> --- a/drivers/gpu/drm/msm/msm_ringbuffer.c
>>> +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
>>> @@ -59,11 +59,13 @@ static const struct drm_sched_backend_ops
>>> msm_sched_ops = {
>>> struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu,
>>> int id,
>>> void *memptrs, uint64_t memptrs_iova)
>>> {
>>> + struct drm_sched_init_params params;
>>> struct msm_ringbuffer *ring;
>>> - long sched_timeout;
>>> char name[32];
>>> int ret;
>>>
>>> +	memset(&params, 0, sizeof(struct drm_sched_init_params));
>>> +
>>> /* We assume everywhere that MSM_GPU_RINGBUFFER_SZ is a
>>> power of 2 */
>>> BUILD_BUG_ON(!is_power_of_2(MSM_GPU_RINGBUFFER_SZ));
>>>
>>> @@ -95,13 +97,19 @@ struct msm_ringbuffer
>>> *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
>>> ring->memptrs = memptrs;
>>> ring->memptrs_iova = memptrs_iova;
>>>
>>> - /* currently managing hangcheck ourselves: */
>>> - sched_timeout = MAX_SCHEDULE_TIMEOUT;
>>> + params.ops = &msm_sched_ops;
>>> +	params.submit_wq = NULL; /* Scheduler allocates an ordered wq. */
>>> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>> + params.credit_limit = num_hw_submissions;
>>> + params.hang_limit = 0;
>>> + /* currently managing hangcheck ourselves: */
>>> + params.timeout = MAX_SCHEDULE_TIMEOUT;
>>> + params.timeout_wq = NULL; /* Use the system_wq. */
>>> + params.score = NULL;
>>> + params.name = to_msm_bo(ring->bo)->name;
>>> + params.dev = gpu->dev->dev;
>>>
>>> - ret = drm_sched_init(&ring->sched, &msm_sched_ops, NULL,
>>> - DRM_SCHED_PRIORITY_COUNT,
>>> - num_hw_submissions, 0, sched_timeout,
>>> -			     NULL, NULL, to_msm_bo(ring->bo)->name, gpu->dev->dev);
>>> +	ret = drm_sched_init(&ring->sched, &params);
>>> if (ret) {
>>> goto fail;
>>> }
>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c
>>> b/drivers/gpu/drm/nouveau/nouveau_sched.c
>>> index 4412f2711fb5..f20c2e612750 100644
>>> --- a/drivers/gpu/drm/nouveau/nouveau_sched.c
>>> +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
>>> @@ -404,9 +404,11 @@ nouveau_sched_init(struct nouveau_sched
>>> *sched, struct nouveau_drm *drm,
>>> {
>>> struct drm_gpu_scheduler *drm_sched = &sched->base;
>>> struct drm_sched_entity *entity = &sched->entity;
>>> - const long timeout =
>>> msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
>>> + struct drm_sched_init_params params;
>>> int ret;
>>>
>>> +	memset(&params, 0, sizeof(struct drm_sched_init_params));
>>> +
>>> if (!wq) {
>>> wq = alloc_workqueue("nouveau_sched_wq_%d", 0,
>>> WQ_MAX_ACTIVE,
>>> current->pid);
>>> @@ -416,10 +418,18 @@ nouveau_sched_init(struct nouveau_sched
>>> *sched, struct nouveau_drm *drm,
>>> sched->wq = wq;
>>> }
>>>
>>> - ret = drm_sched_init(drm_sched, &nouveau_sched_ops, wq,
>>> - NOUVEAU_SCHED_PRIORITY_COUNT,
>>> - credit_limit, 0, timeout,
>>> -			     NULL, NULL, "nouveau_sched", drm->dev->dev);
>>> + params.ops = &nouveau_sched_ops;
>>> + params.submit_wq = wq;
>>> +	params.num_rqs = NOUVEAU_SCHED_PRIORITY_COUNT;
>>> + params.credit_limit = credit_limit;
>>> + params.hang_limit = 0;
>>> + params.timeout =
>>> msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
>>> + params.timeout_wq = NULL; /* Use the system_wq. */
>>> + params.score = NULL;
>>> + params.name = "nouveau_sched";
>>> + params.dev = drm->dev->dev;
>>> +
>>> +	ret = drm_sched_init(drm_sched, &params);
>>> if (ret)
>>> goto fail_wq;
>>>
>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
>>> b/drivers/gpu/drm/panfrost/panfrost_job.c
>>> index 9b8e82fb8bc4..6b509ff446b5 100644
>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>> @@ -836,10 +836,13 @@ static irqreturn_t
>>> panfrost_job_irq_handler(int irq, void *data)
>>>
>>> int panfrost_job_init(struct panfrost_device *pfdev)
>>> {
>>> + struct drm_sched_init_params params;
>>> struct panfrost_job_slot *js;
>>> unsigned int nentries = 2;
>>> int ret, j;
>>>
>>> +	memset(&params, 0, sizeof(struct drm_sched_init_params));
>>> +
>>> /* All GPUs have two entries per queue, but without
>>> jobchain
>>> * disambiguation stopping the right job in the close path
>>> is tricky,
>>> * so let's just advertise one entry in that case.
>>> @@ -872,16 +875,21 @@ int panfrost_job_init(struct panfrost_device
>>> *pfdev)
>>> if (!pfdev->reset.wq)
>>> return -ENOMEM;
>>>
>>> + params.ops = &panfrost_sched_ops;
>>> +	params.submit_wq = NULL; /* Scheduler allocates an ordered wq. */
>>> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>> + params.credit_limit = nentries;
>>> + params.hang_limit = 0;
>>> + params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
>>> + params.timeout_wq = pfdev->reset.wq;
>>> + params.score = NULL;
>>> + params.name = "pan_js";
>>> + params.dev = pfdev->dev;
>>> +
>>> for (j = 0; j < NUM_JOB_SLOTS; j++) {
>>> js->queue[j].fence_context =
>>> dma_fence_context_alloc(1);
>>>
>>> - ret = drm_sched_init(&js->queue[j].sched,
>>> - &panfrost_sched_ops, NULL,
>>> - DRM_SCHED_PRIORITY_COUNT,
>>> - nentries, 0,
>>> -
>>> msecs_to_jiffies(JOB_TIMEOUT_MS),
>>> - pfdev->reset.wq,
>>> - NULL, "pan_js", pfdev->dev);
>>> +		ret = drm_sched_init(&js->queue[j].sched, &params);
>>> if (ret) {
>>> dev_err(pfdev->dev, "Failed to create
>>> scheduler: %d.", ret);
>>> goto err_sched;
>>> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c
>>> b/drivers/gpu/drm/panthor/panthor_mmu.c
>>> index a49132f3778b..4362442cbfd8 100644
>>> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
>>> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
>>> @@ -2268,6 +2268,7 @@ panthor_vm_create(struct panthor_device
>>> *ptdev, bool for_mcu,
>>> u64 full_va_range = 1ull << va_bits;
>>> struct drm_gem_object *dummy_gem;
>>> struct drm_gpu_scheduler *sched;
>>> + struct drm_sched_init_params sched_params;
>>> struct io_pgtable_cfg pgtbl_cfg;
>>> u64 mair, min_va, va_range;
>>> struct panthor_vm *vm;
>>> @@ -2284,6 +2285,8 @@ panthor_vm_create(struct panthor_device
>>> *ptdev, bool for_mcu,
>>> goto err_free_vm;
>>> }
>>>
>>> +	memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
>>> +
>>> mutex_init(&vm->heaps.lock);
>>> vm->for_mcu = for_mcu;
>>> vm->ptdev = ptdev;
>>> @@ -2325,11 +2328,18 @@ panthor_vm_create(struct panthor_device
>>> *ptdev, bool for_mcu,
>>> goto err_mm_takedown;
>>> }
>>>
>>> + sched_params.ops = &panthor_vm_bind_ops;
>>> + sched_params.submit_wq = ptdev->mmu->vm.wq;
>>> + sched_params.num_rqs = 1;
>>> + sched_params.credit_limit = 1;
>>> + sched_params.hang_limit = 0;
>>> /* Bind operations are synchronous for now, no timeout
>>> needed. */
>>> - ret = drm_sched_init(&vm->sched, &panthor_vm_bind_ops,
>>> ptdev->mmu->vm.wq,
>>> - 1, 1, 0,
>>> - MAX_SCHEDULE_TIMEOUT, NULL, NULL,
>>> - "panthor-vm-bind", ptdev->base.dev);
>>> + sched_params.timeout = MAX_SCHEDULE_TIMEOUT;
>>> + sched_params.timeout_wq = NULL; /* Use the system_wq. */
>>> + sched_params.score = NULL;
>>> + sched_params.name = "panthor-vm-bind";
>>> + sched_params.dev = ptdev->base.dev;
>>> + ret = drm_sched_init(&vm->sched, &sched_params);
>>> if (ret)
>>> goto err_free_io_pgtable;
>>>
>>> diff --git a/drivers/gpu/drm/panthor/panthor_sched.c
>>> b/drivers/gpu/drm/panthor/panthor_sched.c
>>> index ef4bec7ff9c7..a324346d302f 100644
>>> --- a/drivers/gpu/drm/panthor/panthor_sched.c
>>> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
>>> @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group
>>> *group,
>>> const struct drm_panthor_queue_create *args)
>>> {
>>> struct drm_gpu_scheduler *drm_sched;
>>> + struct drm_sched_init_params sched_params;
>>> struct panthor_queue *queue;
>>> int ret;
>>>
>>> @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group
>>> *group,
>>> if (!queue)
>>> return ERR_PTR(-ENOMEM);
>>>
>>> +	memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
>>> +
>>> queue->fence_ctx.id = dma_fence_context_alloc(1);
>>> spin_lock_init(&queue->fence_ctx.lock);
>>> INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
>>> @@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group
>>> *group,
>>> if (ret)
>>> goto err_free_queue;
>>>
>>> + sched_params.ops = &panthor_queue_sched_ops;
>>> + sched_params.submit_wq = group->ptdev->scheduler->wq;
>>> + sched_params.num_rqs = 1;
>>> /*
>>> - * Credit limit argument tells us the total number of
>>> instructions
>>> + * The credit limit argument tells us the total number of
>>> instructions
>>> * across all CS slots in the ringbuffer, with some jobs
>>> requiring
>>> * twice as many as others, depending on their profiling
>>> status.
>>> */
>>> - ret = drm_sched_init(&queue->scheduler,
>>> &panthor_queue_sched_ops,
>>> - group->ptdev->scheduler->wq, 1,
>>> - args->ringbuf_size / sizeof(u64),
>>> - 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
>>> - group->ptdev->reset.wq,
>>> -			     NULL, "panthor-queue", group->ptdev->base.dev);
>>> + sched_params.credit_limit = args->ringbuf_size /
>>> sizeof(u64);
>>> + sched_params.hang_limit = 0;
>>> + sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
>>> + sched_params.timeout_wq = group->ptdev->reset.wq;
>>> + sched_params.score = NULL;
>>> + sched_params.name = "panthor-queue";
>>> + sched_params.dev = group->ptdev->base.dev;
>>> +
>>> + ret = drm_sched_init(&queue->scheduler, &sched_params);
>>> if (ret)
>>> goto err_free_queue;
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c
>>> b/drivers/gpu/drm/scheduler/sched_main.c
>>> index 57da84908752..27db748a5269 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>> @@ -1240,40 +1240,25 @@ static void drm_sched_run_job_work(struct
>>> work_struct *w)
>>> * drm_sched_init - Init a gpu scheduler instance
>>> *
>>> * @sched: scheduler instance
>>> - * @ops: backend operations for this scheduler
>>> - * @submit_wq: workqueue to use for submission. If NULL, an
>>> ordered wq is
>>> - * allocated and used
>>> - * @num_rqs: number of runqueues, one for each priority, up to
>>> DRM_SCHED_PRIORITY_COUNT
>>> - * @credit_limit: the number of credits this scheduler can hold
>>> from all jobs
>>> - * @hang_limit: number of times to allow a job to hang before
>>> dropping it
>>> - * @timeout: timeout value in jiffies for the scheduler
>>> - * @timeout_wq: workqueue to use for timeout work. If NULL, the
>>> system_wq is
>>> - * used
>>> - * @score: optional score atomic shared with other schedulers
>>> - * @name: name used for debugging
>>> - * @dev: target &struct device
>>> + * @params: scheduler initialization parameters
>>> *
>>> * Return 0 on success, otherwise error code.
>>> */
>>> int drm_sched_init(struct drm_gpu_scheduler *sched,
>>> - const struct drm_sched_backend_ops *ops,
>>> - struct workqueue_struct *submit_wq,
>>> - u32 num_rqs, u32 credit_limit, unsigned int
>>> hang_limit,
>>> - long timeout, struct workqueue_struct
>>> *timeout_wq,
>>> - atomic_t *score, const char *name, struct
>>> device *dev)
>>> + const struct drm_sched_init_params *params)
>>> {
>>> int i;
>>>
>>> - sched->ops = ops;
>>> - sched->credit_limit = credit_limit;
>>> - sched->name = name;
>>> - sched->timeout = timeout;
>>> - sched->timeout_wq = timeout_wq ? : system_wq;
>>> - sched->hang_limit = hang_limit;
>>> - sched->score = score ? score : &sched->_score;
>>> - sched->dev = dev;
>>> + sched->ops = params->ops;
>>> + sched->credit_limit = params->credit_limit;
>>> + sched->name = params->name;
>>> + sched->timeout = params->timeout;
>>> + sched->timeout_wq = params->timeout_wq ? : system_wq;
>>> + sched->hang_limit = params->hang_limit;
>>> +	sched->score = params->score ? params->score : &sched->_score;
>>> + sched->dev = params->dev;
>>>
>>> - if (num_rqs > DRM_SCHED_PRIORITY_COUNT) {
>>> + if (params->num_rqs > DRM_SCHED_PRIORITY_COUNT) {
>>> /* This is a gross violation--tell drivers what
>>> the problem is.
>>> */
>>> drm_err(sched, "%s: num_rqs cannot be greater than
>>> DRM_SCHED_PRIORITY_COUNT\n",
>>> @@ -1288,16 +1273,16 @@ int drm_sched_init(struct drm_gpu_scheduler
>>> *sched,
>>> return 0;
>>> }
>>>
>>> - if (submit_wq) {
>>> - sched->submit_wq = submit_wq;
>>> + if (params->submit_wq) {
>>> + sched->submit_wq = params->submit_wq;
>>> sched->own_submit_wq = false;
>>> } else {
>>> #ifdef CONFIG_LOCKDEP
>>> - sched->submit_wq =
>>> alloc_ordered_workqueue_lockdep_map(name,
>>> -
>>> WQ_MEM_RECLAIM,
>>> -
>>> &drm_sched_lockdep_map);
>>> + sched->submit_wq =
>>> alloc_ordered_workqueue_lockdep_map(
>>> + params->name,
>>> WQ_MEM_RECLAIM,
>>> + &drm_sched_lockdep_map);
>>> #else
>>> -		sched->submit_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
>>> +		sched->submit_wq = alloc_ordered_workqueue(params->name, WQ_MEM_RECLAIM);
>>> #endif
>>> if (!sched->submit_wq)
>>> return -ENOMEM;
>>> @@ -1305,11 +1290,11 @@ int drm_sched_init(struct drm_gpu_scheduler
>>> *sched,
>>> sched->own_submit_wq = true;
>>> }
>>>
>>> -	sched->sched_rq = kmalloc_array(num_rqs, sizeof(*sched->sched_rq),
>>> +	sched->sched_rq = kmalloc_array(params->num_rqs, sizeof(*sched->sched_rq),
>>> GFP_KERNEL | __GFP_ZERO);
>>> if (!sched->sched_rq)
>>> goto Out_check_own;
>>> - sched->num_rqs = num_rqs;
>>> + sched->num_rqs = params->num_rqs;
>>> for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs;
>>> i++) {
>>> 		sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL);
>>> if (!sched->sched_rq[i])
>>> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c
>>> b/drivers/gpu/drm/v3d/v3d_sched.c
>>> index 99ac4995b5a1..716e6d074d87 100644
>>> --- a/drivers/gpu/drm/v3d/v3d_sched.c
>>> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
>>> @@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops
>>> v3d_cpu_sched_ops = {
>>> .free_job = v3d_cpu_job_free
>>> };
>>>
>>> +/*
>>> + * v3d's scheduler instances are all identical, except for ops and
>>> name.
>>> + */
>>> +static void
>>> +v3d_common_sched_init(struct drm_sched_init_params *params, struct
>>> device *dev)
>>> +{
>>> + memset(params, 0, sizeof(struct drm_sched_init_params));
>>> +
>>> +	params->submit_wq = NULL; /* Scheduler allocates an ordered wq. */
>>> + params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>> + params->credit_limit = 1;
>>> + params->hang_limit = 0;
>>> + params->timeout = msecs_to_jiffies(500);
>>> + params->timeout_wq = NULL; /* Use the system_wq. */
>>> + params->score = NULL;
>>> + params->dev = dev;
>>> +}
>>> +
>>> +static int
>>> +v3d_bin_sched_init(struct v3d_dev *v3d)
>>> +{
>>> + struct drm_sched_init_params params;
>>> +
>>> +	v3d_common_sched_init(&params, v3d->drm.dev);
>>> + params.ops = &v3d_bin_sched_ops;
>>> + params.name = "v3d_bin";
>>> +
>>> +	return drm_sched_init(&v3d->queue[V3D_BIN].sched, &params);
>>> +}
>>> +
>>> +static int
>>> +v3d_render_sched_init(struct v3d_dev *v3d)
>>> +{
>>> + struct drm_sched_init_params params;
>>> +
>>> +	v3d_common_sched_init(&params, v3d->drm.dev);
>>> + params.ops = &v3d_render_sched_ops;
>>> + params.name = "v3d_render";
>>> +
>>> +	return drm_sched_init(&v3d->queue[V3D_RENDER].sched, &params);
>>> +}
>>> +
>>> +static int
>>> +v3d_tfu_sched_init(struct v3d_dev *v3d)
>>> +{
>>> + struct drm_sched_init_params params;
>>> +
>>> +	v3d_common_sched_init(&params, v3d->drm.dev);
>>> + params.ops = &v3d_tfu_sched_ops;
>>> + params.name = "v3d_tfu";
>>> +
>>> +	return drm_sched_init(&v3d->queue[V3D_TFU].sched, &params);
>>> +}
>>> +
>>> +static int
>>> +v3d_csd_sched_init(struct v3d_dev *v3d)
>>> +{
>>> + struct drm_sched_init_params params;
>>> +
>>> +	v3d_common_sched_init(&params, v3d->drm.dev);
>>> + params.ops = &v3d_csd_sched_ops;
>>> + params.name = "v3d_csd";
>>> +
>>> +	return drm_sched_init(&v3d->queue[V3D_CSD].sched, &params);
>>> +}
>>> +
>>> +static int
>>> +v3d_cache_sched_init(struct v3d_dev *v3d)
>>> +{
>>> + struct drm_sched_init_params params;
>>> +
>>> +	v3d_common_sched_init(&params, v3d->drm.dev);
>>> + params.ops = &v3d_cache_clean_sched_ops;
>>> + params.name = "v3d_cache_clean";
>>> +
>>> +	return drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched, &params);
>>> +}
>>> +
>>> +static int
>>> +v3d_cpu_sched_init(struct v3d_dev *v3d)
>>> +{
>>> + struct drm_sched_init_params params;
>>> +
>>> +	v3d_common_sched_init(&params, v3d->drm.dev);
>>> + params.ops = &v3d_cpu_sched_ops;
>>> + params.name = "v3d_cpu";
>>> +
>>> +	return drm_sched_init(&v3d->queue[V3D_CPU].sched, &params);
>>> +}
>>> +
>>> int
>>> v3d_sched_init(struct v3d_dev *v3d)
>>> {
>>> - int hw_jobs_limit = 1;
>>> - int job_hang_limit = 0;
>>> - int hang_limit_ms = 500;
>>> int ret;
>>>
>>> - ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
>>> - &v3d_bin_sched_ops, NULL,
>>> - DRM_SCHED_PRIORITY_COUNT,
>>> - hw_jobs_limit, job_hang_limit,
>>> - msecs_to_jiffies(hang_limit_ms),
>>> NULL,
>>> - NULL, "v3d_bin", v3d->drm.dev);
>>> + ret = v3d_bin_sched_init(v3d);
>>> if (ret)
>>> return ret;
>>>
>>> - ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
>>> - &v3d_render_sched_ops, NULL,
>>> - DRM_SCHED_PRIORITY_COUNT,
>>> - hw_jobs_limit, job_hang_limit,
>>> - msecs_to_jiffies(hang_limit_ms),
>>> NULL,
>>> - NULL, "v3d_render", v3d->drm.dev);
>>> + ret = v3d_render_sched_init(v3d);
>>> if (ret)
>>> goto fail;
>>>
>>> - ret = drm_sched_init(&v3d->queue[V3D_TFU].sched,
>>> - &v3d_tfu_sched_ops, NULL,
>>> - DRM_SCHED_PRIORITY_COUNT,
>>> - hw_jobs_limit, job_hang_limit,
>>> - msecs_to_jiffies(hang_limit_ms),
>>> NULL,
>>> - NULL, "v3d_tfu", v3d->drm.dev);
>>> + ret = v3d_tfu_sched_init(v3d);
>>> if (ret)
>>> goto fail;
>>>
>>> if (v3d_has_csd(v3d)) {
>>> - ret = drm_sched_init(&v3d->queue[V3D_CSD].sched,
>>> - &v3d_csd_sched_ops, NULL,
>>> - DRM_SCHED_PRIORITY_COUNT,
>>> - hw_jobs_limit,
>>> job_hang_limit,
>>> -
>>> msecs_to_jiffies(hang_limit_ms), NULL,
>>> -				     NULL, "v3d_csd", v3d->drm.dev);
>>> + ret = v3d_csd_sched_init(v3d);
>>> if (ret)
>>> goto fail;
>>>
>>> -		ret = drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
>>> - &v3d_cache_clean_sched_ops,
>>> NULL,
>>> - DRM_SCHED_PRIORITY_COUNT,
>>> - hw_jobs_limit,
>>> job_hang_limit,
>>> -
>>> msecs_to_jiffies(hang_limit_ms), NULL,
>>> -				     NULL, "v3d_cache_clean", v3d->drm.dev);
>>> + ret = v3d_cache_sched_init(v3d);
>>> if (ret)
>>> goto fail;
>>> }
>>>
>>> - ret = drm_sched_init(&v3d->queue[V3D_CPU].sched,
>>> - &v3d_cpu_sched_ops, NULL,
>>> - DRM_SCHED_PRIORITY_COUNT,
>>> - 1, job_hang_limit,
>>> - msecs_to_jiffies(hang_limit_ms),
>>> NULL,
>>> - NULL, "v3d_cpu", v3d->drm.dev);
>>> + ret = v3d_cpu_sched_init(v3d);
>>> if (ret)
>>> goto fail;
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_execlist.c
>>> b/drivers/gpu/drm/xe/xe_execlist.c
>>> index a8c416a48812..7f29b7f04af4 100644
>>> --- a/drivers/gpu/drm/xe/xe_execlist.c
>>> +++ b/drivers/gpu/drm/xe/xe_execlist.c
>>> @@ -332,10 +332,13 @@ static const struct drm_sched_backend_ops
>>> drm_sched_ops = {
>>> static int execlist_exec_queue_init(struct xe_exec_queue *q)
>>> {
>>> struct drm_gpu_scheduler *sched;
>>> + struct drm_sched_init_params params;
>>> struct xe_execlist_exec_queue *exl;
>>> struct xe_device *xe = gt_to_xe(q->gt);
>>> int err;
>>>
>>> +	memset(&params, 0, sizeof(struct drm_sched_init_params));
>>> +
>>> xe_assert(xe, !xe_device_uc_enabled(xe));
>>>
>>> drm_info(&xe->drm, "Enabling execlist submission (GuC
>>> submission disabled)\n");
>>> @@ -346,11 +349,18 @@ static int execlist_exec_queue_init(struct xe_exec_queue *q)
>>>
>>> exl->q = q;
>>>
>>> - err = drm_sched_init(&exl->sched, &drm_sched_ops, NULL, 1,
>>> - q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES,
>>> - XE_SCHED_HANG_LIMIT, XE_SCHED_JOB_TIMEOUT,
>>> - NULL, NULL, q->hwe->name,
>>> - gt_to_xe(q->gt)->drm.dev);
>>> + params.ops = &drm_sched_ops;
>>> + params.submit_wq = NULL; /* NULL: an ordered wq is allocated and used. */
>>> + params.num_rqs = 1;
>>> + params.credit_limit = q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES;
>>> + params.hang_limit = XE_SCHED_HANG_LIMIT;
>>> + params.timeout = XE_SCHED_JOB_TIMEOUT;
>>> + params.timeout_wq = NULL; /* Use the system_wq. */
>>> + params.score = NULL;
>>> + params.name = q->hwe->name;
>>> + params.dev = gt_to_xe(q->gt)->drm.dev;
>>> +
>>> + err = drm_sched_init(&exl->sched, &params);
>>> if (err)
>>> goto err_free;
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
>>> b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
>>> index 50361b4638f9..2129fee83f25 100644
>>> --- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
>>> +++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
>>> @@ -63,13 +63,26 @@ int xe_sched_init(struct xe_gpu_scheduler *sched,
>>> atomic_t *score, const char *name,
>>> struct device *dev)
>>> {
>>> + struct drm_sched_init_params params;
>>> +
>>> sched->ops = xe_ops;
>>> INIT_LIST_HEAD(&sched->msgs);
>>> INIT_WORK(&sched->work_process_msg, xe_sched_process_msg_work);
>>>
>>> - return drm_sched_init(&sched->base, ops, submit_wq, 1, hw_submission,
>>> - hang_limit, timeout, timeout_wq, score, name,
>>> - dev);
>>> + memset(&params, 0, sizeof(struct drm_sched_init_params));
>>> +
>>> + params.ops = ops;
>>> + params.submit_wq = submit_wq;
>>> + params.num_rqs = 1;
>>> + params.credit_limit = hw_submission;
>>> + params.hang_limit = hang_limit;
>>> + params.timeout = timeout;
>>> + params.timeout_wq = timeout_wq;
>>> + params.score = score;
>>> + params.name = name;
>>> + params.dev = dev;
>>> +
>>> + return drm_sched_init(&sched->base, &params);
>>> }
>>>
>>> void xe_sched_fini(struct xe_gpu_scheduler *sched)
>>> diff --git a/include/drm/gpu_scheduler.h
>>> b/include/drm/gpu_scheduler.h
>>> index 95e17504e46a..1a834ef43862 100644
>>> --- a/include/drm/gpu_scheduler.h
>>> +++ b/include/drm/gpu_scheduler.h
>>> @@ -553,12 +553,37 @@ struct drm_gpu_scheduler {
>>> struct device *dev;
>>> };
>>>
>>> +/**
>>> + * struct drm_sched_init_params - parameters for initializing a DRM GPU scheduler
>>> + *
>>> + * @ops: backend operations provided by the driver
>>> + * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
>>> + * allocated and used
>>> + * @num_rqs: Number of run-queues. This is at most DRM_SCHED_PRIORITY_COUNT,
>>> + * as there's usually one run-queue per priority, but could be less.
>>> + * @credit_limit: the number of credits this scheduler can hold from all jobs
>>> + * @hang_limit: number of times to allow a job to hang before dropping it
>>> + * @timeout: timeout value in jiffies for the scheduler
>>> + * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
>>> + * used
>>> + * @score: optional score atomic shared with other schedulers
>>> + * @name: name used for debugging
>>> + * @dev: associated device. Used for debugging
>>> + */
>>> +struct drm_sched_init_params {
>>> + const struct drm_sched_backend_ops *ops;
>>> + struct workqueue_struct *submit_wq;
>>> + struct workqueue_struct *timeout_wq;
>>> + u32 num_rqs, credit_limit;
>>> + unsigned int hang_limit;
>>> + long timeout;
>>> + atomic_t *score;
>>> + const char *name;
>>> + struct device *dev;
>>> +};
>>> +
>>> int drm_sched_init(struct drm_gpu_scheduler *sched,
>>> - const struct drm_sched_backend_ops *ops,
>>> - struct workqueue_struct *submit_wq,
>>> - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
>>> - long timeout, struct workqueue_struct *timeout_wq,
>>> - atomic_t *score, const char *name, struct device *dev);
>>> + const struct drm_sched_init_params *params);
>>>
>>> void drm_sched_fini(struct drm_gpu_scheduler *sched);
>>> int drm_sched_job_init(struct drm_sched_job *job,
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 15:06 ` Christian König
@ 2025-01-22 15:23 ` Philipp Stanner
2025-01-22 15:37 ` Christian König
2025-01-22 15:29 ` Matthew Brost
1 sibling, 1 reply; 35+ messages in thread
From: Philipp Stanner @ 2025-01-22 15:23 UTC (permalink / raw)
To: Christian König, Philipp Stanner, Alex Deucher, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Danilo Krummrich, Boris Brezillon,
Rob Herring, Steven Price, Liviu Dudau, Luben Tuikov,
Matthew Brost, Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li
Cc: amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
On Wed, 2025-01-22 at 16:06 +0100, Christian König wrote:
> Am 22.01.25 um 15:48 schrieb Philipp Stanner:
> > On Wed, 2025-01-22 at 15:34 +0100, Christian König wrote:
> > > Am 22.01.25 um 15:08 schrieb Philipp Stanner:
> > > > drm_sched_init() has a great many parameters and upcoming new
> > > > functionality for the scheduler might add even more. Generally,
> > > > the
> > > > great number of parameters reduces readability and has already
> > > > caused
> > > > one misnaming in:
> > > >
> > > > commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in
> > > > nouveau_sched_init()").
> > > >
> > > > Introduce a new struct for the scheduler init parameters and
> > > > port
> > > > all
> > > > users.
> > > >
> > > > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > > > ---
> > > > Howdy,
> > > >
> > > > I have a patch-series in the pipe that will add a `flags`
> > > > argument
> > > > to
> > > > drm_sched_init(). I thought it would be wise to first rework
> > > > the
> > > > API as
> > > > detailed in this patch. It's really a lot of parameters by now,
> > > > and
> > > > I
> > > > would expect that it might get more and more over the years for
> > > > special
> > > > use cases etc.
> > > >
> > > > Regards,
> > > > P.
> > > > ---
> > > > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +++-
> > > > drivers/gpu/drm/etnaviv/etnaviv_sched.c | 20 ++-
> > > > drivers/gpu/drm/imagination/pvr_queue.c | 21 +++-
> > > > drivers/gpu/drm/lima/lima_sched.c | 21 +++-
> > > > drivers/gpu/drm/msm/msm_ringbuffer.c | 22 ++--
> > > > drivers/gpu/drm/nouveau/nouveau_sched.c | 20 ++-
> > > > drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++--
> > > > drivers/gpu/drm/panthor/panthor_mmu.c | 18 ++-
> > > > drivers/gpu/drm/panthor/panthor_sched.c | 23 ++--
> > > > drivers/gpu/drm/scheduler/sched_main.c | 53 +++-----
> > > > drivers/gpu/drm/v3d/v3d_sched.c | 135 +++++++++++++++------
> > > > drivers/gpu/drm/xe/xe_execlist.c | 20 ++-
> > > > drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++-
> > > > include/drm/gpu_scheduler.h | 35 +++++-
> > > > 14 files changed, 311 insertions(+), 139 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > > > index cd4fac120834..c1f03eb5f5ea 100644
> > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > > > @@ -2821,6 +2821,9 @@ static int
> > > > amdgpu_device_init_schedulers(struct amdgpu_device *adev)
> > > > {
> > > > long timeout;
> > > > int r, i;
> > > > + struct drm_sched_init_params params;
> > > Please keep the reverse xmas tree ordering for variable
> > > declaration.
> > > E.g. long lines first and variables like "i" and "r" last.
> > Okay dokay
> >
> > > Apart from that looks like a good idea to me.
> > >
> > >
> > > > +
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > >
> > > > for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
> > > > struct amdgpu_ring *ring = adev->rings[i];
> > > > @@ -2844,12 +2847,18 @@ static int
> > > > amdgpu_device_init_schedulers(struct amdgpu_device *adev)
> > > > break;
> > > > }
> > > >
> > > > - r = drm_sched_init(&ring->sched, &amdgpu_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - ring->num_hw_submission, 0,
> > > > - timeout, adev->reset_domain->wq,
> > > > - ring->sched_score, ring->name,
> > > > - adev->dev);
> > > > + params.ops = &amdgpu_sched_ops;
> > > > + params.submit_wq = NULL; /* NULL: an ordered wq is allocated and used. */
> > > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params.credit_limit = ring->num_hw_submission;
> > > > + params.hang_limit = 0;
> > > Could we please remove the hang limit as first step to get this
> > > awful
> > > feature deprecated?
> > Remove it? From the struct you mean?
> >
> > We can mark it as deprecated in the docstring of the new struct.
> > That's
> > what you mean, don't you?
>
> No, the function using this parameter is already deprecated. What I
> meant is to start removing this feature completely.
>
> The hang_limit basically says how often the scheduler should try to
> run
> a job over and over again before giving up.
Agreed, it should be removed.
But let me do that in a separate patch after this one is merged, and
just hint at the deprecation in the field's docstring in the struct for
now; it's kind of unrelated to the init() rework I'm doing here, ack?
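For concreteness, such a docstring hint might look like this in the new struct's kernel-doc (wording is illustrative only, not part of the patch):

```c
/**
 * @hang_limit: number of times to allow a job to hang before dropping it.
 *	Deprecated: scheduled for removal; once gone, a job will be
 *	dropped on its first timeout instead of being resubmitted.
 */
```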
>
> And we already agreed that trying the same thing over and over again
> and
> expecting different results is the definition of insanity :)
I'll quote you (and Einstein) with that if I ever give a presentation
about the scheduler ;p
P.
>
> So my suggestion is to drop the parameter and drop the job as soon as
> it
> caused a timeout.
>
> Regards,
> Christian.
>
> >
> > P.
> >
> > > Thanks,
> > > Christian.
> > >
> > > > + params.timeout = timeout;
> > > > + params.timeout_wq = adev->reset_domain->wq;
> > > > + params.score = ring->sched_score;
> > > > + params.name = ring->name;
> > > > + params.dev = adev->dev;
> > > > +
> > > > + r = drm_sched_init(&ring->sched, &params);
> > > > if (r) {
> > > > DRM_ERROR("Failed to create scheduler
> > > > on
> > > > ring %s.\n",
> > > > ring->name);
> > > > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > > > b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > > > index 5b67eda122db..7d8517f1963e 100644
> > > > --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > > > +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > > > @@ -145,12 +145,22 @@ int etnaviv_sched_push_job(struct etnaviv_gem_submit *submit)
> > > > int etnaviv_sched_init(struct etnaviv_gpu *gpu)
> > > > {
> > > > int ret;
> > > > + struct drm_sched_init_params params;
> > > >
> > > > - ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
> > > > - msecs_to_jiffies(500), NULL, NULL,
> > > > - dev_name(gpu->dev), gpu->dev);
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > + params.ops = &etnaviv_sched_ops;
> > > > + params.submit_wq = NULL; /* NULL: an ordered wq is allocated and used. */
> > > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params.credit_limit = etnaviv_hw_jobs_limit;
> > > > + params.hang_limit = etnaviv_job_hang_limit;
> > > > + params.timeout = msecs_to_jiffies(500);
> > > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > > + params.score = NULL;
> > > > + params.name = dev_name(gpu->dev);
> > > > + params.dev = gpu->dev;
> > > > +
> > > > + ret = drm_sched_init(&gpu->sched, &params);
> > > > if (ret)
> > > > return ret;
> > > >
> > > > diff --git a/drivers/gpu/drm/imagination/pvr_queue.c
> > > > b/drivers/gpu/drm/imagination/pvr_queue.c
> > > > index c4f08432882b..03a2ce1a88e7 100644
> > > > --- a/drivers/gpu/drm/imagination/pvr_queue.c
> > > > +++ b/drivers/gpu/drm/imagination/pvr_queue.c
> > > > @@ -1211,10 +1211,13 @@ struct pvr_queue *pvr_queue_create(struct pvr_context *ctx,
> > > > };
> > > > struct pvr_device *pvr_dev = ctx->pvr_dev;
> > > > struct drm_gpu_scheduler *sched;
> > > > + struct drm_sched_init_params sched_params;
> > > > struct pvr_queue *queue;
> > > > int ctx_state_size, err;
> > > > void *cpu_map;
> > > >
> > > > + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > if (WARN_ON(type >= sizeof(props)))
> > > > return ERR_PTR(-EINVAL);
> > > >
> > > > @@ -1282,12 +1285,18 @@ struct pvr_queue *pvr_queue_create(struct pvr_context *ctx,
> > > >
> > > > queue->timeline_ufo.value = cpu_map;
> > > >
> > > > - err = drm_sched_init(&queue->scheduler,
> > > > - &pvr_queue_sched_ops,
> > > > - pvr_dev->sched_wq, 1, 64 * 1024, 1,
> > > > - msecs_to_jiffies(500),
> > > > - pvr_dev->sched_wq, NULL, "pvr-queue",
> > > > - pvr_dev->base.dev);
> > > > + sched_params.ops = &pvr_queue_sched_ops;
> > > > + sched_params.submit_wq = pvr_dev->sched_wq;
> > > > + sched_params.num_rqs = 1;
> > > > + sched_params.credit_limit = 64 * 1024;
> > > > + sched_params.hang_limit = 1;
> > > > + sched_params.timeout = msecs_to_jiffies(500);
> > > > + sched_params.timeout_wq = pvr_dev->sched_wq;
> > > > + sched_params.score = NULL;
> > > > + sched_params.name = "pvr-queue";
> > > > + sched_params.dev = pvr_dev->base.dev;
> > > > +
> > > > + err = drm_sched_init(&queue->scheduler, &sched_params);
> > > > if (err)
> > > > goto err_release_ufo;
> > > >
> > > > diff --git a/drivers/gpu/drm/lima/lima_sched.c
> > > > b/drivers/gpu/drm/lima/lima_sched.c
> > > > index b40c90e97d7e..a64c50fb6d1e 100644
> > > > --- a/drivers/gpu/drm/lima/lima_sched.c
> > > > +++ b/drivers/gpu/drm/lima/lima_sched.c
> > > > @@ -513,20 +513,29 @@ static void lima_sched_recover_work(struct work_struct *work)
> > > >
> > > > int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char *name)
> > > > {
> > > > + struct drm_sched_init_params params;
> > > > unsigned int timeout = lima_sched_timeout_ms > 0 ?
> > > > lima_sched_timeout_ms : 10000;
> > > >
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > pipe->fence_context = dma_fence_context_alloc(1);
> > > > spin_lock_init(&pipe->fence_lock);
> > > >
> > > > INIT_WORK(&pipe->recover_work,
> > > > lima_sched_recover_work);
> > > >
> > > > - return drm_sched_init(&pipe->base, &lima_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - 1,
> > > > - lima_job_hang_limit,
> > > > - msecs_to_jiffies(timeout), NULL,
> > > > - NULL, name, pipe->ldev->dev);
> > > > + params.ops = &lima_sched_ops;
> > > > + params.submit_wq = NULL; /* NULL: an ordered wq is allocated and used. */
> > > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params.credit_limit = 1;
> > > > + params.hang_limit = lima_job_hang_limit;
> > > > + params.timeout = msecs_to_jiffies(timeout);
> > > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > > + params.score = NULL;
> > > > + params.name = name;
> > > > + params.dev = pipe->ldev->dev;
> > > > +
> > > > + return drm_sched_init(&pipe->base, &params);
> > > > }
> > > >
> > > > void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
> > > > diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c
> > > > b/drivers/gpu/drm/msm/msm_ringbuffer.c
> > > > index c803556a8f64..49a2c7422dc6 100644
> > > > --- a/drivers/gpu/drm/msm/msm_ringbuffer.c
> > > > +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
> > > > @@ -59,11 +59,13 @@ static const struct drm_sched_backend_ops msm_sched_ops = {
> > > > struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
> > > > void *memptrs, uint64_t memptrs_iova)
> > > > {
> > > > + struct drm_sched_init_params params;
> > > > struct msm_ringbuffer *ring;
> > > > - long sched_timeout;
> > > > char name[32];
> > > > int ret;
> > > >
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > /* We assume everywhere that MSM_GPU_RINGBUFFER_SZ is a power of 2 */
> > > > BUILD_BUG_ON(!is_power_of_2(MSM_GPU_RINGBUFFER_SZ));
> > > >
> > > > @@ -95,13 +97,19 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
> > > > ring->memptrs = memptrs;
> > > > ring->memptrs_iova = memptrs_iova;
> > > >
> > > > - /* currently managing hangcheck ourselves: */
> > > > - sched_timeout = MAX_SCHEDULE_TIMEOUT;
> > > > + params.ops = &msm_sched_ops;
> > > > + params.submit_wq = NULL; /* NULL: an ordered wq is allocated and used. */
> > > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params.credit_limit = num_hw_submissions;
> > > > + params.hang_limit = 0;
> > > > + /* currently managing hangcheck ourselves: */
> > > > + params.timeout = MAX_SCHEDULE_TIMEOUT;
> > > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > > + params.score = NULL;
> > > > + params.name = to_msm_bo(ring->bo)->name;
> > > > + params.dev = gpu->dev->dev;
> > > >
> > > > - ret = drm_sched_init(&ring->sched, &msm_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - num_hw_submissions, 0, sched_timeout,
> > > > - NULL, NULL, to_msm_bo(ring->bo)->name, gpu->dev->dev);
> > > > + ret = drm_sched_init(&ring->sched, &params);
> > > > if (ret) {
> > > > goto fail;
> > > > }
> > > > diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c
> > > > b/drivers/gpu/drm/nouveau/nouveau_sched.c
> > > > index 4412f2711fb5..f20c2e612750 100644
> > > > --- a/drivers/gpu/drm/nouveau/nouveau_sched.c
> > > > +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
> > > > @@ -404,9 +404,11 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm,
> > > > {
> > > > struct drm_gpu_scheduler *drm_sched = &sched->base;
> > > > struct drm_sched_entity *entity = &sched->entity;
> > > > - const long timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
> > > > + struct drm_sched_init_params params;
> > > > int ret;
> > > >
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > if (!wq) {
> > > > wq = alloc_workqueue("nouveau_sched_wq_%d", 0, WQ_MAX_ACTIVE,
> > > > current->pid);
> > > > @@ -416,10 +418,18 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm,
> > > > sched->wq = wq;
> > > > }
> > > >
> > > > - ret = drm_sched_init(drm_sched, &nouveau_sched_ops, wq,
> > > > - NOUVEAU_SCHED_PRIORITY_COUNT,
> > > > - credit_limit, 0, timeout,
> > > > - NULL, NULL, "nouveau_sched", drm->dev->dev);
> > > > + params.ops = &nouveau_sched_ops;
> > > > + params.submit_wq = wq;
> > > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params.credit_limit = credit_limit;
> > > > + params.hang_limit = 0;
> > > > + params.timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
> > > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > > + params.score = NULL;
> > > > + params.name = "nouveau_sched";
> > > > + params.dev = drm->dev->dev;
> > > > +
> > > > + ret = drm_sched_init(drm_sched, &params);
> > > > if (ret)
> > > > goto fail_wq;
> > > >
> > > > diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > b/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > index 9b8e82fb8bc4..6b509ff446b5 100644
> > > > --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > @@ -836,10 +836,13 @@ static irqreturn_t panfrost_job_irq_handler(int irq, void *data)
> > > >
> > > > int panfrost_job_init(struct panfrost_device *pfdev)
> > > > {
> > > > + struct drm_sched_init_params params;
> > > > struct panfrost_job_slot *js;
> > > > unsigned int nentries = 2;
> > > > int ret, j;
> > > >
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > /* All GPUs have two entries per queue, but without jobchain
> > > > * disambiguation stopping the right job in the close path is tricky,
> > > > * so let's just advertise one entry in that case.
> > > > @@ -872,16 +875,21 @@ int panfrost_job_init(struct panfrost_device *pfdev)
> > > > if (!pfdev->reset.wq)
> > > > return -ENOMEM;
> > > >
> > > > + params.ops = &panfrost_sched_ops;
> > > > + params.submit_wq = NULL; /* NULL: an ordered wq is allocated and used. */
> > > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params.credit_limit = nentries;
> > > > + params.hang_limit = 0;
> > > > + params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> > > > + params.timeout_wq = pfdev->reset.wq;
> > > > + params.score = NULL;
> > > > + params.name = "pan_js";
> > > > + params.dev = pfdev->dev;
> > > > +
> > > > for (j = 0; j < NUM_JOB_SLOTS; j++) {
> > > > js->queue[j].fence_context = dma_fence_context_alloc(1);
> > > >
> > > > - ret = drm_sched_init(&js->queue[j].sched,
> > > > - &panfrost_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - nentries, 0,
> > > > - msecs_to_jiffies(JOB_TIMEOUT_MS),
> > > > - pfdev->reset.wq,
> > > > - NULL, "pan_js", pfdev->dev);
> > > > + ret = drm_sched_init(&js->queue[j].sched, &params);
> > > > if (ret) {
> > > > dev_err(pfdev->dev, "Failed to create scheduler: %d.", ret);
> > > > goto err_sched;
> > > > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c
> > > > b/drivers/gpu/drm/panthor/panthor_mmu.c
> > > > index a49132f3778b..4362442cbfd8 100644
> > > > --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> > > > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> > > > @@ -2268,6 +2268,7 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
> > > > u64 full_va_range = 1ull << va_bits;
> > > > struct drm_gem_object *dummy_gem;
> > > > struct drm_gpu_scheduler *sched;
> > > > + struct drm_sched_init_params sched_params;
> > > > struct io_pgtable_cfg pgtbl_cfg;
> > > > u64 mair, min_va, va_range;
> > > > struct panthor_vm *vm;
> > > > @@ -2284,6 +2285,8 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
> > > > goto err_free_vm;
> > > > }
> > > >
> > > > + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > mutex_init(&vm->heaps.lock);
> > > > vm->for_mcu = for_mcu;
> > > > vm->ptdev = ptdev;
> > > > @@ -2325,11 +2328,18 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
> > > > goto err_mm_takedown;
> > > > }
> > > >
> > > > + sched_params.ops = &panthor_vm_bind_ops;
> > > > + sched_params.submit_wq = ptdev->mmu->vm.wq;
> > > > + sched_params.num_rqs = 1;
> > > > + sched_params.credit_limit = 1;
> > > > + sched_params.hang_limit = 0;
> > > > /* Bind operations are synchronous for now, no timeout needed. */
> > > > - ret = drm_sched_init(&vm->sched, &panthor_vm_bind_ops, ptdev->mmu->vm.wq,
> > > > - 1, 1, 0,
> > > > - MAX_SCHEDULE_TIMEOUT, NULL, NULL,
> > > > - "panthor-vm-bind", ptdev->base.dev);
> > > > + sched_params.timeout = MAX_SCHEDULE_TIMEOUT;
> > > > + sched_params.timeout_wq = NULL; /* Use the system_wq. */
> > > > + sched_params.score = NULL;
> > > > + sched_params.name = "panthor-vm-bind";
> > > > + sched_params.dev = ptdev->base.dev;
> > > > + ret = drm_sched_init(&vm->sched, &sched_params);
> > > > if (ret)
> > > > goto err_free_io_pgtable;
> > > >
> > > > diff --git a/drivers/gpu/drm/panthor/panthor_sched.c
> > > > b/drivers/gpu/drm/panthor/panthor_sched.c
> > > > index ef4bec7ff9c7..a324346d302f 100644
> > > > --- a/drivers/gpu/drm/panthor/panthor_sched.c
> > > > +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> > > > @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group *group,
> > > > const struct drm_panthor_queue_create *args)
> > > > {
> > > > struct drm_gpu_scheduler *drm_sched;
> > > > + struct drm_sched_init_params sched_params;
> > > > struct panthor_queue *queue;
> > > > int ret;
> > > >
> > > > @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group *group,
> > > > if (!queue)
> > > > return ERR_PTR(-ENOMEM);
> > > >
> > > > + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > queue->fence_ctx.id = dma_fence_context_alloc(1);
> > > > spin_lock_init(&queue->fence_ctx.lock);
> > > > INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
> > > > @@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group *group,
> > > > if (ret)
> > > > goto err_free_queue;
> > > >
> > > > + sched_params.ops = &panthor_queue_sched_ops;
> > > > + sched_params.submit_wq = group->ptdev->scheduler->wq;
> > > > + sched_params.num_rqs = 1;
> > > > /*
> > > > - * Credit limit argument tells us the total number of instructions
> > > > + * The credit limit argument tells us the total number of instructions
> > > > * across all CS slots in the ringbuffer, with some jobs requiring
> > > > * twice as many as others, depending on their profiling status.
> > > > */
> > > > - ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
> > > > - group->ptdev->scheduler->wq, 1,
> > > > - args->ringbuf_size / sizeof(u64),
> > > > - 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
> > > > - group->ptdev->reset.wq,
> > > > - NULL, "panthor-queue", group->ptdev->base.dev);
> > > > + sched_params.credit_limit = args->ringbuf_size / sizeof(u64);
> > > > + sched_params.hang_limit = 0;
> > > > + sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> > > > + sched_params.timeout_wq = group->ptdev->reset.wq;
> > > > + sched_params.score = NULL;
> > > > + sched_params.name = "panthor-queue";
> > > > + sched_params.dev = group->ptdev->base.dev;
> > > > +
> > > > + ret = drm_sched_init(&queue->scheduler, &sched_params);
> > > > if (ret)
> > > > goto err_free_queue;
> > > >
> > > > diff --git a/drivers/gpu/drm/scheduler/sched_main.c
> > > > b/drivers/gpu/drm/scheduler/sched_main.c
> > > > index 57da84908752..27db748a5269 100644
> > > > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > > > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > > > @@ -1240,40 +1240,25 @@ static void drm_sched_run_job_work(struct work_struct *w)
> > > > * drm_sched_init - Init a gpu scheduler instance
> > > > *
> > > > * @sched: scheduler instance
> > > > - * @ops: backend operations for this scheduler
> > > > - * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
> > > > - * allocated and used
> > > > - * @num_rqs: number of runqueues, one for each priority, up to DRM_SCHED_PRIORITY_COUNT
> > > > - * @credit_limit: the number of credits this scheduler can hold from all jobs
> > > > - * @hang_limit: number of times to allow a job to hang before dropping it
> > > > - * @timeout: timeout value in jiffies for the scheduler
> > > > - * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
> > > > - * used
> > > > - * @score: optional score atomic shared with other schedulers
> > > > - * @name: name used for debugging
> > > > - * @dev: target &struct device
> > > > + * @params: scheduler initialization parameters
> > > > *
> > > > * Return 0 on success, otherwise error code.
> > > > */
> > > > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > > - const struct drm_sched_backend_ops *ops,
> > > > - struct workqueue_struct *submit_wq,
> > > > - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> > > > - long timeout, struct workqueue_struct *timeout_wq,
> > > > - atomic_t *score, const char *name, struct device *dev)
> > > > + const struct drm_sched_init_params *params)
> > > > {
> > > > int i;
> > > >
> > > > - sched->ops = ops;
> > > > - sched->credit_limit = credit_limit;
> > > > - sched->name = name;
> > > > - sched->timeout = timeout;
> > > > - sched->timeout_wq = timeout_wq ? : system_wq;
> > > > - sched->hang_limit = hang_limit;
> > > > - sched->score = score ? score : &sched->_score;
> > > > - sched->dev = dev;
> > > > + sched->ops = params->ops;
> > > > + sched->credit_limit = params->credit_limit;
> > > > + sched->name = params->name;
> > > > + sched->timeout = params->timeout;
> > > > + sched->timeout_wq = params->timeout_wq ? : system_wq;
> > > > + sched->hang_limit = params->hang_limit;
> > > > + sched->score = params->score ? params->score : &sched->_score;
> > > > + sched->dev = params->dev;
> > > >
> > > > - if (num_rqs > DRM_SCHED_PRIORITY_COUNT) {
> > > > + if (params->num_rqs > DRM_SCHED_PRIORITY_COUNT) {
> > > > /* This is a gross violation--tell drivers what the problem is.
> > > > */
> > > > drm_err(sched, "%s: num_rqs cannot be greater than DRM_SCHED_PRIORITY_COUNT\n",
> > > > @@ -1288,16 +1273,16 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > > return 0;
> > > > }
> > > >
> > > > - if (submit_wq) {
> > > > - sched->submit_wq = submit_wq;
> > > > + if (params->submit_wq) {
> > > > + sched->submit_wq = params->submit_wq;
> > > > sched->own_submit_wq = false;
> > > > } else {
> > > > #ifdef CONFIG_LOCKDEP
> > > > - sched->submit_wq = alloc_ordered_workqueue_lockdep_map(name,
> > > > - WQ_MEM_RECLAIM,
> > > > - &drm_sched_lockdep_map);
> > > > + sched->submit_wq = alloc_ordered_workqueue_lockdep_map(
> > > > + params->name, WQ_MEM_RECLAIM,
> > > > + &drm_sched_lockdep_map);
> > > > #else
> > > > - sched->submit_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
> > > > + sched->submit_wq = alloc_ordered_workqueue(params->name, WQ_MEM_RECLAIM);
> > > > #endif
> > > > if (!sched->submit_wq)
> > > > return -ENOMEM;
> > > > @@ -1305,11 +1290,11 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > > sched->own_submit_wq = true;
> > > > }
> > > >
> > > > - sched->sched_rq = kmalloc_array(num_rqs, sizeof(*sched->sched_rq),
> > > > + sched->sched_rq = kmalloc_array(params->num_rqs, sizeof(*sched->sched_rq),
> > > > GFP_KERNEL | __GFP_ZERO);
> > > > if (!sched->sched_rq)
> > > > goto Out_check_own;
> > > > - sched->num_rqs = num_rqs;
> > > > + sched->num_rqs = params->num_rqs;
> > > > for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) {
> > > > sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL);
> > > > if (!sched->sched_rq[i])
> > > > diff --git a/drivers/gpu/drm/v3d/v3d_sched.c
> > > > b/drivers/gpu/drm/v3d/v3d_sched.c
> > > > index 99ac4995b5a1..716e6d074d87 100644
> > > > --- a/drivers/gpu/drm/v3d/v3d_sched.c
> > > > +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> > > > @@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops v3d_cpu_sched_ops = {
> > > > .free_job = v3d_cpu_job_free
> > > > };
> > > >
> > > > +/*
> > > > + * v3d's scheduler instances are all identical, except for ops and name.
> > > > + */
> > > > +static void
> > > > +v3d_common_sched_init(struct drm_sched_init_params *params, struct device *dev)
> > > > +{
> > > > + memset(params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > + params->submit_wq = NULL; /* NULL: an ordered wq is allocated and used. */
> > > > + params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params->credit_limit = 1;
> > > > + params->hang_limit = 0;
> > > > + params->timeout = msecs_to_jiffies(500);
> > > > + params->timeout_wq = NULL; /* Use the system_wq. */
> > > > + params->score = NULL;
> > > > + params->dev = dev;
> > > > +}
> > > > +
> > > > +static int
> > > > +v3d_bin_sched_init(struct v3d_dev *v3d)
> > > > +{
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > + v3d_common_sched_init(&params, v3d->drm.dev);
> > > > + params.ops = &v3d_bin_sched_ops;
> > > > + params.name = "v3d_bin";
> > > > +
> > > > + return drm_sched_init(&v3d->queue[V3D_BIN].sched, &params);
> > > > +}
> > > > +
> > > > +static int
> > > > +v3d_render_sched_init(struct v3d_dev *v3d)
> > > > +{
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > + v3d_common_sched_init(&params, v3d->drm.dev);
> > > > + params.ops = &v3d_render_sched_ops;
> > > > + params.name = "v3d_render";
> > > > +
> > > > + return drm_sched_init(&v3d->queue[V3D_RENDER].sched, &params);
> > > > +}
> > > > +
> > > > +static int
> > > > +v3d_tfu_sched_init(struct v3d_dev *v3d)
> > > > +{
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > + v3d_common_sched_init(&params, v3d->drm.dev);
> > > > + params.ops = &v3d_tfu_sched_ops;
> > > > + params.name = "v3d_tfu";
> > > > +
> > > > + return drm_sched_init(&v3d->queue[V3D_TFU].sched, &params);
> > > > +}
> > > > +
> > > > +static int
> > > > +v3d_csd_sched_init(struct v3d_dev *v3d)
> > > > +{
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > + v3d_common_sched_init(&params, v3d->drm.dev);
> > > > + params.ops = &v3d_csd_sched_ops;
> > > > + params.name = "v3d_csd";
> > > > +
> > > > + return drm_sched_init(&v3d->queue[V3D_CSD].sched, &params);
> > > > +}
> > > > +
> > > > +static int
> > > > +v3d_cache_sched_init(struct v3d_dev *v3d)
> > > > +{
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > + v3d_common_sched_init(&params, v3d->drm.dev);
> > > > + params.ops = &v3d_cache_clean_sched_ops;
> > > > + params.name = "v3d_cache_clean";
> > > > +
> > > > + return drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched, &params);
> > > > +}
> > > > +
> > > > +static int
> > > > +v3d_cpu_sched_init(struct v3d_dev *v3d)
> > > > +{
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > + v3d_common_sched_init(&params, v3d->drm.dev);
> > > > + params.ops = &v3d_cpu_sched_ops;
> > > > + params.name = "v3d_cpu";
> > > > +
> > > > + return drm_sched_init(&v3d->queue[V3D_CPU].sched, &params);
> > > > +}
> > > > +
> > > > int
> > > > v3d_sched_init(struct v3d_dev *v3d)
> > > > {
> > > > - int hw_jobs_limit = 1;
> > > > - int job_hang_limit = 0;
> > > > - int hang_limit_ms = 500;
> > > > int ret;
> > > >
> > > > - ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
> > > > - &v3d_bin_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - hw_jobs_limit, job_hang_limit,
> > > > - msecs_to_jiffies(hang_limit_ms), NULL,
> > > > - NULL, "v3d_bin", v3d->drm.dev);
> > > > + ret = v3d_bin_sched_init(v3d);
> > > > if (ret)
> > > > return ret;
> > > >
> > > > - ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
> > > > - &v3d_render_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - hw_jobs_limit, job_hang_limit,
> > > > - msecs_to_jiffies(hang_limit_ms), NULL,
> > > > - NULL, "v3d_render", v3d->drm.dev);
> > > > + ret = v3d_render_sched_init(v3d);
> > > > if (ret)
> > > > goto fail;
> > > >
> > > > - ret = drm_sched_init(&v3d->queue[V3D_TFU].sched,
> > > > - &v3d_tfu_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - hw_jobs_limit, job_hang_limit,
> > > > - msecs_to_jiffies(hang_limit_ms), NULL,
> > > > - NULL, "v3d_tfu", v3d->drm.dev);
> > > > + ret = v3d_tfu_sched_init(v3d);
> > > > if (ret)
> > > > goto fail;
> > > >
> > > > if (v3d_has_csd(v3d)) {
> > > > - ret = drm_sched_init(&v3d->queue[V3D_CSD].sched,
> > > > - &v3d_csd_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - hw_jobs_limit, job_hang_limit,
> > > > - msecs_to_jiffies(hang_limit_ms), NULL,
> > > > - NULL, "v3d_csd", v3d->drm.dev);
> > > > + ret = v3d_csd_sched_init(v3d);
> > > > if (ret)
> > > > goto fail;
> > > >
> > > > - ret = drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
> > > > - &v3d_cache_clean_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - hw_jobs_limit, job_hang_limit,
> > > > - msecs_to_jiffies(hang_limit_ms), NULL,
> > > > - NULL, "v3d_cache_clean", v3d->drm.dev);
> > > > + ret = v3d_cache_sched_init(v3d);
> > > > if (ret)
> > > > goto fail;
> > > > }
> > > >
> > > > - ret = drm_sched_init(&v3d->queue[V3D_CPU].sched,
> > > > - &v3d_cpu_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - 1, job_hang_limit,
> > > > - msecs_to_jiffies(hang_limit_ms), NULL,
> > > > - NULL, "v3d_cpu", v3d->drm.dev);
> > > > + ret = v3d_cpu_sched_init(v3d);
> > > > if (ret)
> > > > goto fail;
> > > >
> > > > diff --git a/drivers/gpu/drm/xe/xe_execlist.c
> > > > b/drivers/gpu/drm/xe/xe_execlist.c
> > > > index a8c416a48812..7f29b7f04af4 100644
> > > > --- a/drivers/gpu/drm/xe/xe_execlist.c
> > > > +++ b/drivers/gpu/drm/xe/xe_execlist.c
> > > > @@ -332,10 +332,13 @@ static const struct drm_sched_backend_ops
> > > > drm_sched_ops = {
> > > > static int execlist_exec_queue_init(struct xe_exec_queue *q)
> > > > {
> > > > struct drm_gpu_scheduler *sched;
> > > > + struct drm_sched_init_params params;
> > > > struct xe_execlist_exec_queue *exl;
> > > > struct xe_device *xe = gt_to_xe(q->gt);
> > > > int err;
> > > >
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > xe_assert(xe, !xe_device_uc_enabled(xe));
> > > >
> > > > drm_info(&xe->drm, "Enabling execlist submission (GuC
> > > > submission disabled)\n");
> > > > @@ -346,11 +349,18 @@ static int
> > > > execlist_exec_queue_init(struct
> > > > xe_exec_queue *q)
> > > >
> > > > exl->q = q;
> > > >
> > > > - err = drm_sched_init(&exl->sched, &drm_sched_ops, NULL, 1,
> > > > - q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES,
> > > > - XE_SCHED_HANG_LIMIT, XE_SCHED_JOB_TIMEOUT,
> > > > - NULL, NULL, q->hwe->name,
> > > > - gt_to_xe(q->gt)->drm.dev);
> > > > + params.ops = &drm_sched_ops;
> > > > + params.submit_wq = NULL; /* Use the system_wq. */
> > > > + params.num_rqs = 1;
> > > > + params.credit_limit = q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES;
> > > > + params.hang_limit = XE_SCHED_HANG_LIMIT;
> > > > + params.timeout = XE_SCHED_JOB_TIMEOUT;
> > > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > > + params.score = NULL;
> > > > + params.name = q->hwe->name;
> > > > + params.dev = gt_to_xe(q->gt)->drm.dev;
> > > > +
> > > > + err = drm_sched_init(&exl->sched, &params);
> > > > if (err)
> > > > goto err_free;
> > > >
> > > > diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > > > b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > > > index 50361b4638f9..2129fee83f25 100644
> > > > --- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > > > +++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > > > @@ -63,13 +63,26 @@ int xe_sched_init(struct xe_gpu_scheduler
> > > > *sched,
> > > > atomic_t *score, const char *name,
> > > > struct device *dev)
> > > > {
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > sched->ops = xe_ops;
> > > > INIT_LIST_HEAD(&sched->msgs);
> > > > INIT_WORK(&sched->work_process_msg,
> > > > xe_sched_process_msg_work);
> > > >
> > > > - return drm_sched_init(&sched->base, ops, submit_wq, 1, hw_submission,
> > > > - hang_limit, timeout, timeout_wq, score, name,
> > > > - dev);
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > + params.ops = ops;
> > > > + params.submit_wq = submit_wq;
> > > > + params.num_rqs = 1;
> > > > + params.credit_limit = hw_submission;
> > > > + params.hang_limit = hang_limit;
> > > > + params.timeout = timeout;
> > > > + params.timeout_wq = timeout_wq;
> > > > + params.score = score;
> > > > + params.name = name;
> > > > + params.dev = dev;
> > > > +
> > > > + return drm_sched_init(&sched->base, &params);
> > > > }
> > > >
> > > > void xe_sched_fini(struct xe_gpu_scheduler *sched)
> > > > diff --git a/include/drm/gpu_scheduler.h
> > > > b/include/drm/gpu_scheduler.h
> > > > index 95e17504e46a..1a834ef43862 100644
> > > > --- a/include/drm/gpu_scheduler.h
> > > > +++ b/include/drm/gpu_scheduler.h
> > > > @@ -553,12 +553,37 @@ struct drm_gpu_scheduler {
> > > > struct device *dev;
> > > > };
> > > >
> > > > +/**
> > > > + * struct drm_sched_init_params - parameters for initializing a DRM GPU scheduler
> > > > + *
> > > > + * @ops: backend operations provided by the driver
> > > > + * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
> > > > + * allocated and used
> > > > + * @num_rqs: Number of run-queues. This is at most DRM_SCHED_PRIORITY_COUNT,
> > > > + * as there's usually one run-queue per priority, but could be less.
> > > > + * @credit_limit: the number of credits this scheduler can hold from all jobs
> > > > + * @hang_limit: number of times to allow a job to hang before dropping it
> > > > + * @timeout: timeout value in jiffies for the scheduler
> > > > + * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
> > > > + * used
> > > > + * @score: optional score atomic shared with other schedulers
> > > > + * @name: name used for debugging
> > > > + * @dev: associated device. Used for debugging
> > > > + */
> > > > +struct drm_sched_init_params {
> > > > + const struct drm_sched_backend_ops *ops;
> > > > + struct workqueue_struct *submit_wq;
> > > > + struct workqueue_struct *timeout_wq;
> > > > + u32 num_rqs, credit_limit;
> > > > + unsigned int hang_limit;
> > > > + long timeout;
> > > > + atomic_t *score;
> > > > + const char *name;
> > > > + struct device *dev;
> > > > +};
> > > > +
> > > > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > > - const struct drm_sched_backend_ops *ops,
> > > > - struct workqueue_struct *submit_wq,
> > > > - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> > > > - long timeout, struct workqueue_struct *timeout_wq,
> > > > - atomic_t *score, const char *name, struct device *dev);
> > > > + const struct drm_sched_init_params *params);
> > > >
> > > > void drm_sched_fini(struct drm_gpu_scheduler *sched);
> > > > int drm_sched_job_init(struct drm_sched_job *job,
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 15:06 ` Christian König
2025-01-22 15:23 ` Philipp Stanner
@ 2025-01-22 15:29 ` Matthew Brost
1 sibling, 0 replies; 35+ messages in thread
From: Matthew Brost @ 2025-01-22 15:29 UTC (permalink / raw)
To: Christian König
Cc: Philipp Stanner, Philipp Stanner, Alex Deucher, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Danilo Krummrich, Boris Brezillon,
Rob Herring, Steven Price, Liviu Dudau, Luben Tuikov, Melissa Wen,
Maíra Canal, Lucas De Marchi, Thomas Hellström,
Rodrigo Vivi, Sunil Khatri, Lijo Lazar, Mario Limonciello, Ma Jun,
Yunxiang Li, amd-gfx, dri-devel, linux-kernel, etnaviv, lima,
linux-arm-msm, freedreno, nouveau, intel-xe
On Wed, Jan 22, 2025 at 04:06:10PM +0100, Christian König wrote:
> Am 22.01.25 um 15:48 schrieb Philipp Stanner:
> > On Wed, 2025-01-22 at 15:34 +0100, Christian König wrote:
> > > Am 22.01.25 um 15:08 schrieb Philipp Stanner:
> > > > drm_sched_init() has a great many parameters and upcoming new
> > > > functionality for the scheduler might add even more. Generally, the
> > > > great number of parameters reduces readability and has already
> > > > caused one misnaming in:
> > > >
> > > > commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in
> > > > nouveau_sched_init()").
> > > >
> > > > Introduce a new struct for the scheduler init parameters and port
> > > > all
> > > > users.
> > > >
> > > > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > > > ---
> > > > Howdy,
> > > >
> > > > I have a patch-series in the pipe that will add a `flags` argument
> > > > to
> > > > drm_sched_init(). I thought it would be wise to first rework the
> > > > API as
> > > > detailed in this patch. It's really a lot of parameters by now, and
> > > > I
> > > > would expect that it might get more and more over the years for
> > > > special
> > > > use cases etc.
> > > >
> > > > Regards,
> > > > P.
> > > > ---
> > > > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +++-
> > > > drivers/gpu/drm/etnaviv/etnaviv_sched.c | 20 ++-
> > > > drivers/gpu/drm/imagination/pvr_queue.c | 21 +++-
> > > > drivers/gpu/drm/lima/lima_sched.c | 21 +++-
> > > > drivers/gpu/drm/msm/msm_ringbuffer.c | 22 ++--
> > > > drivers/gpu/drm/nouveau/nouveau_sched.c | 20 ++-
> > > > drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++--
> > > > drivers/gpu/drm/panthor/panthor_mmu.c | 18 ++-
> > > > drivers/gpu/drm/panthor/panthor_sched.c | 23 ++--
> > > > drivers/gpu/drm/scheduler/sched_main.c | 53 +++-----
> > > > drivers/gpu/drm/v3d/v3d_sched.c | 135 +++++++++++++++------
> > > > drivers/gpu/drm/xe/xe_execlist.c | 20 ++-
> > > > drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++-
> > > > include/drm/gpu_scheduler.h | 35 +++++-
> > > > 14 files changed, 311 insertions(+), 139 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > > > b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > > > index cd4fac120834..c1f03eb5f5ea 100644
> > > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > > > @@ -2821,6 +2821,9 @@ static int
> > > > amdgpu_device_init_schedulers(struct amdgpu_device *adev)
> > > > {
> > > > long timeout;
> > > > int r, i;
> > > > + struct drm_sched_init_params params;
> > > Please keep the reverse xmas tree ordering for variable declaration.
> > > E.g. long lines first and variables like "i" and "r" last.
> > Okay dokay
> >
> > > Apart from that looks like a good idea to me.
> > >
> > >
> > > > +
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
> > > > struct amdgpu_ring *ring = adev->rings[i];
> > > > @@ -2844,12 +2847,18 @@ static int
> > > > amdgpu_device_init_schedulers(struct amdgpu_device *adev)
> > > > break;
> > > > }
> > > > - r = drm_sched_init(&ring->sched, &amdgpu_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - ring->num_hw_submission, 0,
> > > > - timeout, adev->reset_domain->wq,
> > > > - ring->sched_score, ring->name,
> > > > - adev->dev);
> > > > + params.ops = &amdgpu_sched_ops;
> > > > + params.submit_wq = NULL; /* Use the system_wq. */
> > > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params.credit_limit = ring->num_hw_submission;
> > > > + params.hang_limit = 0;
> > > Could we please remove the hang limit as first step to get this awful
> > > feature deprecated?
> > Remove it? From the struct you mean?
> >
> > We can mark it as deprecated in the docstring of the new struct. That's
> > what you mean, don't you?
>
> No, the function using this parameter is already deprecated. What I meant is to
> start completely removing this feature.
>
> The hang_limit basically says how often the scheduler should try to run a
> job over and over again before giving up.
>
> And we already agreed that trying the same thing over and over again and
> expecting different results is the definition of insanity :)
>
> So my suggestion is to drop the parameter and drop the job as soon as it
> caused a timeout.
>
In Xe we take this further: if a job hangs, we ban the entire queue. So
from our end the hang limit is useless.
Matt
> Regards,
> Christian.
>
> >
> > P.
> >
> > > Thanks,
> > > Christian.
> > >
> > > > + params.timeout = timeout;
> > > > + params.timeout_wq = adev->reset_domain->wq;
> > > > + params.score = ring->sched_score;
> > > > + params.name = ring->name;
> > > > + params.dev = adev->dev;
> > > > +
> > > > + r = drm_sched_init(&ring->sched, &params);
> > > > if (r) {
> > > > DRM_ERROR("Failed to create scheduler on
> > > > ring %s.\n",
> > > > ring->name);
> > > > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > > > b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > > > index 5b67eda122db..7d8517f1963e 100644
> > > > --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > > > +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
> > > > @@ -145,12 +145,22 @@ int etnaviv_sched_push_job(struct
> > > > etnaviv_gem_submit *submit)
> > > > int etnaviv_sched_init(struct etnaviv_gpu *gpu)
> > > > {
> > > > int ret;
> > > > + struct drm_sched_init_params params;
> > > > - ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - etnaviv_hw_jobs_limit, etnaviv_job_hang_limit,
> > > > - msecs_to_jiffies(500), NULL, NULL,
> > > > - dev_name(gpu->dev), gpu->dev);
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > + params.ops = &etnaviv_sched_ops;
> > > > + params.submit_wq = NULL; /* Use the system_wq. */
> > > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params.credit_limit = etnaviv_hw_jobs_limit;
> > > > + params.hang_limit = etnaviv_job_hang_limit;
> > > > + params.timeout = msecs_to_jiffies(500);
> > > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > > + params.score = NULL;
> > > > + params.name = dev_name(gpu->dev);
> > > > + params.dev = gpu->dev;
> > > > +
> > > > + ret = drm_sched_init(&gpu->sched, &params);
> > > > if (ret)
> > > > return ret;
> > > > diff --git a/drivers/gpu/drm/imagination/pvr_queue.c
> > > > b/drivers/gpu/drm/imagination/pvr_queue.c
> > > > index c4f08432882b..03a2ce1a88e7 100644
> > > > --- a/drivers/gpu/drm/imagination/pvr_queue.c
> > > > +++ b/drivers/gpu/drm/imagination/pvr_queue.c
> > > > @@ -1211,10 +1211,13 @@ struct pvr_queue *pvr_queue_create(struct
> > > > pvr_context *ctx,
> > > > };
> > > > struct pvr_device *pvr_dev = ctx->pvr_dev;
> > > > struct drm_gpu_scheduler *sched;
> > > > + struct drm_sched_init_params sched_params;
> > > > struct pvr_queue *queue;
> > > > int ctx_state_size, err;
> > > > void *cpu_map;
> > > > + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > if (WARN_ON(type >= sizeof(props)))
> > > > return ERR_PTR(-EINVAL);
> > > > @@ -1282,12 +1285,18 @@ struct pvr_queue *pvr_queue_create(struct
> > > > pvr_context *ctx,
> > > > queue->timeline_ufo.value = cpu_map;
> > > > - err = drm_sched_init(&queue->scheduler,
> > > > - &pvr_queue_sched_ops,
> > > > - pvr_dev->sched_wq, 1, 64 * 1024, 1,
> > > > - msecs_to_jiffies(500),
> > > > - pvr_dev->sched_wq, NULL, "pvr-queue",
> > > > - pvr_dev->base.dev);
> > > > + sched_params.ops = &pvr_queue_sched_ops;
> > > > + sched_params.submit_wq = pvr_dev->sched_wq;
> > > > + sched_params.num_rqs = 1;
> > > > + sched_params.credit_limit = 64 * 1024;
> > > > + sched_params.hang_limit = 1;
> > > > + sched_params.timeout = msecs_to_jiffies(500);
> > > > + sched_params.timeout_wq = pvr_dev->sched_wq;
> > > > + sched_params.score = NULL;
> > > > + sched_params.name = "pvr-queue";
> > > > + sched_params.dev = pvr_dev->base.dev;
> > > > +
> > > > + err = drm_sched_init(&queue->scheduler, &sched_params);
> > > > if (err)
> > > > goto err_release_ufo;
> > > > diff --git a/drivers/gpu/drm/lima/lima_sched.c
> > > > b/drivers/gpu/drm/lima/lima_sched.c
> > > > index b40c90e97d7e..a64c50fb6d1e 100644
> > > > --- a/drivers/gpu/drm/lima/lima_sched.c
> > > > +++ b/drivers/gpu/drm/lima/lima_sched.c
> > > > @@ -513,20 +513,29 @@ static void lima_sched_recover_work(struct
> > > > work_struct *work)
> > > > int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const char
> > > > *name)
> > > > {
> > > > + struct drm_sched_init_params params;
> > > > unsigned int timeout = lima_sched_timeout_ms > 0 ?
> > > > lima_sched_timeout_ms : 10000;
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > pipe->fence_context = dma_fence_context_alloc(1);
> > > > spin_lock_init(&pipe->fence_lock);
> > > > INIT_WORK(&pipe->recover_work, lima_sched_recover_work);
> > > > - return drm_sched_init(&pipe->base, &lima_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - 1,
> > > > - lima_job_hang_limit,
> > > > - msecs_to_jiffies(timeout), NULL,
> > > > - NULL, name, pipe->ldev->dev);
> > > > + params.ops = &lima_sched_ops;
> > > > + params.submit_wq = NULL; /* Use the system_wq. */
> > > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params.credit_limit = 1;
> > > > + params.hang_limit = lima_job_hang_limit;
> > > > + params.timeout = msecs_to_jiffies(timeout);
> > > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > > + params.score = NULL;
> > > > + params.name = name;
> > > > + params.dev = pipe->ldev->dev;
> > > > +
> > > > + return drm_sched_init(&pipe->base, &params);
> > > > }
> > > > void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
> > > > diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c
> > > > b/drivers/gpu/drm/msm/msm_ringbuffer.c
> > > > index c803556a8f64..49a2c7422dc6 100644
> > > > --- a/drivers/gpu/drm/msm/msm_ringbuffer.c
> > > > +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
> > > > @@ -59,11 +59,13 @@ static const struct drm_sched_backend_ops
> > > > msm_sched_ops = {
> > > > struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu,
> > > > int id,
> > > > void *memptrs, uint64_t memptrs_iova)
> > > > {
> > > > + struct drm_sched_init_params params;
> > > > struct msm_ringbuffer *ring;
> > > > - long sched_timeout;
> > > > char name[32];
> > > > int ret;
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > /* We assume everywhere that MSM_GPU_RINGBUFFER_SZ is a
> > > > power of 2 */
> > > > BUILD_BUG_ON(!is_power_of_2(MSM_GPU_RINGBUFFER_SZ));
> > > > @@ -95,13 +97,19 @@ struct msm_ringbuffer
> > > > *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
> > > > ring->memptrs = memptrs;
> > > > ring->memptrs_iova = memptrs_iova;
> > > > - /* currently managing hangcheck ourselves: */
> > > > - sched_timeout = MAX_SCHEDULE_TIMEOUT;
> > > > + params.ops = &msm_sched_ops;
> > > > + params.submit_wq = NULL; /* Use the system_wq. */
> > > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params.credit_limit = num_hw_submissions;
> > > > + params.hang_limit = 0;
> > > > + /* currently managing hangcheck ourselves: */
> > > > + params.timeout = MAX_SCHEDULE_TIMEOUT;
> > > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > > + params.score = NULL;
> > > > + params.name = to_msm_bo(ring->bo)->name;
> > > > + params.dev = gpu->dev->dev;
> > > > - ret = drm_sched_init(&ring->sched, &msm_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - num_hw_submissions, 0, sched_timeout,
> > > > - NULL, NULL, to_msm_bo(ring->bo)->name, gpu->dev->dev);
> > > > + ret = drm_sched_init(&ring->sched, &params);
> > > > if (ret) {
> > > > goto fail;
> > > > }
> > > > diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c
> > > > b/drivers/gpu/drm/nouveau/nouveau_sched.c
> > > > index 4412f2711fb5..f20c2e612750 100644
> > > > --- a/drivers/gpu/drm/nouveau/nouveau_sched.c
> > > > +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
> > > > @@ -404,9 +404,11 @@ nouveau_sched_init(struct nouveau_sched
> > > > *sched, struct nouveau_drm *drm,
> > > > {
> > > > struct drm_gpu_scheduler *drm_sched = &sched->base;
> > > > struct drm_sched_entity *entity = &sched->entity;
> > > > - const long timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
> > > > + struct drm_sched_init_params params;
> > > > int ret;
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > if (!wq) {
> > > > wq = alloc_workqueue("nouveau_sched_wq_%d", 0,
> > > > WQ_MAX_ACTIVE,
> > > > current->pid);
> > > > @@ -416,10 +418,18 @@ nouveau_sched_init(struct nouveau_sched
> > > > *sched, struct nouveau_drm *drm,
> > > > sched->wq = wq;
> > > > }
> > > > - ret = drm_sched_init(drm_sched, &nouveau_sched_ops, wq,
> > > > - NOUVEAU_SCHED_PRIORITY_COUNT,
> > > > - credit_limit, 0, timeout,
> > > > - NULL, NULL, "nouveau_sched", drm-
> > > > > dev->dev);
> > > > + params.ops = &nouveau_sched_ops;
> > > > + params.submit_wq = wq;
> > > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params.credit_limit = credit_limit;
> > > > + params.hang_limit = 0;
> > > > + params.timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
> > > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > > + params.score = NULL;
> > > > + params.name = "nouveau_sched";
> > > > + params.dev = drm->dev->dev;
> > > > +
> > > > + ret = drm_sched_init(drm_sched, &params);
> > > > if (ret)
> > > > goto fail_wq;
> > > > diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > b/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > index 9b8e82fb8bc4..6b509ff446b5 100644
> > > > --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> > > > @@ -836,10 +836,13 @@ static irqreturn_t
> > > > panfrost_job_irq_handler(int irq, void *data)
> > > > int panfrost_job_init(struct panfrost_device *pfdev)
> > > > {
> > > > + struct drm_sched_init_params params;
> > > > struct panfrost_job_slot *js;
> > > > unsigned int nentries = 2;
> > > > int ret, j;
> > > > + memset(&params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > /* All GPUs have two entries per queue, but without
> > > > jobchain
> > > > * disambiguation stopping the right job in the close path
> > > > is tricky,
> > > > * so let's just advertise one entry in that case.
> > > > @@ -872,16 +875,21 @@ int panfrost_job_init(struct panfrost_device
> > > > *pfdev)
> > > > if (!pfdev->reset.wq)
> > > > return -ENOMEM;
> > > > + params.ops = &panfrost_sched_ops;
> > > > + params.submit_wq = NULL; /* Use the system_wq. */
> > > > + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params.credit_limit = nentries;
> > > > + params.hang_limit = 0;
> > > > + params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> > > > + params.timeout_wq = pfdev->reset.wq;
> > > > + params.score = NULL;
> > > > + params.name = "pan_js";
> > > > + params.dev = pfdev->dev;
> > > > +
> > > > for (j = 0; j < NUM_JOB_SLOTS; j++) {
> > > > js->queue[j].fence_context =
> > > > dma_fence_context_alloc(1);
> > > > - ret = drm_sched_init(&js->queue[j].sched,
> > > > - &panfrost_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - nentries, 0,
> > > > - msecs_to_jiffies(JOB_TIMEOUT_MS),
> > > > - pfdev->reset.wq,
> > > > - NULL, "pan_js", pfdev->dev);
> > > > + ret = drm_sched_init(&js->queue[j].sched, &params);
> > > > if (ret) {
> > > > dev_err(pfdev->dev, "Failed to create
> > > > scheduler: %d.", ret);
> > > > goto err_sched;
> > > > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c
> > > > b/drivers/gpu/drm/panthor/panthor_mmu.c
> > > > index a49132f3778b..4362442cbfd8 100644
> > > > --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> > > > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> > > > @@ -2268,6 +2268,7 @@ panthor_vm_create(struct panthor_device
> > > > *ptdev, bool for_mcu,
> > > > u64 full_va_range = 1ull << va_bits;
> > > > struct drm_gem_object *dummy_gem;
> > > > struct drm_gpu_scheduler *sched;
> > > > + struct drm_sched_init_params sched_params;
> > > > struct io_pgtable_cfg pgtbl_cfg;
> > > > u64 mair, min_va, va_range;
> > > > struct panthor_vm *vm;
> > > > @@ -2284,6 +2285,8 @@ panthor_vm_create(struct panthor_device
> > > > *ptdev, bool for_mcu,
> > > > goto err_free_vm;
> > > > }
> > > > + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > mutex_init(&vm->heaps.lock);
> > > > vm->for_mcu = for_mcu;
> > > > vm->ptdev = ptdev;
> > > > @@ -2325,11 +2328,18 @@ panthor_vm_create(struct panthor_device
> > > > *ptdev, bool for_mcu,
> > > > goto err_mm_takedown;
> > > > }
> > > > + sched_params.ops = &panthor_vm_bind_ops;
> > > > + sched_params.submit_wq = ptdev->mmu->vm.wq;
> > > > + sched_params.num_rqs = 1;
> > > > + sched_params.credit_limit = 1;
> > > > + sched_params.hang_limit = 0;
> > > > /* Bind operations are synchronous for now, no timeout needed. */
> > > > - ret = drm_sched_init(&vm->sched, &panthor_vm_bind_ops, ptdev->mmu->vm.wq,
> > > > - 1, 1, 0,
> > > > - MAX_SCHEDULE_TIMEOUT, NULL, NULL,
> > > > - "panthor-vm-bind", ptdev->base.dev);
> > > > + sched_params.timeout = MAX_SCHEDULE_TIMEOUT;
> > > > + sched_params.timeout_wq = NULL; /* Use the system_wq. */
> > > > + sched_params.score = NULL;
> > > > + sched_params.name = "panthor-vm-bind";
> > > > + sched_params.dev = ptdev->base.dev;
> > > > + ret = drm_sched_init(&vm->sched, &sched_params);
> > > > if (ret)
> > > > goto err_free_io_pgtable;
> > > > diff --git a/drivers/gpu/drm/panthor/panthor_sched.c
> > > > b/drivers/gpu/drm/panthor/panthor_sched.c
> > > > index ef4bec7ff9c7..a324346d302f 100644
> > > > --- a/drivers/gpu/drm/panthor/panthor_sched.c
> > > > +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> > > > @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group
> > > > *group,
> > > > const struct drm_panthor_queue_create *args)
> > > > {
> > > > struct drm_gpu_scheduler *drm_sched;
> > > > + struct drm_sched_init_params sched_params;
> > > > struct panthor_queue *queue;
> > > > int ret;
> > > > @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group
> > > > *group,
> > > > if (!queue)
> > > > return ERR_PTR(-ENOMEM);
> > > > + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > queue->fence_ctx.id = dma_fence_context_alloc(1);
> > > > spin_lock_init(&queue->fence_ctx.lock);
> > > > INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
> > > > @@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group
> > > > *group,
> > > > if (ret)
> > > > goto err_free_queue;
> > > > + sched_params.ops = &panthor_queue_sched_ops;
> > > > + sched_params.submit_wq = group->ptdev->scheduler->wq;
> > > > + sched_params.num_rqs = 1;
> > > > /*
> > > > - * Credit limit argument tells us the total number of
> > > > instructions
> > > > + * The credit limit argument tells us the total number of
> > > > instructions
> > > > * across all CS slots in the ringbuffer, with some jobs
> > > > requiring
> > > > * twice as many as others, depending on their profiling
> > > > status.
> > > > */
> > > > - ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
> > > > - group->ptdev->scheduler->wq, 1,
> > > > - args->ringbuf_size / sizeof(u64),
> > > > - 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
> > > > - group->ptdev->reset.wq,
> > > > - NULL, "panthor-queue", group->ptdev->base.dev);
> > > > + sched_params.credit_limit = args->ringbuf_size / sizeof(u64);
> > > > + sched_params.hang_limit = 0;
> > > > + sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> > > > + sched_params.timeout_wq = group->ptdev->reset.wq;
> > > > + sched_params.score = NULL;
> > > > + sched_params.name = "panthor-queue";
> > > > + sched_params.dev = group->ptdev->base.dev;
> > > > +
> > > > + ret = drm_sched_init(&queue->scheduler, &sched_params);
> > > > if (ret)
> > > > goto err_free_queue;
> > > > diff --git a/drivers/gpu/drm/scheduler/sched_main.c
> > > > b/drivers/gpu/drm/scheduler/sched_main.c
> > > > index 57da84908752..27db748a5269 100644
> > > > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > > > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > > > @@ -1240,40 +1240,25 @@ static void drm_sched_run_job_work(struct
> > > > work_struct *w)
> > > > * drm_sched_init - Init a gpu scheduler instance
> > > > *
> > > > * @sched: scheduler instance
> > > > - * @ops: backend operations for this scheduler
> > > > - * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
> > > > - * allocated and used
> > > > - * @num_rqs: number of runqueues, one for each priority, up to DRM_SCHED_PRIORITY_COUNT
> > > > - * @credit_limit: the number of credits this scheduler can hold from all jobs
> > > > - * @hang_limit: number of times to allow a job to hang before dropping it
> > > > - * @timeout: timeout value in jiffies for the scheduler
> > > > - * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
> > > > - * used
> > > > - * @score: optional score atomic shared with other schedulers
> > > > - * @name: name used for debugging
> > > > - * @dev: target &struct device
> > > > + * @params: scheduler initialization parameters
> > > > *
> > > > * Return 0 on success, otherwise error code.
> > > > */
> > > > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > > - const struct drm_sched_backend_ops *ops,
> > > > - struct workqueue_struct *submit_wq,
> > > > - u32 num_rqs, u32 credit_limit, unsigned int
> > > > hang_limit,
> > > > - long timeout, struct workqueue_struct
> > > > *timeout_wq,
> > > > - atomic_t *score, const char *name, struct
> > > > device *dev)
> > > > + const struct drm_sched_init_params *params)
> > > > {
> > > > int i;
> > > > - sched->ops = ops;
> > > > - sched->credit_limit = credit_limit;
> > > > - sched->name = name;
> > > > - sched->timeout = timeout;
> > > > - sched->timeout_wq = timeout_wq ? : system_wq;
> > > > - sched->hang_limit = hang_limit;
> > > > - sched->score = score ? score : &sched->_score;
> > > > - sched->dev = dev;
> > > > + sched->ops = params->ops;
> > > > + sched->credit_limit = params->credit_limit;
> > > > + sched->name = params->name;
> > > > + sched->timeout = params->timeout;
> > > > + sched->timeout_wq = params->timeout_wq ? : system_wq;
> > > > + sched->hang_limit = params->hang_limit;
> > > > + sched->score = params->score ? params->score : &sched->_score;
> > > > + sched->dev = params->dev;
> > > > - if (num_rqs > DRM_SCHED_PRIORITY_COUNT) {
> > > > + if (params->num_rqs > DRM_SCHED_PRIORITY_COUNT) {
> > > > /* This is a gross violation--tell drivers what
> > > > the problem is.
> > > > */
> > > > drm_err(sched, "%s: num_rqs cannot be greater than
> > > > DRM_SCHED_PRIORITY_COUNT\n",
> > > > @@ -1288,16 +1273,16 @@ int drm_sched_init(struct drm_gpu_scheduler
> > > > *sched,
> > > > return 0;
> > > > }
> > > > - if (submit_wq) {
> > > > - sched->submit_wq = submit_wq;
> > > > + if (params->submit_wq) {
> > > > + sched->submit_wq = params->submit_wq;
> > > > sched->own_submit_wq = false;
> > > > } else {
> > > > #ifdef CONFIG_LOCKDEP
> > > > - sched->submit_wq =
> > > > alloc_ordered_workqueue_lockdep_map(name,
> > > > -
> > > > WQ_MEM_RECLAIM,
> > > > -
> > > > &drm_sched_lockdep_map);
> > > > + sched->submit_wq =
> > > > alloc_ordered_workqueue_lockdep_map(
> > > > + params->name,
> > > > WQ_MEM_RECLAIM,
> > > > + &drm_sched_lockdep_map);
> > > > #else
> > > > - sched->submit_wq = alloc_ordered_workqueue(name,
> > > > WQ_MEM_RECLAIM);
> > > > + sched->submit_wq = alloc_ordered_workqueue(params->name, WQ_MEM_RECLAIM);
> > > > #endif
> > > > if (!sched->submit_wq)
> > > > return -ENOMEM;
> > > > @@ -1305,11 +1290,11 @@ int drm_sched_init(struct drm_gpu_scheduler
> > > > *sched,
> > > > sched->own_submit_wq = true;
> > > > }
> > > > - sched->sched_rq = kmalloc_array(num_rqs, sizeof(*sched->sched_rq),
> > > > + sched->sched_rq = kmalloc_array(params->num_rqs, sizeof(*sched->sched_rq),
> > > > GFP_KERNEL | __GFP_ZERO);
> > > > if (!sched->sched_rq)
> > > > goto Out_check_own;
> > > > - sched->num_rqs = num_rqs;
> > > > + sched->num_rqs = params->num_rqs;
> > > > for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs;
> > > > i++) {
> > > > sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL);
> > > > if (!sched->sched_rq[i])
> > > > diff --git a/drivers/gpu/drm/v3d/v3d_sched.c
> > > > b/drivers/gpu/drm/v3d/v3d_sched.c
> > > > index 99ac4995b5a1..716e6d074d87 100644
> > > > --- a/drivers/gpu/drm/v3d/v3d_sched.c
> > > > +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> > > > @@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops
> > > > v3d_cpu_sched_ops = {
> > > > .free_job = v3d_cpu_job_free
> > > > };
> > > > +/*
> > > > + * v3d's scheduler instances are all identical, except for ops and
> > > > name.
> > > > + */
> > > > +static void
> > > > +v3d_common_sched_init(struct drm_sched_init_params *params, struct
> > > > device *dev)
> > > > +{
> > > > + memset(params, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > + params->submit_wq = NULL; /* Use an allocated ordered wq. */
> > > > + params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params->credit_limit = 1;
> > > > + params->hang_limit = 0;
> > > > + params->timeout = msecs_to_jiffies(500);
> > > > + params->timeout_wq = NULL; /* Use the system_wq. */
> > > > + params->score = NULL;
> > > > + params->dev = dev;
> > > > +}
> > > > +
> > > > +static int
> > > > +v3d_bin_sched_init(struct v3d_dev *v3d)
> > > > +{
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > + v3d_common_sched_init(¶ms, v3d->drm.dev);
> > > > + params.ops = &v3d_bin_sched_ops;
> > > > + params.name = "v3d_bin";
> > > > +
> > > > + return drm_sched_init(&v3d->queue[V3D_BIN].sched,
> > > > ¶ms);
> > > > +}
> > > > +
> > > > +static int
> > > > +v3d_render_sched_init(struct v3d_dev *v3d)
> > > > +{
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > + v3d_common_sched_init(¶ms, v3d->drm.dev);
> > > > + params.ops = &v3d_render_sched_ops;
> > > > + params.name = "v3d_render";
> > > > +
> > > > + return drm_sched_init(&v3d->queue[V3D_RENDER].sched,
> > > > ¶ms);
> > > > +}
> > > > +
> > > > +static int
> > > > +v3d_tfu_sched_init(struct v3d_dev *v3d)
> > > > +{
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > + v3d_common_sched_init(¶ms, v3d->drm.dev);
> > > > + params.ops = &v3d_tfu_sched_ops;
> > > > + params.name = "v3d_tfu";
> > > > +
> > > > + return drm_sched_init(&v3d->queue[V3D_TFU].sched,
> > > > ¶ms);
> > > > +}
> > > > +
> > > > +static int
> > > > +v3d_csd_sched_init(struct v3d_dev *v3d)
> > > > +{
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > + v3d_common_sched_init(¶ms, v3d->drm.dev);
> > > > + params.ops = &v3d_csd_sched_ops;
> > > > + params.name = "v3d_csd";
> > > > +
> > > > + return drm_sched_init(&v3d->queue[V3D_CSD].sched,
> > > > ¶ms);
> > > > +}
> > > > +
> > > > +static int
> > > > +v3d_cache_sched_init(struct v3d_dev *v3d)
> > > > +{
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > + v3d_common_sched_init(¶ms, v3d->drm.dev);
> > > > + params.ops = &v3d_cache_clean_sched_ops;
> > > > + params.name = "v3d_cache_clean";
> > > > +
> > > > + return drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
> > > > ¶ms);
> > > > +}
> > > > +
> > > > +static int
> > > > +v3d_cpu_sched_init(struct v3d_dev *v3d)
> > > > +{
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > + v3d_common_sched_init(¶ms, v3d->drm.dev);
> > > > + params.ops = &v3d_cpu_sched_ops;
> > > > + params.name = "v3d_cpu";
> > > > +
> > > > + return drm_sched_init(&v3d->queue[V3D_CPU].sched,
> > > > ¶ms);
> > > > +}
> > > > +
> > > > int
> > > > v3d_sched_init(struct v3d_dev *v3d)
> > > > {
> > > > - int hw_jobs_limit = 1;
> > > > - int job_hang_limit = 0;
> > > > - int hang_limit_ms = 500;
> > > > int ret;
> > > > - ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
> > > > - &v3d_bin_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - hw_jobs_limit, job_hang_limit,
> > > > - msecs_to_jiffies(hang_limit_ms),
> > > > NULL,
> > > > - NULL, "v3d_bin", v3d->drm.dev);
> > > > + ret = v3d_bin_sched_init(v3d);
> > > > if (ret)
> > > > return ret;
> > > > - ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
> > > > - &v3d_render_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - hw_jobs_limit, job_hang_limit,
> > > > - msecs_to_jiffies(hang_limit_ms),
> > > > NULL,
> > > > - NULL, "v3d_render", v3d->drm.dev);
> > > > + ret = v3d_render_sched_init(v3d);
> > > > if (ret)
> > > > goto fail;
> > > > - ret = drm_sched_init(&v3d->queue[V3D_TFU].sched,
> > > > - &v3d_tfu_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - hw_jobs_limit, job_hang_limit,
> > > > - msecs_to_jiffies(hang_limit_ms),
> > > > NULL,
> > > > - NULL, "v3d_tfu", v3d->drm.dev);
> > > > + ret = v3d_tfu_sched_init(v3d);
> > > > if (ret)
> > > > goto fail;
> > > > if (v3d_has_csd(v3d)) {
> > > > - ret = drm_sched_init(&v3d->queue[V3D_CSD].sched,
> > > > - &v3d_csd_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - hw_jobs_limit,
> > > > job_hang_limit,
> > > > -
> > > > msecs_to_jiffies(hang_limit_ms), NULL,
> > > > - NULL, "v3d_csd", v3d->drm.dev);
> > > > + ret = v3d_csd_sched_init(v3d);
> > > > if (ret)
> > > > goto fail;
> > > > - ret = drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
> > > > - &v3d_cache_clean_sched_ops,
> > > > NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - hw_jobs_limit,
> > > > job_hang_limit,
> > > > -
> > > > msecs_to_jiffies(hang_limit_ms), NULL,
> > > > - NULL, "v3d_cache_clean", v3d->drm.dev);
> > > > + ret = v3d_cache_sched_init(v3d);
> > > > if (ret)
> > > > goto fail;
> > > > }
> > > > - ret = drm_sched_init(&v3d->queue[V3D_CPU].sched,
> > > > - &v3d_cpu_sched_ops, NULL,
> > > > - DRM_SCHED_PRIORITY_COUNT,
> > > > - 1, job_hang_limit,
> > > > - msecs_to_jiffies(hang_limit_ms),
> > > > NULL,
> > > > - NULL, "v3d_cpu", v3d->drm.dev);
> > > > + ret = v3d_cpu_sched_init(v3d);
> > > > if (ret)
> > > > goto fail;
> > > > diff --git a/drivers/gpu/drm/xe/xe_execlist.c
> > > > b/drivers/gpu/drm/xe/xe_execlist.c
> > > > index a8c416a48812..7f29b7f04af4 100644
> > > > --- a/drivers/gpu/drm/xe/xe_execlist.c
> > > > +++ b/drivers/gpu/drm/xe/xe_execlist.c
> > > > @@ -332,10 +332,13 @@ static const struct drm_sched_backend_ops
> > > > drm_sched_ops = {
> > > > static int execlist_exec_queue_init(struct xe_exec_queue *q)
> > > > {
> > > > struct drm_gpu_scheduler *sched;
> > > > + struct drm_sched_init_params params;
> > > > struct xe_execlist_exec_queue *exl;
> > > > struct xe_device *xe = gt_to_xe(q->gt);
> > > > int err;
> > > > + memset(¶ms, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > xe_assert(xe, !xe_device_uc_enabled(xe));
> > > > drm_info(&xe->drm, "Enabling execlist submission (GuC
> > > > submission disabled)\n");
> > > > @@ -346,11 +349,18 @@ static int execlist_exec_queue_init(struct
> > > > xe_exec_queue *q)
> > > > exl->q = q;
> > > > - err = drm_sched_init(&exl->sched, &drm_sched_ops, NULL, 1,
> > > > - q->lrc[0]->ring.size /
> > > > MAX_JOB_SIZE_BYTES,
> > > > - XE_SCHED_HANG_LIMIT,
> > > > XE_SCHED_JOB_TIMEOUT,
> > > > - NULL, NULL, q->hwe->name,
> > > > - gt_to_xe(q->gt)->drm.dev);
> > > > + params.ops = &drm_sched_ops;
> > > > + params.submit_wq = NULL; /* Use an allocated ordered wq. */
> > > > + params.num_rqs = 1;
> > > > + params.credit_limit = q->lrc[0]->ring.size /
> > > > MAX_JOB_SIZE_BYTES;
> > > > + params.hang_limit = XE_SCHED_HANG_LIMIT;
> > > > + params.timeout = XE_SCHED_JOB_TIMEOUT;
> > > > + params.timeout_wq = NULL; /* Use the system_wq. */
> > > > + params.score = NULL;
> > > > + params.name = q->hwe->name;
> > > > + params.dev = gt_to_xe(q->gt)->drm.dev;
> > > > +
> > > > + err = drm_sched_init(&exl->sched, ¶ms);
> > > > if (err)
> > > > goto err_free;
> > > > diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > > > b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > > > index 50361b4638f9..2129fee83f25 100644
> > > > --- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > > > +++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
> > > > @@ -63,13 +63,26 @@ int xe_sched_init(struct xe_gpu_scheduler
> > > > *sched,
> > > > atomic_t *score, const char *name,
> > > > struct device *dev)
> > > > {
> > > > + struct drm_sched_init_params params;
> > > > +
> > > > sched->ops = xe_ops;
> > > > INIT_LIST_HEAD(&sched->msgs);
> > > > INIT_WORK(&sched->work_process_msg,
> > > > xe_sched_process_msg_work);
> > > > - return drm_sched_init(&sched->base, ops, submit_wq, 1,
> > > > hw_submission,
> > > > - hang_limit, timeout, timeout_wq,
> > > > score, name,
> > > > - dev);
> > > > + memset(¶ms, 0, sizeof(struct drm_sched_init_params));
> > > > +
> > > > + params.ops = ops;
> > > > + params.submit_wq = submit_wq;
> > > > + params.num_rqs = 1;
> > > > + params.credit_limit = hw_submission;
> > > > + params.hang_limit = hang_limit;
> > > > + params.timeout = timeout;
> > > > + params.timeout_wq = timeout_wq;
> > > > + params.score = score;
> > > > + params.name = name;
> > > > + params.dev = dev;
> > > > +
> > > > + return drm_sched_init(&sched->base, ¶ms);
> > > > }
> > > > void xe_sched_fini(struct xe_gpu_scheduler *sched)
> > > > diff --git a/include/drm/gpu_scheduler.h
> > > > b/include/drm/gpu_scheduler.h
> > > > index 95e17504e46a..1a834ef43862 100644
> > > > --- a/include/drm/gpu_scheduler.h
> > > > +++ b/include/drm/gpu_scheduler.h
> > > > @@ -553,12 +553,37 @@ struct drm_gpu_scheduler {
> > > > struct device *dev;
> > > > };
> > > > +/**
> > > > + * struct drm_sched_init_params - parameters for initializing a
> > > > DRM GPU scheduler
> > > > + *
> > > > + * @ops: backend operations provided by the driver
> > > > + * @submit_wq: workqueue to use for submission. If NULL, an
> > > > ordered wq is
> > > > + * allocated and used
> > > > + * @num_rqs: Number of run-queues. This is at most
> > > > DRM_SCHED_PRIORITY_COUNT,
> > > > + * as there's usually one run-queue per priority, but
> > > > could be less.
> > > > + * @credit_limit: the number of credits this scheduler can hold
> > > > from all jobs
> > > > + * @hang_limit: number of times to allow a job to hang before
> > > > dropping it
> > > > + * @timeout: timeout value in jiffies for the scheduler
> > > > + * @timeout_wq: workqueue to use for timeout work. If NULL, the
> > > > system_wq is
> > > > + * used
> > > > + * @score: optional score atomic shared with other schedulers
> > > > + * @name: name used for debugging
> > > > + * @dev: associated device. Used for debugging
> > > > + */
> > > > +struct drm_sched_init_params {
> > > > + const struct drm_sched_backend_ops *ops;
> > > > + struct workqueue_struct *submit_wq;
> > > > + struct workqueue_struct *timeout_wq;
> > > > + u32 num_rqs, credit_limit;
> > > > + unsigned int hang_limit;
> > > > + long timeout;
> > > > + atomic_t *score;
> > > > + const char *name;
> > > > + struct device *dev;
> > > > +};
> > > > +
> > > > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > > - const struct drm_sched_backend_ops *ops,
> > > > - struct workqueue_struct *submit_wq,
> > > > - u32 num_rqs, u32 credit_limit, unsigned int
> > > > hang_limit,
> > > > - long timeout, struct workqueue_struct
> > > > *timeout_wq,
> > > > - atomic_t *score, const char *name, struct
> > > > device *dev);
> > > > + const struct drm_sched_init_params *params);
> > > > void drm_sched_fini(struct drm_gpu_scheduler *sched);
> > > > int drm_sched_job_init(struct drm_sched_job *job,
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 15:23 ` Philipp Stanner
@ 2025-01-22 15:37 ` Christian König
0 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2025-01-22 15:37 UTC (permalink / raw)
To: Philipp Stanner, Philipp Stanner, Alex Deucher, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Danilo Krummrich, Boris Brezillon,
Rob Herring, Steven Price, Liviu Dudau, Luben Tuikov,
Matthew Brost, Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li
Cc: amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
Am 22.01.25 um 16:23 schrieb Philipp Stanner:
> On Wed, 2025-01-22 at 16:06 +0100, Christian König wrote:
>> Am 22.01.25 um 15:48 schrieb Philipp Stanner:
>>> On Wed, 2025-01-22 at 15:34 +0100, Christian König wrote:
>>>> Am 22.01.25 um 15:08 schrieb Philipp Stanner:
>>>>> drm_sched_init() has a great many parameters and upcoming new
>>>>> functionality for the scheduler might add even more. Generally,
>>>>> the
>>>>> great number of parameters reduces readability and has already
>>>>> caused
>>>>> one misnaming in:
>>>>>
>>>>> commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in
>>>>> nouveau_sched_init()").
>>>>>
>>>>> Introduce a new struct for the scheduler init parameters and
>>>>> port
>>>>> all
>>>>> users.
>>>>>
>>>>> Signed-off-by: Philipp Stanner <phasta@kernel.org>
>>>>> ---
>>>>> Howdy,
>>>>>
>>>>> I have a patch-series in the pipe that will add a `flags`
>>>>> argument
>>>>> to
>>>>> drm_sched_init(). I thought it would be wise to first rework
>>>>> the
>>>>> API as
>>>>> detailed in this patch. It's really a lot of parameters by now,
>>>>> and
>>>>> I
>>>>> would expect that it might get more and more over the years for
>>>>> special
>>>>> use cases etc.
>>>>>
>>>>> Regards,
>>>>> P.
>>>>> ---
>>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +++-
>>>>> drivers/gpu/drm/etnaviv/etnaviv_sched.c | 20 ++-
>>>>> drivers/gpu/drm/imagination/pvr_queue.c | 21 +++-
>>>>> drivers/gpu/drm/lima/lima_sched.c | 21 +++-
>>>>> drivers/gpu/drm/msm/msm_ringbuffer.c | 22 ++--
>>>>> drivers/gpu/drm/nouveau/nouveau_sched.c | 20 ++-
>>>>> drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++--
>>>>> drivers/gpu/drm/panthor/panthor_mmu.c | 18 ++-
>>>>> drivers/gpu/drm/panthor/panthor_sched.c | 23 ++--
>>>>> drivers/gpu/drm/scheduler/sched_main.c | 53 +++-----
>>>>> drivers/gpu/drm/v3d/v3d_sched.c | 135
>>>>> +++++++++++++++-
>>>>> -----
>>>>> drivers/gpu/drm/xe/xe_execlist.c | 20 ++-
>>>>> drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++-
>>>>> include/drm/gpu_scheduler.h | 35 +++++-
>>>>> 14 files changed, 311 insertions(+), 139 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>>>> index cd4fac120834..c1f03eb5f5ea 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>>>> @@ -2821,6 +2821,9 @@ static int
>>>>> amdgpu_device_init_schedulers(struct amdgpu_device *adev)
>>>>> {
>>>>> long timeout;
>>>>> int r, i;
>>>>> + struct drm_sched_init_params params;
>>>> Please keep the reverse xmas tree ordering for variable
>>>> declaration.
>>>> E.g. long lines first and variables like "i" and "r" last.
>>> Okay dokay
>>>
>>>> Apart from that looks like a good idea to me.
>>>>
>>>>
>>>>> +
>>>>> + memset(¶ms, 0, sizeof(struct
>>>>> drm_sched_init_params));
>>>>>
>>>>> for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
>>>>> struct amdgpu_ring *ring = adev->rings[i];
>>>>> @@ -2844,12 +2847,18 @@ static int
>>>>> amdgpu_device_init_schedulers(struct amdgpu_device *adev)
>>>>> break;
>>>>> }
>>>>>
>>>>> - r = drm_sched_init(&ring->sched,
>>>>> &amdgpu_sched_ops, NULL,
>>>>> - DRM_SCHED_PRIORITY_COUNT,
>>>>> - ring->num_hw_submission, 0,
>>>>> - timeout, adev->reset_domain->wq,
>>>>> - ring->sched_score, ring->name,
>>>>> - adev->dev);
>>>>> + params.ops = &amdgpu_sched_ops;
>>>>> + params.submit_wq = NULL; /* Use an allocated ordered wq. */
>>>>> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>>>> + params.credit_limit = ring->num_hw_submission;
>>>>> + params.hang_limit = 0;
>>>> Could we please remove the hang limit as first step to get this
>>>> awful
>>>> feature deprecated?
>>> Remove it? From the struct you mean?
>>>
>>> We can mark it as deprecated in the docstring of the new struct.
>>> That's
>>> what you mean, don't you?
>> No, the function using this parameter already deprecated. What I
>> meant
>> is to start to completely remove this feature.
>>
>> The hang_limit basically says how often the scheduler should try to
>> run
>> a job over and over again before giving up.
> Agreed, it should be removed.
>
> But let me do that in a separate patch after this one is merged, and
> just hint at the deprecation in the arg in the struct for now; it's
> kind of unrelated to the init()-rework I'm doing here, ack?
Works for me.
Regards,
Christian.
>
>> And we already agreed that trying the same thing over and over again
>> and
>> expecting different results is the definition of insanity :)
> I'll quote you (and Einstein) with that if I ever give a presentation
> about the scheduler ;p
>
> P.
>
>> So my suggestion is to drop the parameter and drop the job as soon as
>> it
>> caused a timeout.
>>
>> Regards,
>> Christian.
>>
>>> P.
>>>
>>>> Thanks,
>>>> Christian.
>>>>
>>>>> + params.timeout = timeout;
>>>>> + params.timeout_wq = adev->reset_domain->wq;
>>>>> + params.score = ring->sched_score;
>>>>> + params.name = ring->name;
>>>>> + params.dev = adev->dev;
>>>>> +
>>>>> + r = drm_sched_init(&ring->sched, ¶ms);
>>>>> if (r) {
>>>>> DRM_ERROR("Failed to create scheduler
>>>>> on
>>>>> ring %s.\n",
>>>>> ring->name);
>>>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>>>> b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>>>> index 5b67eda122db..7d8517f1963e 100644
>>>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>>>> @@ -145,12 +145,22 @@ int etnaviv_sched_push_job(struct
>>>>> etnaviv_gem_submit *submit)
>>>>> int etnaviv_sched_init(struct etnaviv_gpu *gpu)
>>>>> {
>>>>> int ret;
>>>>> + struct drm_sched_init_params params;
>>>>>
>>>>> - ret = drm_sched_init(&gpu->sched, &etnaviv_sched_ops,
>>>>> NULL,
>>>>> - DRM_SCHED_PRIORITY_COUNT,
>>>>> - etnaviv_hw_jobs_limit,
>>>>> etnaviv_job_hang_limit,
>>>>> - msecs_to_jiffies(500), NULL,
>>>>> NULL,
>>>>> - dev_name(gpu->dev), gpu->dev);
>>>>> + memset(¶ms, 0, sizeof(struct
>>>>> drm_sched_init_params));
>>>>> +
>>>>> + params.ops = &etnaviv_sched_ops;
>>>>> + params.submit_wq = NULL; /* Use an allocated ordered wq. */
>>>>> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>>>> + params.credit_limit = etnaviv_hw_jobs_limit;
>>>>> + params.hang_limit = etnaviv_job_hang_limit;
>>>>> + params.timeout = msecs_to_jiffies(500);
>>>>> + params.timeout_wq = NULL; /* Use the system_wq. */
>>>>> + params.score = NULL;
>>>>> + params.name = dev_name(gpu->dev);
>>>>> + params.dev = gpu->dev;
>>>>> +
>>>>> + ret = drm_sched_init(&gpu->sched, ¶ms);
>>>>> if (ret)
>>>>> return ret;
>>>>>
>>>>> diff --git a/drivers/gpu/drm/imagination/pvr_queue.c
>>>>> b/drivers/gpu/drm/imagination/pvr_queue.c
>>>>> index c4f08432882b..03a2ce1a88e7 100644
>>>>> --- a/drivers/gpu/drm/imagination/pvr_queue.c
>>>>> +++ b/drivers/gpu/drm/imagination/pvr_queue.c
>>>>> @@ -1211,10 +1211,13 @@ struct pvr_queue
>>>>> *pvr_queue_create(struct
>>>>> pvr_context *ctx,
>>>>> };
>>>>> struct pvr_device *pvr_dev = ctx->pvr_dev;
>>>>> struct drm_gpu_scheduler *sched;
>>>>> + struct drm_sched_init_params sched_params;
>>>>> struct pvr_queue *queue;
>>>>> int ctx_state_size, err;
>>>>> void *cpu_map;
>>>>>
>>>>> + memset(&sched_params, 0, sizeof(struct
>>>>> drm_sched_init_params));
>>>>> +
>>>>> if (WARN_ON(type >= sizeof(props)))
>>>>> return ERR_PTR(-EINVAL);
>>>>>
>>>>> @@ -1282,12 +1285,18 @@ struct pvr_queue
>>>>> *pvr_queue_create(struct
>>>>> pvr_context *ctx,
>>>>>
>>>>> queue->timeline_ufo.value = cpu_map;
>>>>>
>>>>> - err = drm_sched_init(&queue->scheduler,
>>>>> - &pvr_queue_sched_ops,
>>>>> - pvr_dev->sched_wq, 1, 64 * 1024,
>>>>> 1,
>>>>> - msecs_to_jiffies(500),
>>>>> - pvr_dev->sched_wq, NULL, "pvr-queue",
>>>>> - pvr_dev->base.dev);
>>>>> + sched_params.ops = &pvr_queue_sched_ops;
>>>>> + sched_params.submit_wq = pvr_dev->sched_wq;
>>>>> + sched_params.num_rqs = 1;
>>>>> + sched_params.credit_limit = 64 * 1024;
>>>>> + sched_params.hang_limit = 1;
>>>>> + sched_params.timeout = msecs_to_jiffies(500);
>>>>> + sched_params.timeout_wq = pvr_dev->sched_wq;
>>>>> + sched_params.score = NULL;
>>>>> + sched_params.name = "pvr-queue";
>>>>> + sched_params.dev = pvr_dev->base.dev;
>>>>> +
>>>>> + err = drm_sched_init(&queue->scheduler,
>>>>> &sched_params);
>>>>> if (err)
>>>>> goto err_release_ufo;
>>>>>
>>>>> diff --git a/drivers/gpu/drm/lima/lima_sched.c
>>>>> b/drivers/gpu/drm/lima/lima_sched.c
>>>>> index b40c90e97d7e..a64c50fb6d1e 100644
>>>>> --- a/drivers/gpu/drm/lima/lima_sched.c
>>>>> +++ b/drivers/gpu/drm/lima/lima_sched.c
>>>>> @@ -513,20 +513,29 @@ static void
>>>>> lima_sched_recover_work(struct
>>>>> work_struct *work)
>>>>>
>>>>> int lima_sched_pipe_init(struct lima_sched_pipe *pipe, const
>>>>> char
>>>>> *name)
>>>>> {
>>>>> + struct drm_sched_init_params params;
>>>>> unsigned int timeout = lima_sched_timeout_ms > 0 ?
>>>>> lima_sched_timeout_ms : 10000;
>>>>>
>>>>> + memset(¶ms, 0, sizeof(struct
>>>>> drm_sched_init_params));
>>>>> +
>>>>> pipe->fence_context = dma_fence_context_alloc(1);
>>>>> spin_lock_init(&pipe->fence_lock);
>>>>>
>>>>> INIT_WORK(&pipe->recover_work,
>>>>> lima_sched_recover_work);
>>>>>
>>>>> - return drm_sched_init(&pipe->base, &lima_sched_ops,
>>>>> NULL,
>>>>> - DRM_SCHED_PRIORITY_COUNT,
>>>>> - 1,
>>>>> - lima_job_hang_limit,
>>>>> - msecs_to_jiffies(timeout), NULL,
>>>>> - NULL, name, pipe->ldev->dev);
>>>>> + params.ops = &lima_sched_ops;
>>>>> + params.submit_wq = NULL; /* Use an allocated ordered wq. */
>>>>> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>>>> + params.credit_limit = 1;
>>>>> + params.hang_limit = lima_job_hang_limit;
>>>>> + params.timeout = msecs_to_jiffies(timeout);
>>>>> + params.timeout_wq = NULL; /* Use the system_wq. */
>>>>> + params.score = NULL;
>>>>> + params.name = name;
>>>>> + params.dev = pipe->ldev->dev;
>>>>> +
>>>>> + return drm_sched_init(&pipe->base, ¶ms);
>>>>> }
>>>>>
>>>>> void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
>>>>> diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c
>>>>> b/drivers/gpu/drm/msm/msm_ringbuffer.c
>>>>> index c803556a8f64..49a2c7422dc6 100644
>>>>> --- a/drivers/gpu/drm/msm/msm_ringbuffer.c
>>>>> +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
>>>>> @@ -59,11 +59,13 @@ static const struct drm_sched_backend_ops
>>>>> msm_sched_ops = {
>>>>> struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu
>>>>> *gpu,
>>>>> int id,
>>>>> void *memptrs, uint64_t memptrs_iova)
>>>>> {
>>>>> + struct drm_sched_init_params params;
>>>>> struct msm_ringbuffer *ring;
>>>>> - long sched_timeout;
>>>>> char name[32];
>>>>> int ret;
>>>>>
>>>>> + memset(¶ms, 0, sizeof(struct
>>>>> drm_sched_init_params));
>>>>> +
>>>>> /* We assume everywhere that MSM_GPU_RINGBUFFER_SZ is a power of 2 */
>>>>> BUILD_BUG_ON(!is_power_of_2(MSM_GPU_RINGBUFFER_SZ));
>>>>>
>>>>> @@ -95,13 +97,19 @@ struct msm_ringbuffer
>>>>> *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
>>>>> ring->memptrs = memptrs;
>>>>> ring->memptrs_iova = memptrs_iova;
>>>>>
>>>>> - /* currently managing hangcheck ourselves: */
>>>>> - sched_timeout = MAX_SCHEDULE_TIMEOUT;
>>>>> + params.ops = &msm_sched_ops;
>>>>> + params.submit_wq = NULL; /* Use an allocated ordered wq. */
>>>>> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>>>> + params.credit_limit = num_hw_submissions;
>>>>> + params.hang_limit = 0;
>>>>> + /* currently managing hangcheck ourselves: */
>>>>> + params.timeout = MAX_SCHEDULE_TIMEOUT;
>>>>> + params.timeout_wq = NULL; /* Use the system_wq. */
>>>>> + params.score = NULL;
>>>>> + params.name = to_msm_bo(ring->bo)->name;
>>>>> + params.dev = gpu->dev->dev;
>>>>>
>>>>> - ret = drm_sched_init(&ring->sched, &msm_sched_ops,
>>>>> NULL,
>>>>> - DRM_SCHED_PRIORITY_COUNT,
>>>>> - num_hw_submissions, 0,
>>>>> sched_timeout,
>>>>> - NULL, NULL, to_msm_bo(ring->bo)->name, gpu->dev->dev);
>>>>> + ret = drm_sched_init(&ring->sched, ¶ms);
>>>>> if (ret) {
>>>>> goto fail;
>>>>> }
>>>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c
>>>>> b/drivers/gpu/drm/nouveau/nouveau_sched.c
>>>>> index 4412f2711fb5..f20c2e612750 100644
>>>>> --- a/drivers/gpu/drm/nouveau/nouveau_sched.c
>>>>> +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
>>>>> @@ -404,9 +404,11 @@ nouveau_sched_init(struct nouveau_sched
>>>>> *sched, struct nouveau_drm *drm,
>>>>> {
>>>>> struct drm_gpu_scheduler *drm_sched = &sched->base;
>>>>> struct drm_sched_entity *entity = &sched->entity;
>>>>> - const long timeout =
>>>>> msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
>>>>> + struct drm_sched_init_params params;
>>>>> int ret;
>>>>>
>>>>> + memset(¶ms, 0, sizeof(struct
>>>>> drm_sched_init_params));
>>>>> +
>>>>> if (!wq) {
>>>>> wq = alloc_workqueue("nouveau_sched_wq_%d", 0,
>>>>> WQ_MAX_ACTIVE,
>>>>> current->pid);
>>>>> @@ -416,10 +418,18 @@ nouveau_sched_init(struct nouveau_sched
>>>>> *sched, struct nouveau_drm *drm,
>>>>> sched->wq = wq;
>>>>> }
>>>>>
>>>>> - ret = drm_sched_init(drm_sched, &nouveau_sched_ops,
>>>>> wq,
>>>>> - NOUVEAU_SCHED_PRIORITY_COUNT,
>>>>> - credit_limit, 0, timeout,
>>>>> - NULL, NULL, "nouveau_sched", drm->dev->dev);
>>>>> + params.ops = &nouveau_sched_ops;
>>>>> + params.submit_wq = wq;
>>>>> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>>>> + params.credit_limit = credit_limit;
>>>>> + params.hang_limit = 0;
>>>>> + params.timeout =
>>>>> msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
>>>>> + params.timeout_wq = NULL; /* Use the system_wq. */
>>>>> + params.score = NULL;
>>>>> + params.name = "nouveau_sched";
>>>>> + params.dev = drm->dev->dev;
>>>>> +
>>>>> + ret = drm_sched_init(drm_sched, ¶ms);
>>>>> if (ret)
>>>>> goto fail_wq;
>>>>>
>>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>> b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>> index 9b8e82fb8bc4..6b509ff446b5 100644
>>>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>> @@ -836,10 +836,13 @@ static irqreturn_t
>>>>> panfrost_job_irq_handler(int irq, void *data)
>>>>>
>>>>> int panfrost_job_init(struct panfrost_device *pfdev)
>>>>> {
>>>>> + struct drm_sched_init_params params;
>>>>> struct panfrost_job_slot *js;
>>>>> unsigned int nentries = 2;
>>>>> int ret, j;
>>>>>
>>>>> + memset(¶ms, 0, sizeof(struct
>>>>> drm_sched_init_params));
>>>>> +
>>>>> /* All GPUs have two entries per queue, but without
>>>>> jobchain
>>>>> * disambiguation stopping the right job in the close
>>>>> path
>>>>> is tricky,
>>>>> * so let's just advertise one entry in that case.
>>>>> @@ -872,16 +875,21 @@ int panfrost_job_init(struct
>>>>> panfrost_device
>>>>> *pfdev)
>>>>> if (!pfdev->reset.wq)
>>>>> return -ENOMEM;
>>>>>
>>>>> + params.ops = &panfrost_sched_ops;
>>>>> + params.submit_wq = NULL; /* Use an allocated ordered wq. */
>>>>> + params.num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>>>> + params.credit_limit = nentries;
>>>>> + params.hang_limit = 0;
>>>>> + params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
>>>>> + params.timeout_wq = pfdev->reset.wq;
>>>>> + params.score = NULL;
>>>>> + params.name = "pan_js";
>>>>> + params.dev = pfdev->dev;
>>>>> +
>>>>> for (j = 0; j < NUM_JOB_SLOTS; j++) {
>>>>> js->queue[j].fence_context =
>>>>> dma_fence_context_alloc(1);
>>>>>
>>>>> - ret = drm_sched_init(&js->queue[j].sched,
>>>>> - &panfrost_sched_ops,
>>>>> NULL,
>>>>> - DRM_SCHED_PRIORITY_COUNT,
>>>>> - nentries, 0,
>>>>> -
>>>>> msecs_to_jiffies(JOB_TIMEOUT_MS),
>>>>> - pfdev->reset.wq,
>>>>> - NULL, "pan_js", pfdev->dev);
>>>>> + ret = drm_sched_init(&js->queue[j].sched,
>>>>> ¶ms);
>>>>> if (ret) {
>>>>> dev_err(pfdev->dev, "Failed to create
>>>>> scheduler: %d.", ret);
>>>>> goto err_sched;
>>>>> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c
>>>>> b/drivers/gpu/drm/panthor/panthor_mmu.c
>>>>> index a49132f3778b..4362442cbfd8 100644
>>>>> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
>>>>> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
>>>>> @@ -2268,6 +2268,7 @@ panthor_vm_create(struct panthor_device
>>>>> *ptdev, bool for_mcu,
>>>>> u64 full_va_range = 1ull << va_bits;
>>>>> struct drm_gem_object *dummy_gem;
>>>>> struct drm_gpu_scheduler *sched;
>>>>> + struct drm_sched_init_params sched_params;
>>>>> struct io_pgtable_cfg pgtbl_cfg;
>>>>> u64 mair, min_va, va_range;
>>>>> struct panthor_vm *vm;
>>>>> @@ -2284,6 +2285,8 @@ panthor_vm_create(struct panthor_device
>>>>> *ptdev, bool for_mcu,
>>>>> goto err_free_vm;
>>>>> }
>>>>>
>>>>> + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
>>>>> +
>>>>> mutex_init(&vm->heaps.lock);
>>>>> vm->for_mcu = for_mcu;
>>>>> vm->ptdev = ptdev;
>>>>> @@ -2325,11 +2328,18 @@ panthor_vm_create(struct panthor_device
>>>>> *ptdev, bool for_mcu,
>>>>> goto err_mm_takedown;
>>>>> }
>>>>>
>>>>> + sched_params.ops = &panthor_vm_bind_ops;
>>>>> + sched_params.submit_wq = ptdev->mmu->vm.wq;
>>>>> + sched_params.num_rqs = 1;
>>>>> + sched_params.credit_limit = 1;
>>>>> + sched_params.hang_limit = 0;
>>>>> /* Bind operations are synchronous for now, no timeout needed. */
>>>>> - ret = drm_sched_init(&vm->sched, &panthor_vm_bind_ops, ptdev->mmu->vm.wq,
>>>>> - 1, 1, 0,
>>>>> - MAX_SCHEDULE_TIMEOUT, NULL, NULL,
>>>>> - "panthor-vm-bind", ptdev->base.dev);
>>>>> + sched_params.timeout = MAX_SCHEDULE_TIMEOUT;
>>>>> + sched_params.timeout_wq = NULL; /* Use the system_wq. */
>>>>> + sched_params.score = NULL;
>>>>> + sched_params.name = "panthor-vm-bind";
>>>>> + sched_params.dev = ptdev->base.dev;
>>>>> + ret = drm_sched_init(&vm->sched, &sched_params);
>>>>> if (ret)
>>>>> goto err_free_io_pgtable;
>>>>>
>>>>> diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
>>>>> index ef4bec7ff9c7..a324346d302f 100644
>>>>> --- a/drivers/gpu/drm/panthor/panthor_sched.c
>>>>> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
>>>>> @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group *group,
>>>>> const struct drm_panthor_queue_create *args)
>>>>> {
>>>>> struct drm_gpu_scheduler *drm_sched;
>>>>> + struct drm_sched_init_params sched_params;
>>>>> struct panthor_queue *queue;
>>>>> int ret;
>>>>>
>>>>> @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group *group,
>>>>> if (!queue)
>>>>> return ERR_PTR(-ENOMEM);
>>>>>
>>>>> + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
>>>>> +
>>>>> queue->fence_ctx.id = dma_fence_context_alloc(1);
>>>>> spin_lock_init(&queue->fence_ctx.lock);
>>>>> INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
>>>>> @@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group *group,
>>>>> if (ret)
>>>>> goto err_free_queue;
>>>>>
>>>>> + sched_params.ops = &panthor_queue_sched_ops;
>>>>> + sched_params.submit_wq = group->ptdev->scheduler->wq;
>>>>> + sched_params.num_rqs = 1;
>>>>> /*
>>>>> - * Credit limit argument tells us the total number of instructions
>>>>> + * The credit limit argument tells us the total number of instructions
>>>>> * across all CS slots in the ringbuffer, with some jobs requiring
>>>>> * twice as many as others, depending on their profiling status.
>>>>> */
>>>>> - ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
>>>>> - group->ptdev->scheduler->wq, 1,
>>>>> - args->ringbuf_size / sizeof(u64),
>>>>> - 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
>>>>> - group->ptdev->reset.wq,
>>>>> - NULL, "panthor-queue", group->ptdev->base.dev);
>>>>> + sched_params.credit_limit = args->ringbuf_size / sizeof(u64);
>>>>> + sched_params.hang_limit = 0;
>>>>> + sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
>>>>> + sched_params.timeout_wq = group->ptdev->reset.wq;
>>>>> + sched_params.score = NULL;
>>>>> + sched_params.name = "panthor-queue";
>>>>> + sched_params.dev = group->ptdev->base.dev;
>>>>> +
>>>>> + ret = drm_sched_init(&queue->scheduler, &sched_params);
>>>>> if (ret)
>>>>> goto err_free_queue;
>>>>>
>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>>>> index 57da84908752..27db748a5269 100644
>>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>>> @@ -1240,40 +1240,25 @@ static void drm_sched_run_job_work(struct work_struct *w)
>>>>> * drm_sched_init - Init a gpu scheduler instance
>>>>> *
>>>>> * @sched: scheduler instance
>>>>> - * @ops: backend operations for this scheduler
>>>>> - * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
>>>>> - * allocated and used
>>>>> - * @num_rqs: number of runqueues, one for each priority, up to DRM_SCHED_PRIORITY_COUNT
>>>>> - * @credit_limit: the number of credits this scheduler can hold from all jobs
>>>>> - * @hang_limit: number of times to allow a job to hang before dropping it
>>>>> - * @timeout: timeout value in jiffies for the scheduler
>>>>> - * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
>>>>> - * used
>>>>> - * @score: optional score atomic shared with other schedulers
>>>>> - * @name: name used for debugging
>>>>> - * @dev: target &struct device
>>>>> + * @params: scheduler initialization parameters
>>>>> *
>>>>> * Return 0 on success, otherwise error code.
>>>>> */
>>>>> int drm_sched_init(struct drm_gpu_scheduler *sched,
>>>>> - const struct drm_sched_backend_ops *ops,
>>>>> - struct workqueue_struct *submit_wq,
>>>>> - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
>>>>> - long timeout, struct workqueue_struct *timeout_wq,
>>>>> - atomic_t *score, const char *name, struct device *dev)
>>>>> + const struct drm_sched_init_params *params)
>>>>> {
>>>>> int i;
>>>>>
>>>>> - sched->ops = ops;
>>>>> - sched->credit_limit = credit_limit;
>>>>> - sched->name = name;
>>>>> - sched->timeout = timeout;
>>>>> - sched->timeout_wq = timeout_wq ? : system_wq;
>>>>> - sched->hang_limit = hang_limit;
>>>>> - sched->score = score ? score : &sched->_score;
>>>>> - sched->dev = dev;
>>>>> + sched->ops = params->ops;
>>>>> + sched->credit_limit = params->credit_limit;
>>>>> + sched->name = params->name;
>>>>> + sched->timeout = params->timeout;
>>>>> + sched->timeout_wq = params->timeout_wq ? : system_wq;
>>>>> + sched->hang_limit = params->hang_limit;
>>>>> + sched->score = params->score ? params->score : &sched->_score;
>>>>> + sched->dev = params->dev;
>>>>>
>>>>> - if (num_rqs > DRM_SCHED_PRIORITY_COUNT) {
>>>>> + if (params->num_rqs > DRM_SCHED_PRIORITY_COUNT) {
>>>>> /* This is a gross violation--tell drivers what the problem is.
>>>>> */
>>>>> drm_err(sched, "%s: num_rqs cannot be greater than DRM_SCHED_PRIORITY_COUNT\n",
>>>>> @@ -1288,16 +1273,16 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
>>>>> return 0;
>>>>> }
>>>>>
>>>>> - if (submit_wq) {
>>>>> - sched->submit_wq = submit_wq;
>>>>> + if (params->submit_wq) {
>>>>> + sched->submit_wq = params->submit_wq;
>>>>> sched->own_submit_wq = false;
>>>>> } else {
>>>>> #ifdef CONFIG_LOCKDEP
>>>>> - sched->submit_wq = alloc_ordered_workqueue_lockdep_map(name,
>>>>> - WQ_MEM_RECLAIM,
>>>>> - &drm_sched_lockdep_map);
>>>>> + sched->submit_wq = alloc_ordered_workqueue_lockdep_map(
>>>>> + params->name, WQ_MEM_RECLAIM,
>>>>> + &drm_sched_lockdep_map);
>>>>> #else
>>>>> - sched->submit_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
>>>>> + sched->submit_wq = alloc_ordered_workqueue(params->name, WQ_MEM_RECLAIM);
>>>>> #endif
>>>>> if (!sched->submit_wq)
>>>>> return -ENOMEM;
>>>>> @@ -1305,11 +1290,11 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
>>>>> sched->own_submit_wq = true;
>>>>> }
>>>>>
>>>>> - sched->sched_rq = kmalloc_array(num_rqs, sizeof(*sched->sched_rq),
>>>>> + sched->sched_rq = kmalloc_array(params->num_rqs, sizeof(*sched->sched_rq),
>>>>> GFP_KERNEL | __GFP_ZERO);
>>>>> if (!sched->sched_rq)
>>>>> goto Out_check_own;
>>>>> - sched->num_rqs = num_rqs;
>>>>> + sched->num_rqs = params->num_rqs;
>>>>> for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) {
>>>>> sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL);
>>>>> if (!sched->sched_rq[i])
>>>>> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
>>>>> index 99ac4995b5a1..716e6d074d87 100644
>>>>> --- a/drivers/gpu/drm/v3d/v3d_sched.c
>>>>> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
>>>>> @@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops v3d_cpu_sched_ops = {
>>>>> .free_job = v3d_cpu_job_free
>>>>> };
>>>>>
>>>>> +/*
>>>>> + * v3d's scheduler instances are all identical, except for ops and name.
>>>>> + */
>>>>> +static void
>>>>> +v3d_common_sched_init(struct drm_sched_init_params *params, struct device *dev)
>>>>> +{
>>>>> + memset(params, 0, sizeof(struct drm_sched_init_params));
>>>>> +
>>>>> + params->submit_wq = NULL; /* Use the system_wq. */
>>>>> + params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>>>> + params->credit_limit = 1;
>>>>> + params->hang_limit = 0;
>>>>> + params->timeout = msecs_to_jiffies(500);
>>>>> + params->timeout_wq = NULL; /* Use the system_wq. */
>>>>> + params->score = NULL;
>>>>> + params->dev = dev;
>>>>> +}
>>>>> +
>>>>> +static int
>>>>> +v3d_bin_sched_init(struct v3d_dev *v3d)
>>>>> +{
>>>>> + struct drm_sched_init_params params;
>>>>> +
>>>>> + v3d_common_sched_init(&params, v3d->drm.dev);
>>>>> + params.ops = &v3d_bin_sched_ops;
>>>>> + params.name = "v3d_bin";
>>>>> +
>>>>> + return drm_sched_init(&v3d->queue[V3D_BIN].sched, &params);
>>>>> +}
>>>>> +
>>>>> +static int
>>>>> +v3d_render_sched_init(struct v3d_dev *v3d)
>>>>> +{
>>>>> + struct drm_sched_init_params params;
>>>>> +
>>>>> + v3d_common_sched_init(&params, v3d->drm.dev);
>>>>> + params.ops = &v3d_render_sched_ops;
>>>>> + params.name = "v3d_render";
>>>>> +
>>>>> + return drm_sched_init(&v3d->queue[V3D_RENDER].sched, &params);
>>>>> +}
>>>>> +
>>>>> +static int
>>>>> +v3d_tfu_sched_init(struct v3d_dev *v3d)
>>>>> +{
>>>>> + struct drm_sched_init_params params;
>>>>> +
>>>>> + v3d_common_sched_init(&params, v3d->drm.dev);
>>>>> + params.ops = &v3d_tfu_sched_ops;
>>>>> + params.name = "v3d_tfu";
>>>>> +
>>>>> + return drm_sched_init(&v3d->queue[V3D_TFU].sched, &params);
>>>>> +}
>>>>> +
>>>>> +static int
>>>>> +v3d_csd_sched_init(struct v3d_dev *v3d)
>>>>> +{
>>>>> + struct drm_sched_init_params params;
>>>>> +
>>>>> + v3d_common_sched_init(&params, v3d->drm.dev);
>>>>> + params.ops = &v3d_csd_sched_ops;
>>>>> + params.name = "v3d_csd";
>>>>> +
>>>>> + return drm_sched_init(&v3d->queue[V3D_CSD].sched, &params);
>>>>> +}
>>>>> +
>>>>> +static int
>>>>> +v3d_cache_sched_init(struct v3d_dev *v3d)
>>>>> +{
>>>>> + struct drm_sched_init_params params;
>>>>> +
>>>>> + v3d_common_sched_init(&params, v3d->drm.dev);
>>>>> + params.ops = &v3d_cache_clean_sched_ops;
>>>>> + params.name = "v3d_cache_clean";
>>>>> +
>>>>> + return drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched, &params);
>>>>> +}
>>>>> +
>>>>> +static int
>>>>> +v3d_cpu_sched_init(struct v3d_dev *v3d)
>>>>> +{
>>>>> + struct drm_sched_init_params params;
>>>>> +
>>>>> + v3d_common_sched_init(&params, v3d->drm.dev);
>>>>> + params.ops = &v3d_cpu_sched_ops;
>>>>> + params.name = "v3d_cpu";
>>>>> +
>>>>> + return drm_sched_init(&v3d->queue[V3D_CPU].sched, &params);
>>>>> +}
>>>>> +
>>>>> int
>>>>> v3d_sched_init(struct v3d_dev *v3d)
>>>>> {
>>>>> - int hw_jobs_limit = 1;
>>>>> - int job_hang_limit = 0;
>>>>> - int hang_limit_ms = 500;
>>>>> int ret;
>>>>>
>>>>> - ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
>>>>> - &v3d_bin_sched_ops, NULL,
>>>>> - DRM_SCHED_PRIORITY_COUNT,
>>>>> - hw_jobs_limit, job_hang_limit,
>>>>> - msecs_to_jiffies(hang_limit_ms), NULL,
>>>>> - NULL, "v3d_bin", v3d->drm.dev);
>>>>> + ret = v3d_bin_sched_init(v3d);
>>>>> if (ret)
>>>>> return ret;
>>>>>
>>>>> - ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
>>>>> - &v3d_render_sched_ops, NULL,
>>>>> - DRM_SCHED_PRIORITY_COUNT,
>>>>> - hw_jobs_limit, job_hang_limit,
>>>>> - msecs_to_jiffies(hang_limit_ms), NULL,
>>>>> - NULL, "v3d_render", v3d->drm.dev);
>>>>> + ret = v3d_render_sched_init(v3d);
>>>>> if (ret)
>>>>> goto fail;
>>>>>
>>>>> - ret = drm_sched_init(&v3d->queue[V3D_TFU].sched,
>>>>> - &v3d_tfu_sched_ops, NULL,
>>>>> - DRM_SCHED_PRIORITY_COUNT,
>>>>> - hw_jobs_limit, job_hang_limit,
>>>>> - msecs_to_jiffies(hang_limit_ms), NULL,
>>>>> - NULL, "v3d_tfu", v3d->drm.dev);
>>>>> + ret = v3d_tfu_sched_init(v3d);
>>>>> if (ret)
>>>>> goto fail;
>>>>>
>>>>> if (v3d_has_csd(v3d)) {
>>>>> - ret = drm_sched_init(&v3d->queue[V3D_CSD].sched,
>>>>> - &v3d_csd_sched_ops, NULL,
>>>>> - DRM_SCHED_PRIORITY_COUNT,
>>>>> - hw_jobs_limit, job_hang_limit,
>>>>> - msecs_to_jiffies(hang_limit_ms), NULL,
>>>>> - NULL, "v3d_csd", v3d->drm.dev);
>>>>> + ret = v3d_csd_sched_init(v3d);
>>>>> if (ret)
>>>>> goto fail;
>>>>>
>>>>> - ret = drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
>>>>> - &v3d_cache_clean_sched_ops, NULL,
>>>>> - DRM_SCHED_PRIORITY_COUNT,
>>>>> - hw_jobs_limit, job_hang_limit,
>>>>> - msecs_to_jiffies(hang_limit_ms), NULL,
>>>>> - NULL, "v3d_cache_clean", v3d->drm.dev);
>>>>> + ret = v3d_cache_sched_init(v3d);
>>>>> if (ret)
>>>>> goto fail;
>>>>> }
>>>>>
>>>>> - ret = drm_sched_init(&v3d->queue[V3D_CPU].sched,
>>>>> - &v3d_cpu_sched_ops, NULL,
>>>>> - DRM_SCHED_PRIORITY_COUNT,
>>>>> - 1, job_hang_limit,
>>>>> - msecs_to_jiffies(hang_limit_ms), NULL,
>>>>> - NULL, "v3d_cpu", v3d->drm.dev);
>>>>> + ret = v3d_cpu_sched_init(v3d);
>>>>> if (ret)
>>>>> goto fail;
>>>>>
>>>>> diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
>>>>> index a8c416a48812..7f29b7f04af4 100644
>>>>> --- a/drivers/gpu/drm/xe/xe_execlist.c
>>>>> +++ b/drivers/gpu/drm/xe/xe_execlist.c
>>>>> @@ -332,10 +332,13 @@ static const struct drm_sched_backend_ops drm_sched_ops = {
>>>>> static int execlist_exec_queue_init(struct xe_exec_queue *q)
>>>>> {
>>>>> struct drm_gpu_scheduler *sched;
>>>>> + struct drm_sched_init_params params;
>>>>> struct xe_execlist_exec_queue *exl;
>>>>> struct xe_device *xe = gt_to_xe(q->gt);
>>>>> int err;
>>>>>
>>>>> + memset(&params, 0, sizeof(struct drm_sched_init_params));
>>>>> +
>>>>> xe_assert(xe, !xe_device_uc_enabled(xe));
>>>>>
>>>>> drm_info(&xe->drm, "Enabling execlist submission (GuC submission disabled)\n");
>>>>> @@ -346,11 +349,18 @@ static int execlist_exec_queue_init(struct xe_exec_queue *q)
>>>>>
>>>>> exl->q = q;
>>>>>
>>>>> - err = drm_sched_init(&exl->sched, &drm_sched_ops, NULL, 1,
>>>>> - q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES,
>>>>> - XE_SCHED_HANG_LIMIT, XE_SCHED_JOB_TIMEOUT,
>>>>> - NULL, NULL, q->hwe->name,
>>>>> - gt_to_xe(q->gt)->drm.dev);
>>>>> + params.ops = &drm_sched_ops;
>>>>> + params.submit_wq = NULL; /* Use the system_wq. */
>>>>> + params.num_rqs = 1;
>>>>> + params.credit_limit = q->lrc[0]->ring.size / MAX_JOB_SIZE_BYTES;
>>>>> + params.hang_limit = XE_SCHED_HANG_LIMIT;
>>>>> + params.timeout = XE_SCHED_JOB_TIMEOUT;
>>>>> + params.timeout_wq = NULL; /* Use the system_wq. */
>>>>> + params.score = NULL;
>>>>> + params.name = q->hwe->name;
>>>>> + params.dev = gt_to_xe(q->gt)->drm.dev;
>>>>> +
>>>>> + err = drm_sched_init(&exl->sched, &params);
>>>>> if (err)
>>>>> goto err_free;
>>>>> diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.c b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
>>>>> index 50361b4638f9..2129fee83f25 100644
>>>>> --- a/drivers/gpu/drm/xe/xe_gpu_scheduler.c
>>>>> +++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.c
>>>>> @@ -63,13 +63,26 @@ int xe_sched_init(struct xe_gpu_scheduler *sched,
>>>>> atomic_t *score, const char *name,
>>>>> struct device *dev)
>>>>> {
>>>>> + struct drm_sched_init_params params;
>>>>> +
>>>>> sched->ops = xe_ops;
>>>>> INIT_LIST_HEAD(&sched->msgs);
>>>>> INIT_WORK(&sched->work_process_msg, xe_sched_process_msg_work);
>>>>>
>>>>> - return drm_sched_init(&sched->base, ops, submit_wq, 1, hw_submission,
>>>>> - hang_limit, timeout, timeout_wq, score, name,
>>>>> - dev);
>>>>> + memset(&params, 0, sizeof(struct drm_sched_init_params));
>>>>> +
>>>>> + params.ops = ops;
>>>>> + params.submit_wq = submit_wq;
>>>>> + params.num_rqs = 1;
>>>>> + params.credit_limit = hw_submission;
>>>>> + params.hang_limit = hang_limit;
>>>>> + params.timeout = timeout;
>>>>> + params.timeout_wq = timeout_wq;
>>>>> + params.score = score;
>>>>> + params.name = name;
>>>>> + params.dev = dev;
>>>>> +
>>>>> + return drm_sched_init(&sched->base, &params);
>>>>> }
>>>>>
>>>>> void xe_sched_fini(struct xe_gpu_scheduler *sched)
>>>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>>>> index 95e17504e46a..1a834ef43862 100644
>>>>> --- a/include/drm/gpu_scheduler.h
>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>> @@ -553,12 +553,37 @@ struct drm_gpu_scheduler {
>>>>> struct device *dev;
>>>>> };
>>>>>
>>>>> +/**
>>>>> + * struct drm_sched_init_params - parameters for initializing a DRM GPU scheduler
>>>>> + *
>>>>> + * @ops: backend operations provided by the driver
>>>>> + * @submit_wq: workqueue to use for submission. If NULL, an ordered wq is
>>>>> + * allocated and used
>>>>> + * @num_rqs: Number of run-queues. This is at most DRM_SCHED_PRIORITY_COUNT,
>>>>> + * as there's usually one run-queue per priority, but could be less.
>>>>> + * @credit_limit: the number of credits this scheduler can hold from all jobs
>>>>> + * @hang_limit: number of times to allow a job to hang before dropping it
>>>>> + * @timeout: timeout value in jiffies for the scheduler
>>>>> + * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq is
>>>>> + * used
>>>>> + * @score: optional score atomic shared with other schedulers
>>>>> + * @name: name used for debugging
>>>>> + * @dev: associated device. Used for debugging
>>>>> + */
>>>>> +struct drm_sched_init_params {
>>>>> + const struct drm_sched_backend_ops *ops;
>>>>> + struct workqueue_struct *submit_wq;
>>>>> + struct workqueue_struct *timeout_wq;
>>>>> + u32 num_rqs, credit_limit;
>>>>> + unsigned int hang_limit;
>>>>> + long timeout;
>>>>> + atomic_t *score;
>>>>> + const char *name;
>>>>> + struct device *dev;
>>>>> +};
>>>>> +
>>>>> int drm_sched_init(struct drm_gpu_scheduler *sched,
>>>>> - const struct drm_sched_backend_ops *ops,
>>>>> - struct workqueue_struct *submit_wq,
>>>>> - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
>>>>> - long timeout, struct workqueue_struct *timeout_wq,
>>>>> - atomic_t *score, const char *name, struct device *dev);
>>>>> + const struct drm_sched_init_params *params);
>>>>>
>>>>> void drm_sched_fini(struct drm_gpu_scheduler *sched);
>>>>> int drm_sched_job_init(struct drm_sched_job *job,
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
2025-01-22 14:30 ` Danilo Krummrich
2025-01-22 14:34 ` Christian König
@ 2025-01-22 15:51 ` Boris Brezillon
2025-01-22 16:14 ` Tvrtko Ursulin
2025-01-22 17:16 ` Boris Brezillon
` (9 subsequent siblings)
12 siblings, 1 reply; 35+ messages in thread
From: Boris Brezillon @ 2025-01-22 15:51 UTC (permalink / raw)
To: Philipp Stanner
Cc: Alex Deucher, Christian König, Xinhui Pan, David Airlie,
Simona Vetter, Lucas Stach, Russell King, Christian Gmeiner,
Frank Binns, Matt Coster, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, Karol Herbst,
Lyude Paul, Danilo Krummrich, Rob Herring, Steven Price,
Liviu Dudau, Luben Tuikov, Matthew Brost, Philipp Stanner,
Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li, amd-gfx, dri-devel,
linux-kernel, etnaviv, lima, linux-arm-msm, freedreno, nouveau,
intel-xe
On Wed, 22 Jan 2025 15:08:20 +0100
Philipp Stanner <phasta@kernel.org> wrote:
> --- a/drivers/gpu/drm/panthor/panthor_sched.c
> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group *group,
> const struct drm_panthor_queue_create *args)
> {
> struct drm_gpu_scheduler *drm_sched;
> + struct drm_sched_init_params sched_params;
nit: Could we use a struct initializer instead of a
memset(0)+field-assignment?
struct drm_sched_init_params sched_params = {
.ops = &panthor_queue_sched_ops,
.submit_wq = group->ptdev->scheduler->wq,
.num_rqs = 1,
.credit_limit = args->ringbuf_size / sizeof(u64),
.hang_limit = 0,
.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS),
.timeout_wq = group->ptdev->reset.wq,
.name = "panthor-queue",
.dev = group->ptdev->base.dev,
};
The same comment applies to the panfrost changes BTW.
> struct panthor_queue *queue;
> int ret;
>
> @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group *group,
> if (!queue)
> return ERR_PTR(-ENOMEM);
>
> + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> +
> queue->fence_ctx.id = dma_fence_context_alloc(1);
> spin_lock_init(&queue->fence_ctx.lock);
> INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
> @@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group *group,
> if (ret)
> goto err_free_queue;
>
> + sched_params.ops = &panthor_queue_sched_ops;
> + sched_params.submit_wq = group->ptdev->scheduler->wq;
> + sched_params.num_rqs = 1;
> /*
> - * Credit limit argument tells us the total number of instructions
> + * The credit limit argument tells us the total number of instructions
> * across all CS slots in the ringbuffer, with some jobs requiring
> * twice as many as others, depending on their profiling status.
> */
> - ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
> - group->ptdev->scheduler->wq, 1,
> - args->ringbuf_size / sizeof(u64),
> - 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
> - group->ptdev->reset.wq,
> - NULL, "panthor-queue", group->ptdev->base.dev);
> + sched_params.credit_limit = args->ringbuf_size / sizeof(u64);
> + sched_params.hang_limit = 0;
> + sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> + sched_params.timeout_wq = group->ptdev->reset.wq;
> + sched_params.score = NULL;
> + sched_params.name = "panthor-queue";
> + sched_params.dev = group->ptdev->base.dev;
> +
> + ret = drm_sched_init(&queue->scheduler, &sched_params);
> if (ret)
> goto err_free_queue;
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 15:51 ` Boris Brezillon
@ 2025-01-22 16:14 ` Tvrtko Ursulin
2025-01-22 17:04 ` Boris Brezillon
0 siblings, 1 reply; 35+ messages in thread
From: Tvrtko Ursulin @ 2025-01-22 16:14 UTC (permalink / raw)
To: Boris Brezillon, Philipp Stanner
Cc: Alex Deucher, Christian König, Xinhui Pan, David Airlie,
Simona Vetter, Lucas Stach, Russell King, Christian Gmeiner,
Frank Binns, Matt Coster, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, Karol Herbst,
Lyude Paul, Danilo Krummrich, Rob Herring, Steven Price,
Liviu Dudau, Luben Tuikov, Matthew Brost, Philipp Stanner,
Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li, amd-gfx, dri-devel,
linux-kernel, etnaviv, lima, linux-arm-msm, freedreno, nouveau,
intel-xe
On 22/01/2025 15:51, Boris Brezillon wrote:
> On Wed, 22 Jan 2025 15:08:20 +0100
> Philipp Stanner <phasta@kernel.org> wrote:
>
>> --- a/drivers/gpu/drm/panthor/panthor_sched.c
>> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
>> @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group *group,
>> const struct drm_panthor_queue_create *args)
>> {
>> struct drm_gpu_scheduler *drm_sched;
>> + struct drm_sched_init_params sched_params;
>
> nit: Could we use a struct initializer instead of a
> memset(0)+field-assignment?
>
> struct drm_sched_init_params sched_params = {
> .ops = &panthor_queue_sched_ops,
> .submit_wq = group->ptdev->scheduler->wq,
> .num_rqs = 1,
> .credit_limit = args->ringbuf_size / sizeof(u64),
> .hang_limit = 0,
> .timeout = msecs_to_jiffies(JOB_TIMEOUT_MS),
> .timeout_wq = group->ptdev->reset.wq,
> .name = "panthor-queue",
> .dev = group->ptdev->base.dev,
> };
+1 on this as a general approach for the whole series. And I'd drop the
explicit zeros and NULLs. Memsets could then go too.
Regards,
Tvrtko
>
> The same comment applies the panfrost changes BTW.
>
>> struct panthor_queue *queue;
>> int ret;
>>
>> @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group *group,
>> if (!queue)
>> return ERR_PTR(-ENOMEM);
>>
>> + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
>> +
>> queue->fence_ctx.id = dma_fence_context_alloc(1);
>> spin_lock_init(&queue->fence_ctx.lock);
>> INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
>> @@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group *group,
>> if (ret)
>> goto err_free_queue;
>>
>> + sched_params.ops = &panthor_queue_sched_ops;
>> + sched_params.submit_wq = group->ptdev->scheduler->wq;
>> + sched_params.num_rqs = 1;
>> /*
>> - * Credit limit argument tells us the total number of instructions
>> + * The credit limit argument tells us the total number of instructions
>> * across all CS slots in the ringbuffer, with some jobs requiring
>> * twice as many as others, depending on their profiling status.
>> */
>> - ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
>> - group->ptdev->scheduler->wq, 1,
>> - args->ringbuf_size / sizeof(u64),
>> - 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
>> - group->ptdev->reset.wq,
>> - NULL, "panthor-queue", group->ptdev->base.dev);
>> + sched_params.credit_limit = args->ringbuf_size / sizeof(u64);
>> + sched_params.hang_limit = 0;
>> + sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
>> + sched_params.timeout_wq = group->ptdev->reset.wq;
>> + sched_params.score = NULL;
>> + sched_params.name = "panthor-queue";
>> + sched_params.dev = group->ptdev->base.dev;
>> +
>> + ret = drm_sched_init(&queue->scheduler, &sched_params);
>> if (ret)
>> goto err_free_queue;
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 16:14 ` Tvrtko Ursulin
@ 2025-01-22 17:04 ` Boris Brezillon
2025-01-23 4:37 ` Matthew Brost
0 siblings, 1 reply; 35+ messages in thread
From: Boris Brezillon @ 2025-01-22 17:04 UTC (permalink / raw)
To: Tvrtko Ursulin
Cc: Philipp Stanner, Alex Deucher, Christian König, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Danilo Krummrich, Rob Herring,
Steven Price, Liviu Dudau, Luben Tuikov, Matthew Brost,
Philipp Stanner, Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li, amd-gfx, dri-devel,
linux-kernel, etnaviv, lima, linux-arm-msm, freedreno, nouveau,
intel-xe
On Wed, 22 Jan 2025 16:14:59 +0000
Tvrtko Ursulin <tursulin@ursulin.net> wrote:
> On 22/01/2025 15:51, Boris Brezillon wrote:
> > On Wed, 22 Jan 2025 15:08:20 +0100
> > Philipp Stanner <phasta@kernel.org> wrote:
> >
> >> --- a/drivers/gpu/drm/panthor/panthor_sched.c
> >> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> >> @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group *group,
> >> const struct drm_panthor_queue_create *args)
> >> {
> >> struct drm_gpu_scheduler *drm_sched;
> >> + struct drm_sched_init_params sched_params;
> >
> > nit: Could we use a struct initializer instead of a
> > memset(0)+field-assignment?
> >
> > struct drm_sched_init_params sched_params = {
Actually, you can even make it const if it's not modified after the
declaration.
> > .ops = &panthor_queue_sched_ops,
> > .submit_wq = group->ptdev->scheduler->wq,
> > .num_rqs = 1,
> > .credit_limit = args->ringbuf_size / sizeof(u64),
> > .hang_limit = 0,
> > .timeout = msecs_to_jiffies(JOB_TIMEOUT_MS),
> > .timeout_wq = group->ptdev->reset.wq,
> > .name = "panthor-queue",
> > .dev = group->ptdev->base.dev,
> > };
>
> +1 on this as a general approach for the whole series. And I'd drop the
> explicit zeros and NULLs. Memsets could then go too.
>
> Regards,
>
> Tvrtko
>
> >
> > The same comment applies the panfrost changes BTW.
> >
> >> struct panthor_queue *queue;
> >> int ret;
> >>
> >> @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group *group,
> >> if (!queue)
> >> return ERR_PTR(-ENOMEM);
> >>
> >> + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> >> +
> >> queue->fence_ctx.id = dma_fence_context_alloc(1);
> >> spin_lock_init(&queue->fence_ctx.lock);
> >> INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
> >> @@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group *group,
> >> if (ret)
> >> goto err_free_queue;
> >>
> >> + sched_params.ops = &panthor_queue_sched_ops;
> >> + sched_params.submit_wq = group->ptdev->scheduler->wq;
> >> + sched_params.num_rqs = 1;
> >> /*
> >> - * Credit limit argument tells us the total number of instructions
> >> + * The credit limit argument tells us the total number of instructions
> >> * across all CS slots in the ringbuffer, with some jobs requiring
> >> * twice as many as others, depending on their profiling status.
> >> */
> >> - ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
> >> - group->ptdev->scheduler->wq, 1,
> >> - args->ringbuf_size / sizeof(u64),
> >> - 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
> >> - group->ptdev->reset.wq,
> >> - NULL, "panthor-queue", group->ptdev->base.dev);
> >> + sched_params.credit_limit = args->ringbuf_size / sizeof(u64);
> >> + sched_params.hang_limit = 0;
> >> + sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> >> + sched_params.timeout_wq = group->ptdev->reset.wq;
> >> + sched_params.score = NULL;
> >> + sched_params.name = "panthor-queue";
> >> + sched_params.dev = group->ptdev->base.dev;
> >> +
> >> + ret = drm_sched_init(&queue->scheduler, &sched_params);
> >> if (ret)
> >> goto err_free_queue;
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
` (2 preceding siblings ...)
2025-01-22 15:51 ` Boris Brezillon
@ 2025-01-22 17:16 ` Boris Brezillon
2025-01-23 7:33 ` Philipp Stanner
2025-01-22 22:07 ` Maíra Canal
` (8 subsequent siblings)
12 siblings, 1 reply; 35+ messages in thread
From: Boris Brezillon @ 2025-01-22 17:16 UTC (permalink / raw)
To: Philipp Stanner
Cc: Alex Deucher, Christian König, Xinhui Pan, David Airlie,
Simona Vetter, Lucas Stach, Russell King, Christian Gmeiner,
Frank Binns, Matt Coster, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, Karol Herbst,
Lyude Paul, Danilo Krummrich, Rob Herring, Steven Price,
Liviu Dudau, Luben Tuikov, Matthew Brost, Philipp Stanner,
Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li, amd-gfx, dri-devel,
linux-kernel, etnaviv, lima, linux-arm-msm, freedreno, nouveau,
intel-xe
On Wed, 22 Jan 2025 15:08:20 +0100
Philipp Stanner <phasta@kernel.org> wrote:
> int drm_sched_init(struct drm_gpu_scheduler *sched,
> - const struct drm_sched_backend_ops *ops,
> - struct workqueue_struct *submit_wq,
> - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> - long timeout, struct workqueue_struct *timeout_wq,
> - atomic_t *score, const char *name, struct device *dev);
> + const struct drm_sched_init_params *params);
Another nit: indenting is messed up here.
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
` (3 preceding siblings ...)
2025-01-22 17:16 ` Boris Brezillon
@ 2025-01-22 22:07 ` Maíra Canal
2025-01-23 8:10 ` Philipp Stanner
2025-01-23 10:55 ` ✓ CI.Patch_applied: success for " Patchwork
` (7 subsequent siblings)
12 siblings, 1 reply; 35+ messages in thread
From: Maíra Canal @ 2025-01-22 22:07 UTC (permalink / raw)
To: Philipp Stanner, Alex Deucher, Christian König, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Danilo Krummrich, Boris Brezillon,
Rob Herring, Steven Price, Liviu Dudau, Luben Tuikov,
Matthew Brost, Philipp Stanner, Melissa Wen, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li
Cc: amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
Hi Philipp,
On 22/01/25 11:08, Philipp Stanner wrote:
> drm_sched_init() has a great many parameters and upcoming new
> functionality for the scheduler might add even more. Generally, the
> great number of parameters reduces readability and has already caused
> one misnaming in:
>
> commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in nouveau_sched_init()").
>
> Introduce a new struct for the scheduler init parameters and port all
> users.
>
> Signed-off-by: Philipp Stanner <phasta@kernel.org>
> ---
> Howdy,
>
> I have a patch-series in the pipe that will add a `flags` argument to
> drm_sched_init(). I thought it would be wise to first rework the API as
> detailed in this patch. It's really a lot of parameters by now, and I
> would expect that it might get more and more over the years for special
> use cases etc.
>
> Regards,
> P.
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +++-
> drivers/gpu/drm/etnaviv/etnaviv_sched.c | 20 ++-
> drivers/gpu/drm/imagination/pvr_queue.c | 21 +++-
> drivers/gpu/drm/lima/lima_sched.c | 21 +++-
> drivers/gpu/drm/msm/msm_ringbuffer.c | 22 ++--
> drivers/gpu/drm/nouveau/nouveau_sched.c | 20 ++-
> drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++--
> drivers/gpu/drm/panthor/panthor_mmu.c | 18 ++-
> drivers/gpu/drm/panthor/panthor_sched.c | 23 ++--
> drivers/gpu/drm/scheduler/sched_main.c | 53 +++-----
> drivers/gpu/drm/v3d/v3d_sched.c | 135 +++++++++++++++------
> drivers/gpu/drm/xe/xe_execlist.c | 20 ++-
> drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++-
> include/drm/gpu_scheduler.h | 35 +++++-
> 14 files changed, 311 insertions(+), 139 deletions(-)
>
[...]
> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
> index 99ac4995b5a1..716e6d074d87 100644
> --- a/drivers/gpu/drm/v3d/v3d_sched.c
> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> @@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops v3d_cpu_sched_ops = {
> .free_job = v3d_cpu_job_free
> };
>
> +/*
> + * v3d's scheduler instances are all identical, except for ops and name.
> + */
> +static void
> +v3d_common_sched_init(struct drm_sched_init_params *params, struct device *dev)
> +{
> + memset(params, 0, sizeof(struct drm_sched_init_params));
> +
> + params->submit_wq = NULL; /* Use the system_wq. */
> + params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
> + params->credit_limit = 1;
> + params->hang_limit = 0;
> + params->timeout = msecs_to_jiffies(500);
> + params->timeout_wq = NULL; /* Use the system_wq. */
> + params->score = NULL;
> + params->dev = dev;
> +}
Could we use only one function that takes struct v3d_dev *v3d, enum
v3d_queue, and sched_ops as arguments (instead of one function per
queue)? You can get the name of the scheduler by concatenating "v3d_" to
the return of v3d_queue_to_string().
I believe it would make the code much simpler.
Best Regards,
- Maíra
> +
> +static int
> +v3d_bin_sched_init(struct v3d_dev *v3d)
> +{
> + struct drm_sched_init_params params;
> +
> + v3d_common_sched_init(&params, v3d->drm.dev);
> + params.ops = &v3d_bin_sched_ops;
> + params.name = "v3d_bin";
> +
> + return drm_sched_init(&v3d->queue[V3D_BIN].sched, &params);
> +}
> +
> +static int
> +v3d_render_sched_init(struct v3d_dev *v3d)
> +{
> + struct drm_sched_init_params params;
> +
> + v3d_common_sched_init(&params, v3d->drm.dev);
> + params.ops = &v3d_render_sched_ops;
> + params.name = "v3d_render";
> +
> + return drm_sched_init(&v3d->queue[V3D_RENDER].sched, &params);
> +}
> +
> +static int
> +v3d_tfu_sched_init(struct v3d_dev *v3d)
> +{
> + struct drm_sched_init_params params;
> +
> + v3d_common_sched_init(&params, v3d->drm.dev);
> + params.ops = &v3d_tfu_sched_ops;
> + params.name = "v3d_tfu";
> +
> + return drm_sched_init(&v3d->queue[V3D_TFU].sched, &params);
> +}
> +
> +static int
> +v3d_csd_sched_init(struct v3d_dev *v3d)
> +{
> + struct drm_sched_init_params params;
> +
> + v3d_common_sched_init(&params, v3d->drm.dev);
> + params.ops = &v3d_csd_sched_ops;
> + params.name = "v3d_csd";
> +
> + return drm_sched_init(&v3d->queue[V3D_CSD].sched, &params);
> +}
> +
> +static int
> +v3d_cache_sched_init(struct v3d_dev *v3d)
> +{
> + struct drm_sched_init_params params;
> +
> + v3d_common_sched_init(&params, v3d->drm.dev);
> + params.ops = &v3d_cache_clean_sched_ops;
> + params.name = "v3d_cache_clean";
> +
> + return drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched, &params);
> +}
> +
> +static int
> +v3d_cpu_sched_init(struct v3d_dev *v3d)
> +{
> + struct drm_sched_init_params params;
> +
> + v3d_common_sched_init(&params, v3d->drm.dev);
> + params.ops = &v3d_cpu_sched_ops;
> + params.name = "v3d_cpu";
> +
> + return drm_sched_init(&v3d->queue[V3D_CPU].sched, &params);
> +}
> +
> int
> v3d_sched_init(struct v3d_dev *v3d)
> {
> - int hw_jobs_limit = 1;
> - int job_hang_limit = 0;
> - int hang_limit_ms = 500;
> int ret;
>
> - ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
> - &v3d_bin_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - hw_jobs_limit, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_bin", v3d->drm.dev);
> + ret = v3d_bin_sched_init(v3d);
> if (ret)
> return ret;
>
> - ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
> - &v3d_render_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - hw_jobs_limit, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_render", v3d->drm.dev);
> + ret = v3d_render_sched_init(v3d);
> if (ret)
> goto fail;
>
> - ret = drm_sched_init(&v3d->queue[V3D_TFU].sched,
> - &v3d_tfu_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - hw_jobs_limit, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_tfu", v3d->drm.dev);
> + ret = v3d_tfu_sched_init(v3d);
> if (ret)
> goto fail;
>
> if (v3d_has_csd(v3d)) {
> - ret = drm_sched_init(&v3d->queue[V3D_CSD].sched,
> - &v3d_csd_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - hw_jobs_limit, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_csd", v3d->drm.dev);
> + ret = v3d_csd_sched_init(v3d);
> if (ret)
> goto fail;
>
> - ret = drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
> - &v3d_cache_clean_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - hw_jobs_limit, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_cache_clean", v3d->drm.dev);
> + ret = v3d_cache_sched_init(v3d);
> if (ret)
> goto fail;
> }
>
> - ret = drm_sched_init(&v3d->queue[V3D_CPU].sched,
> - &v3d_cpu_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - 1, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_cpu", v3d->drm.dev);
> + ret = v3d_cpu_sched_init(v3d);
> if (ret)
> goto fail;
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 17:04 ` Boris Brezillon
@ 2025-01-23 4:37 ` Matthew Brost
2025-01-23 7:34 ` Philipp Stanner
0 siblings, 1 reply; 35+ messages in thread
From: Matthew Brost @ 2025-01-23 4:37 UTC (permalink / raw)
To: Boris Brezillon
Cc: Tvrtko Ursulin, Philipp Stanner, Alex Deucher,
Christian König, Xinhui Pan, David Airlie, Simona Vetter,
Lucas Stach, Russell King, Christian Gmeiner, Frank Binns,
Matt Coster, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, Karol Herbst, Lyude Paul,
Danilo Krummrich, Rob Herring, Steven Price, Liviu Dudau,
Luben Tuikov, Philipp Stanner, Melissa Wen, Maíra Canal,
Lucas De Marchi, Thomas Hellström, Rodrigo Vivi,
Sunil Khatri, Lijo Lazar, Mario Limonciello, Ma Jun, Yunxiang Li,
amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
On Wed, Jan 22, 2025 at 06:04:58PM +0100, Boris Brezillon wrote:
> On Wed, 22 Jan 2025 16:14:59 +0000
> Tvrtko Ursulin <tursulin@ursulin.net> wrote:
>
> > On 22/01/2025 15:51, Boris Brezillon wrote:
> > > On Wed, 22 Jan 2025 15:08:20 +0100
> > > Philipp Stanner <phasta@kernel.org> wrote:
> > >
> > >> --- a/drivers/gpu/drm/panthor/panthor_sched.c
> > >> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> > >> @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group *group,
> > >> const struct drm_panthor_queue_create *args)
> > >> {
> > >> struct drm_gpu_scheduler *drm_sched;
> > >> + struct drm_sched_init_params sched_params;
> > >
> > > nit: Could we use a struct initializer instead of a
> > > memset(0)+field-assignment?
> > >
> > > struct drm_sched_init_params sched_params = {
>
> Actually, you can even make it const if it's not modified after the
> declaration.
>
> > > .ops = &panthor_queue_sched_ops,
> > > .submit_wq = group->ptdev->scheduler->wq,
> > > .num_rqs = 1,
> > > .credit_limit = args->ringbuf_size / sizeof(u64),
> > > .hang_limit = 0,
> > > .timeout = msecs_to_jiffies(JOB_TIMEOUT_MS),
> > > .timeout_wq = group->ptdev->reset.wq,
> > > .name = "panthor-queue",
> > > .dev = group->ptdev->base.dev,
> > > };
> >
+2
Matt
> > +1 on this as a general approach for the whole series. And I'd drop the
> > explicit zeros and NULLs. Memsets could then go too.
> >
> > Regards,
> >
> > Tvrtko
> >
> > >
> > > The same comment applies to the panfrost changes BTW.
> > >
> > >> struct panthor_queue *queue;
> > >> int ret;
> > >>
> > >> @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group *group,
> > >> if (!queue)
> > >> return ERR_PTR(-ENOMEM);
> > >>
> > >> + memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> > >> +
> > >> queue->fence_ctx.id = dma_fence_context_alloc(1);
> > >> spin_lock_init(&queue->fence_ctx.lock);
> > >> INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
> > >> @@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group *group,
> > >> if (ret)
> > >> goto err_free_queue;
> > >>
> > >> + sched_params.ops = &panthor_queue_sched_ops;
> > >> + sched_params.submit_wq = group->ptdev->scheduler->wq;
> > >> + sched_params.num_rqs = 1;
> > >> /*
> > >> - * Credit limit argument tells us the total number of instructions
> > >> + * The credit limit argument tells us the total number of instructions
> > >> * across all CS slots in the ringbuffer, with some jobs requiring
> > >> * twice as many as others, depending on their profiling status.
> > >> */
> > >> - ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
> > >> - group->ptdev->scheduler->wq, 1,
> > >> - args->ringbuf_size / sizeof(u64),
> > >> - 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
> > >> - group->ptdev->reset.wq,
> > >> - NULL, "panthor-queue", group->ptdev->base.dev);
> > >> + sched_params.credit_limit = args->ringbuf_size / sizeof(u64);
> > >> + sched_params.hang_limit = 0;
> > >> + sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> > >> + sched_params.timeout_wq = group->ptdev->reset.wq;
> > >> + sched_params.score = NULL;
> > >> + sched_params.name = "panthor-queue";
> > >> + sched_params.dev = group->ptdev->base.dev;
> > >> +
> > >> + ret = drm_sched_init(&queue->scheduler, &sched_params);
> > >> if (ret)
> > >> goto err_free_queue;
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 17:16 ` Boris Brezillon
@ 2025-01-23 7:33 ` Philipp Stanner
2025-01-23 8:23 ` Boris Brezillon
2025-01-23 9:29 ` Danilo Krummrich
0 siblings, 2 replies; 35+ messages in thread
From: Philipp Stanner @ 2025-01-23 7:33 UTC (permalink / raw)
To: Boris Brezillon, Philipp Stanner
Cc: Alex Deucher, Christian König, Xinhui Pan, David Airlie,
Simona Vetter, Lucas Stach, Russell King, Christian Gmeiner,
Frank Binns, Matt Coster, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, Karol Herbst,
Lyude Paul, Danilo Krummrich, Rob Herring, Steven Price,
Liviu Dudau, Luben Tuikov, Matthew Brost, Melissa Wen,
Maíra Canal, Lucas De Marchi, Thomas Hellström,
Rodrigo Vivi, Sunil Khatri, Lijo Lazar, Mario Limonciello, Ma Jun,
Yunxiang Li, amd-gfx, dri-devel, linux-kernel, etnaviv, lima,
linux-arm-msm, freedreno, nouveau, intel-xe
On Wed, 2025-01-22 at 18:16 +0100, Boris Brezillon wrote:
> On Wed, 22 Jan 2025 15:08:20 +0100
> Philipp Stanner <phasta@kernel.org> wrote:
>
> > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > - const struct drm_sched_backend_ops *ops,
> > - struct workqueue_struct *submit_wq,
> > - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> > - long timeout, struct workqueue_struct *timeout_wq,
> > - atomic_t *score, const char *name, struct device *dev);
> > + const struct drm_sched_init_params *params);
>
>
> Another nit: indenting is messed up here.
That was done on purpose.
I never got why so many like to indent to the opening brackets,
because:
1. The kernel coding guideline does not demand it
2. It mixes tabs with spaces
3. It doesn't create an identical level of indentation
4. It wastes a huge amount of space and does not solve the problem of
long names, but might even make it worse:
https://elixir.bootlin.com/linux/v6.13-rc3/source/drivers/gpu/drm/scheduler/sched_main.c#L1296
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-23 4:37 ` Matthew Brost
@ 2025-01-23 7:34 ` Philipp Stanner
0 siblings, 0 replies; 35+ messages in thread
From: Philipp Stanner @ 2025-01-23 7:34 UTC (permalink / raw)
To: Matthew Brost, Boris Brezillon
Cc: Tvrtko Ursulin, Philipp Stanner, Alex Deucher,
Christian König, Xinhui Pan, David Airlie, Simona Vetter,
Lucas Stach, Russell King, Christian Gmeiner, Frank Binns,
Matt Coster, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, Karol Herbst, Lyude Paul,
Danilo Krummrich, Rob Herring, Steven Price, Liviu Dudau,
Luben Tuikov, Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li, amd-gfx, dri-devel,
linux-kernel, etnaviv, lima, linux-arm-msm, freedreno, nouveau,
intel-xe
On Wed, 2025-01-22 at 20:37 -0800, Matthew Brost wrote:
> On Wed, Jan 22, 2025 at 06:04:58PM +0100, Boris Brezillon wrote:
> > On Wed, 22 Jan 2025 16:14:59 +0000
> > Tvrtko Ursulin <tursulin@ursulin.net> wrote:
> >
> > > On 22/01/2025 15:51, Boris Brezillon wrote:
> > > > On Wed, 22 Jan 2025 15:08:20 +0100
> > > > Philipp Stanner <phasta@kernel.org> wrote:
> > > >
> > > > > --- a/drivers/gpu/drm/panthor/panthor_sched.c
> > > > > +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> > > > > @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group
> > > > > *group,
> > > > > const struct drm_panthor_queue_create
> > > > > *args)
> > > > > {
> > > > > struct drm_gpu_scheduler *drm_sched;
> > > > > + struct drm_sched_init_params sched_params;
> > > >
> > > > nit: Could we use a struct initializer instead of a
> > > > memset(0)+field-assignment?
> > > >
> > > > struct drm_sched_init_params sched_params = {
> >
> > Actually, you can even make it const if it's not modified after the
> > declaration.
> >
> > > > .ops = &panthor_queue_sched_ops,
> > > > .submit_wq = group->ptdev->scheduler->wq,
> > > > .num_rqs = 1,
> > > > .credit_limit = args->ringbuf_size /
> > > > sizeof(u64),
> > > > .hang_limit = 0,
> > > > .timeout = msecs_to_jiffies(JOB_TIMEOUT_MS),
> > > > .timeout_wq = group->ptdev->reset.wq,
> > > > .name = "panthor-queue",
> > > > .dev = group->ptdev->base.dev,
> > > > };
> > >
>
> +2
Yup, getting rid of memset(), as Danilo suggested, is surely a
good idea.
I personally don't like mixing declaration and initialization where it
can be avoided (readability), but having it const is probably a good
argument.
P.
>
> Matt
>
> > > +1 on this as a general approach for the whole series. And I'd
> > > drop the
> > > explicit zeros and NULLs. Memsets could then go too.
> > >
> > > Regards,
> > >
> > > Tvrtko
> > >
> > > >
> > > > The same comment applies to the panfrost changes BTW.
> > > >
> > > > > struct panthor_queue *queue;
> > > > > int ret;
> > > > >
> > > > > @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group
> > > > > *group,
> > > > > if (!queue)
> > > > > return ERR_PTR(-ENOMEM);
> > > > >
> > > > > + memset(&sched_params, 0, sizeof(struct
> > > > > drm_sched_init_params));
> > > > > +
> > > > > queue->fence_ctx.id = dma_fence_context_alloc(1);
> > > > > spin_lock_init(&queue->fence_ctx.lock);
> > > > > INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
> > > > > @@ -3341,17 +3344,23 @@ group_create_queue(struct
> > > > > panthor_group *group,
> > > > > if (ret)
> > > > > goto err_free_queue;
> > > > >
> > > > > + sched_params.ops = &panthor_queue_sched_ops;
> > > > > + sched_params.submit_wq = group->ptdev->scheduler-
> > > > > >wq;
> > > > > + sched_params.num_rqs = 1;
> > > > > /*
> > > > > - * Credit limit argument tells us the total number
> > > > > of instructions
> > > > > + * The credit limit argument tells us the total
> > > > > number of instructions
> > > > > * across all CS slots in the ringbuffer, with some
> > > > > jobs requiring
> > > > > * twice as many as others, depending on their
> > > > > profiling status.
> > > > > */
> > > > > - ret = drm_sched_init(&queue->scheduler,
> > > > > &panthor_queue_sched_ops,
> > > > > - group->ptdev->scheduler->wq, 1,
> > > > > - args->ringbuf_size /
> > > > > sizeof(u64),
> > > > > - 0,
> > > > > msecs_to_jiffies(JOB_TIMEOUT_MS),
> > > > > - group->ptdev->reset.wq,
> > > > > - NULL, "panthor-queue", group-
> > > > > >ptdev->base.dev);
> > > > > + sched_params.credit_limit = args->ringbuf_size /
> > > > > sizeof(u64);
> > > > > + sched_params.hang_limit = 0;
> > > > > + sched_params.timeout =
> > > > > msecs_to_jiffies(JOB_TIMEOUT_MS);
> > > > > + sched_params.timeout_wq = group->ptdev->reset.wq;
> > > > > + sched_params.score = NULL;
> > > > > + sched_params.name = "panthor-queue";
> > > > > + sched_params.dev = group->ptdev->base.dev;
> > > > > +
> > > > > + ret = drm_sched_init(&queue->scheduler,
> > > > > &sched_params);
> > > > > if (ret)
> > > > > goto err_free_queue;
> >
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-22 22:07 ` Maíra Canal
@ 2025-01-23 8:10 ` Philipp Stanner
2025-01-23 8:39 ` Philipp Stanner
2025-01-23 11:10 ` Maíra Canal
0 siblings, 2 replies; 35+ messages in thread
From: Philipp Stanner @ 2025-01-23 8:10 UTC (permalink / raw)
To: Maíra Canal, Philipp Stanner, Alex Deucher,
Christian König, Xinhui Pan, David Airlie, Simona Vetter,
Lucas Stach, Russell King, Christian Gmeiner, Frank Binns,
Matt Coster, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, Karol Herbst, Lyude Paul,
Danilo Krummrich, Boris Brezillon, Rob Herring, Steven Price,
Liviu Dudau, Luben Tuikov, Matthew Brost, Melissa Wen,
Lucas De Marchi, Thomas Hellström, Rodrigo Vivi,
Sunil Khatri, Lijo Lazar, Mario Limonciello, Ma Jun, Yunxiang Li
Cc: amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
On Wed, 2025-01-22 at 19:07 -0300, Maíra Canal wrote:
> Hi Philipp,
>
> On 22/01/25 11:08, Philipp Stanner wrote:
> > drm_sched_init() has a great many parameters and upcoming new
> > functionality for the scheduler might add even more. Generally, the
> > great number of parameters reduces readability and has already
> > caused
> > one misnaming in:
> >
> > commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in
> > nouveau_sched_init()").
> >
> > Introduce a new struct for the scheduler init parameters and port
> > all
> > users.
> >
> > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > ---
> > Howdy,
> >
> > I have a patch-series in the pipe that will add a `flags` argument
> > to
> > drm_sched_init(). I thought it would be wise to first rework the
> > API as
> > detailed in this patch. It's really a lot of parameters by now, and
> > I
> > would expect that it might get more and more over the years for
> > special
> > use cases etc.
> >
> > Regards,
> > P.
> > ---
> > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +++-
> > drivers/gpu/drm/etnaviv/etnaviv_sched.c | 20 ++-
> > drivers/gpu/drm/imagination/pvr_queue.c | 21 +++-
> > drivers/gpu/drm/lima/lima_sched.c | 21 +++-
> > drivers/gpu/drm/msm/msm_ringbuffer.c | 22 ++--
> > drivers/gpu/drm/nouveau/nouveau_sched.c | 20 ++-
> > drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++--
> > drivers/gpu/drm/panthor/panthor_mmu.c | 18 ++-
> > drivers/gpu/drm/panthor/panthor_sched.c | 23 ++--
> > drivers/gpu/drm/scheduler/sched_main.c | 53 +++-----
> > drivers/gpu/drm/v3d/v3d_sched.c | 135 +++++++++++++++-
> > -----
> > drivers/gpu/drm/xe/xe_execlist.c | 20 ++-
> > drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++-
> > include/drm/gpu_scheduler.h | 35 +++++-
> > 14 files changed, 311 insertions(+), 139 deletions(-)
> >
>
> [...]
>
> > diff --git a/drivers/gpu/drm/v3d/v3d_sched.c
> > b/drivers/gpu/drm/v3d/v3d_sched.c
> > index 99ac4995b5a1..716e6d074d87 100644
> > --- a/drivers/gpu/drm/v3d/v3d_sched.c
> > +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> > @@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops
> > v3d_cpu_sched_ops = {
> > .free_job = v3d_cpu_job_free
> > };
> >
> > +/*
> > + * v3d's scheduler instances are all identical, except for ops and
> > name.
> > + */
> > +static void
> > +v3d_common_sched_init(struct drm_sched_init_params *params, struct
> > device *dev)
> > +{
> > + memset(params, 0, sizeof(struct drm_sched_init_params));
> > +
> > + params->submit_wq = NULL; /* Use the system_wq. */
> > + params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > + params->credit_limit = 1;
> > + params->hang_limit = 0;
> > + params->timeout = msecs_to_jiffies(500);
> > + params->timeout_wq = NULL; /* Use the system_wq. */
> > + params->score = NULL;
> > + params->dev = dev;
> > +}
>
> Could we use only one function that takes struct v3d_dev *v3d, enum
> v3d_queue, and sched_ops as arguments (instead of one function per
> queue)? You can get the name of the scheduler by concatenating "v3d_"
> to
> the return of v3d_queue_to_string().
>
> I believe it would make the code much simpler.
Hello,
so just to get that right:
You'd like to have one universal function that switch-cases over an
enum, sets the ops and creates the name with string concatenation?
I'm not convinced that this is simpler than a few small functions, but
it's not my component, so…
Whatever we do will be simpler than the existing code, though. Right
now no reader can see at first glance whether all those schedulers are
identically parametrized or not.
P.
>
> Best Regards,
> - Maíra
>
> > +
> > +static int
> > +v3d_bin_sched_init(struct v3d_dev *v3d)
> > +{
> > + struct drm_sched_init_params params;
> > +
> > + v3d_common_sched_init(&params, v3d->drm.dev);
> > + params.ops = &v3d_bin_sched_ops;
> > + params.name = "v3d_bin";
> > +
> > + return drm_sched_init(&v3d->queue[V3D_BIN].sched,
> > &params);
> > +}
> > +
> > +static int
> > +v3d_render_sched_init(struct v3d_dev *v3d)
> > +{
> > + struct drm_sched_init_params params;
> > +
> > + v3d_common_sched_init(&params, v3d->drm.dev);
> > + params.ops = &v3d_render_sched_ops;
> > + params.name = "v3d_render";
> > +
> > + return drm_sched_init(&v3d->queue[V3D_RENDER].sched,
> > &params);
> > +}
> > +
> > +static int
> > +v3d_tfu_sched_init(struct v3d_dev *v3d)
> > +{
> > + struct drm_sched_init_params params;
> > +
> > + v3d_common_sched_init(&params, v3d->drm.dev);
> > + params.ops = &v3d_tfu_sched_ops;
> > + params.name = "v3d_tfu";
> > +
> > + return drm_sched_init(&v3d->queue[V3D_TFU].sched,
> > &params);
> > +}
> > +
> > +static int
> > +v3d_csd_sched_init(struct v3d_dev *v3d)
> > +{
> > + struct drm_sched_init_params params;
> > +
> > + v3d_common_sched_init(&params, v3d->drm.dev);
> > + params.ops = &v3d_csd_sched_ops;
> > + params.name = "v3d_csd";
> > +
> > + return drm_sched_init(&v3d->queue[V3D_CSD].sched,
> > &params);
> > +}
> > +
> > +static int
> > +v3d_cache_sched_init(struct v3d_dev *v3d)
> > +{
> > + struct drm_sched_init_params params;
> > +
> > + v3d_common_sched_init(&params, v3d->drm.dev);
> > + params.ops = &v3d_cache_clean_sched_ops;
> > + params.name = "v3d_cache_clean";
> > +
> > + return drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
> > &params);
> > +}
> > +
> > +static int
> > +v3d_cpu_sched_init(struct v3d_dev *v3d)
> > +{
> > + struct drm_sched_init_params params;
> > +
> > + v3d_common_sched_init(&params, v3d->drm.dev);
> > + params.ops = &v3d_cpu_sched_ops;
> > + params.name = "v3d_cpu";
> > +
> > + return drm_sched_init(&v3d->queue[V3D_CPU].sched,
> > &params);
> > +}
> > +
> > int
> > v3d_sched_init(struct v3d_dev *v3d)
> > {
> > - int hw_jobs_limit = 1;
> > - int job_hang_limit = 0;
> > - int hang_limit_ms = 500;
> > int ret;
> >
> > - ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
> > - &v3d_bin_sched_ops, NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > - hw_jobs_limit, job_hang_limit,
> > - msecs_to_jiffies(hang_limit_ms),
> > NULL,
> > - NULL, "v3d_bin", v3d->drm.dev);
> > + ret = v3d_bin_sched_init(v3d);
> > if (ret)
> > return ret;
> >
> > - ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
> > - &v3d_render_sched_ops, NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > - hw_jobs_limit, job_hang_limit,
> > - msecs_to_jiffies(hang_limit_ms),
> > NULL,
> > - NULL, "v3d_render", v3d->drm.dev);
> > + ret = v3d_render_sched_init(v3d);
> > if (ret)
> > goto fail;
> >
> > - ret = drm_sched_init(&v3d->queue[V3D_TFU].sched,
> > - &v3d_tfu_sched_ops, NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > - hw_jobs_limit, job_hang_limit,
> > - msecs_to_jiffies(hang_limit_ms),
> > NULL,
> > - NULL, "v3d_tfu", v3d->drm.dev);
> > + ret = v3d_tfu_sched_init(v3d);
> > if (ret)
> > goto fail;
> >
> > if (v3d_has_csd(v3d)) {
> > - ret = drm_sched_init(&v3d->queue[V3D_CSD].sched,
> > - &v3d_csd_sched_ops, NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > - hw_jobs_limit,
> > job_hang_limit,
> > -
> > msecs_to_jiffies(hang_limit_ms), NULL,
> > - NULL, "v3d_csd", v3d-
> > >drm.dev);
> > + ret = v3d_csd_sched_init(v3d);
> > if (ret)
> > goto fail;
> >
> > - ret = drm_sched_init(&v3d-
> > >queue[V3D_CACHE_CLEAN].sched,
> > - &v3d_cache_clean_sched_ops,
> > NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > - hw_jobs_limit,
> > job_hang_limit,
> > -
> > msecs_to_jiffies(hang_limit_ms), NULL,
> > - NULL, "v3d_cache_clean", v3d-
> > >drm.dev);
> > + ret = v3d_cache_sched_init(v3d);
> > if (ret)
> > goto fail;
> > }
> >
> > - ret = drm_sched_init(&v3d->queue[V3D_CPU].sched,
> > - &v3d_cpu_sched_ops, NULL,
> > - DRM_SCHED_PRIORITY_COUNT,
> > - 1, job_hang_limit,
> > - msecs_to_jiffies(hang_limit_ms),
> > NULL,
> > - NULL, "v3d_cpu", v3d->drm.dev);
> > + ret = v3d_cpu_sched_init(v3d);
> > if (ret)
> > goto fail;
> >
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-23 7:33 ` Philipp Stanner
@ 2025-01-23 8:23 ` Boris Brezillon
2025-01-23 9:29 ` Danilo Krummrich
1 sibling, 0 replies; 35+ messages in thread
From: Boris Brezillon @ 2025-01-23 8:23 UTC (permalink / raw)
To: Philipp Stanner
Cc: phasta, Alex Deucher, Christian König, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Danilo Krummrich, Rob Herring,
Steven Price, Liviu Dudau, Luben Tuikov, Matthew Brost,
Melissa Wen, Maíra Canal, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li, amd-gfx, dri-devel,
linux-kernel, etnaviv, lima, linux-arm-msm, freedreno, nouveau,
intel-xe
On Thu, 23 Jan 2025 08:33:01 +0100
Philipp Stanner <phasta@mailbox.org> wrote:
> On Wed, 2025-01-22 at 18:16 +0100, Boris Brezillon wrote:
> > On Wed, 22 Jan 2025 15:08:20 +0100
> > Philipp Stanner <phasta@kernel.org> wrote:
> >
> > > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > - const struct drm_sched_backend_ops *ops,
> > > - struct workqueue_struct *submit_wq,
> > > - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> > > - long timeout, struct workqueue_struct *timeout_wq,
> > > - atomic_t *score, const char *name, struct device *dev);
> > > + const struct drm_sched_init_params *params);
> >
> >
> > Another nit: indenting is messed up here.
>
> That was done on purpose.
>
> I never got why so many like to indent to the opening brackets,
> because:
> 1. The kernel coding guideline does not demand it
> 2. It mixes tabs with spaces
> 3. It doesn't create an identical level of indentation
> 4. It wastes a huge amount of space and does not solve the problem of
> long names, but might even make it worse:
> https://elixir.bootlin.com/linux/v6.13-rc3/source/drivers/gpu/drm/scheduler/sched_main.c#L1296
It's mostly a matter of keeping things consistent in a code base. I
don't really have strong opinions when it comes to coding style, but I
always try to follow the rules in place in the file/subsystem/project
I'm contributing to, and clearly the pattern in this file is to align
the extra lines of arguments on the first argument...
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-23 8:10 ` Philipp Stanner
@ 2025-01-23 8:39 ` Philipp Stanner
2025-01-23 11:10 ` Maíra Canal
1 sibling, 0 replies; 35+ messages in thread
From: Philipp Stanner @ 2025-01-23 8:39 UTC (permalink / raw)
To: Maíra Canal, Philipp Stanner, Alex Deucher,
Christian König, Xinhui Pan, David Airlie, Simona Vetter,
Lucas Stach, Russell King, Christian Gmeiner, Frank Binns,
Matt Coster, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, Karol Herbst, Lyude Paul,
Danilo Krummrich, Boris Brezillon, Rob Herring, Steven Price,
Liviu Dudau, Luben Tuikov, Matthew Brost, Melissa Wen,
Lucas De Marchi, Thomas Hellström, Rodrigo Vivi,
Sunil Khatri, Lijo Lazar, Mario Limonciello, Ma Jun, Yunxiang Li
Cc: amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
On Thu, 2025-01-23 at 09:10 +0100, Philipp Stanner wrote:
> On Wed, 2025-01-22 at 19:07 -0300, Maíra Canal wrote:
> > Hi Philipp,
> >
> > On 22/01/25 11:08, Philipp Stanner wrote:
> > > drm_sched_init() has a great many parameters and upcoming new
> > > functionality for the scheduler might add even more. Generally,
> > > the
> > > great number of parameters reduces readability and has already
> > > caused
> > > one misnaming in:
> > >
> > > commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in
> > > nouveau_sched_init()").
> > >
> > > Introduce a new struct for the scheduler init parameters and port
> > > all
> > > users.
> > >
> > > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > > ---
> > > Howdy,
> > >
> > > I have a patch-series in the pipe that will add a `flags` argument to
> > > drm_sched_init(). I thought it would be wise to first rework the API as
> > > detailed in this patch. It's really a lot of parameters by now, and I
> > > would expect that it might get more and more over the years for special
> > > use cases etc.
> > >
> > > Regards,
> > > P.
> > > ---
> > > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 21 +++-
> > > drivers/gpu/drm/etnaviv/etnaviv_sched.c | 20 ++-
> > > drivers/gpu/drm/imagination/pvr_queue.c | 21 +++-
> > > drivers/gpu/drm/lima/lima_sched.c | 21 +++-
> > > drivers/gpu/drm/msm/msm_ringbuffer.c | 22 ++--
> > > drivers/gpu/drm/nouveau/nouveau_sched.c | 20 ++-
> > > drivers/gpu/drm/panfrost/panfrost_job.c | 22 ++--
> > > drivers/gpu/drm/panthor/panthor_mmu.c | 18 ++-
> > > drivers/gpu/drm/panthor/panthor_sched.c | 23 ++--
> > > drivers/gpu/drm/scheduler/sched_main.c | 53 +++-----
> > > drivers/gpu/drm/v3d/v3d_sched.c | 135 +++++++++++++++------
> > > drivers/gpu/drm/xe/xe_execlist.c | 20 ++-
> > > drivers/gpu/drm/xe/xe_gpu_scheduler.c | 19 ++-
> > > include/drm/gpu_scheduler.h | 35 +++++-
> > > 14 files changed, 311 insertions(+), 139 deletions(-)
> > >
> >
> > [...]
> >
> > > diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
> > > index 99ac4995b5a1..716e6d074d87 100644
> > > --- a/drivers/gpu/drm/v3d/v3d_sched.c
> > > +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> > > @@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops v3d_cpu_sched_ops = {
> > > .free_job = v3d_cpu_job_free
> > > };
> > >
> > > +/*
> > > + * v3d's scheduler instances are all identical, except for ops and name.
> > > + */
> > > +static void
> > > +v3d_common_sched_init(struct drm_sched_init_params *params, struct device *dev)
> > > +{
> > > + memset(params, 0, sizeof(struct drm_sched_init_params));
> > > +
> > > + params->submit_wq = NULL; /* Use the system_wq. */
> > > + params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > + params->credit_limit = 1;
> > > + params->hang_limit = 0;
> > > + params->timeout = msecs_to_jiffies(500);
> > > + params->timeout_wq = NULL; /* Use the system_wq. */
> > > + params->score = NULL;
> > > + params->dev = dev;
> > > +}
> >
> > Could we use only one function that takes struct v3d_dev *v3d, enum
> > v3d_queue, and sched_ops as arguments (instead of one function per
> > queue)? You can get the name of the scheduler by concatenating "v3d_"
> > to the return of v3d_queue_to_string().
> >
> > I believe it would make the code much simpler.
>
> Hello,
>
> so just to get that right:
> You'd like to have one universal function that switch-cases over an
> enum, sets the ops and creates the name with string concatenation?
Oh, and here's another issue:
The @name string has lifetime issues to take into account: it must
live as long as the scheduler instance.
In your mind, where should the memory for the strings you're
concatenating live, and how should their lifetime be managed?
Currently they're in the TEXT segment, which is fine.
P.
>
> I'm not convinced that this is simpler than a few small functions, but
> it's not my component, so…
>
> Whatever we'll do will be simpler than the existing code, though. Right
> now no reader can see at first glance whether all those schedulers are
> identically parametrized or not.
>
> P.
>
>
> >
> > Best Regards,
> > - Maíra
> >
> > > +
> > > +static int
> > > +v3d_bin_sched_init(struct v3d_dev *v3d)
> > > +{
> > > + struct drm_sched_init_params params;
> > > +
> > > + v3d_common_sched_init(&params, v3d->drm.dev);
> > > + params.ops = &v3d_bin_sched_ops;
> > > + params.name = "v3d_bin";
> > > +
> > > + return drm_sched_init(&v3d->queue[V3D_BIN].sched, &params);
> > > +}
> > > +
> > > +static int
> > > +v3d_render_sched_init(struct v3d_dev *v3d)
> > > +{
> > > + struct drm_sched_init_params params;
> > > +
> > > + v3d_common_sched_init(&params, v3d->drm.dev);
> > > + params.ops = &v3d_render_sched_ops;
> > > + params.name = "v3d_render";
> > > +
> > > + return drm_sched_init(&v3d->queue[V3D_RENDER].sched, &params);
> > > +}
> > > +
> > > +static int
> > > +v3d_tfu_sched_init(struct v3d_dev *v3d)
> > > +{
> > > + struct drm_sched_init_params params;
> > > +
> > > + v3d_common_sched_init(&params, v3d->drm.dev);
> > > + params.ops = &v3d_tfu_sched_ops;
> > > + params.name = "v3d_tfu";
> > > +
> > > + return drm_sched_init(&v3d->queue[V3D_TFU].sched, &params);
> > > +}
> > > +
> > > +static int
> > > +v3d_csd_sched_init(struct v3d_dev *v3d)
> > > +{
> > > + struct drm_sched_init_params params;
> > > +
> > > + v3d_common_sched_init(&params, v3d->drm.dev);
> > > + params.ops = &v3d_csd_sched_ops;
> > > + params.name = "v3d_csd";
> > > +
> > > + return drm_sched_init(&v3d->queue[V3D_CSD].sched, &params);
> > > +}
> > > +
> > > +static int
> > > +v3d_cache_sched_init(struct v3d_dev *v3d)
> > > +{
> > > + struct drm_sched_init_params params;
> > > +
> > > + v3d_common_sched_init(&params, v3d->drm.dev);
> > > + params.ops = &v3d_cache_clean_sched_ops;
> > > + params.name = "v3d_cache_clean";
> > > +
> > > + return drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched, &params);
> > > +}
> > > +
> > > +static int
> > > +v3d_cpu_sched_init(struct v3d_dev *v3d)
> > > +{
> > > + struct drm_sched_init_params params;
> > > +
> > > + v3d_common_sched_init(&params, v3d->drm.dev);
> > > + params.ops = &v3d_cpu_sched_ops;
> > > + params.name = "v3d_cpu";
> > > +
> > > + return drm_sched_init(&v3d->queue[V3D_CPU].sched, &params);
> > > +}
> > > +
> > > int
> > > v3d_sched_init(struct v3d_dev *v3d)
> > > {
> > > - int hw_jobs_limit = 1;
> > > - int job_hang_limit = 0;
> > > - int hang_limit_ms = 500;
> > > int ret;
> > >
> > > - ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
> > > - &v3d_bin_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - hw_jobs_limit, job_hang_limit,
> > > - msecs_to_jiffies(hang_limit_ms), NULL,
> > > - NULL, "v3d_bin", v3d->drm.dev);
> > > + ret = v3d_bin_sched_init(v3d);
> > > if (ret)
> > > return ret;
> > >
> > > - ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
> > > - &v3d_render_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - hw_jobs_limit, job_hang_limit,
> > > - msecs_to_jiffies(hang_limit_ms), NULL,
> > > - NULL, "v3d_render", v3d->drm.dev);
> > > + ret = v3d_render_sched_init(v3d);
> > > if (ret)
> > > goto fail;
> > >
> > > - ret = drm_sched_init(&v3d->queue[V3D_TFU].sched,
> > > - &v3d_tfu_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - hw_jobs_limit, job_hang_limit,
> > > - msecs_to_jiffies(hang_limit_ms), NULL,
> > > - NULL, "v3d_tfu", v3d->drm.dev);
> > > + ret = v3d_tfu_sched_init(v3d);
> > > if (ret)
> > > goto fail;
> > >
> > > if (v3d_has_csd(v3d)) {
> > > - ret = drm_sched_init(&v3d->queue[V3D_CSD].sched,
> > > - &v3d_csd_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - hw_jobs_limit, job_hang_limit,
> > > - msecs_to_jiffies(hang_limit_ms), NULL,
> > > - NULL, "v3d_csd", v3d->drm.dev);
> > > + ret = v3d_csd_sched_init(v3d);
> > > if (ret)
> > > goto fail;
> > >
> > > - ret = drm_sched_init(&v3d->queue[V3D_CACHE_CLEAN].sched,
> > > - &v3d_cache_clean_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - hw_jobs_limit, job_hang_limit,
> > > - msecs_to_jiffies(hang_limit_ms), NULL,
> > > - NULL, "v3d_cache_clean", v3d->drm.dev);
> > > + ret = v3d_cache_sched_init(v3d);
> > > if (ret)
> > > goto fail;
> > > }
> > >
> > > - ret = drm_sched_init(&v3d->queue[V3D_CPU].sched,
> > > - &v3d_cpu_sched_ops, NULL,
> > > - DRM_SCHED_PRIORITY_COUNT,
> > > - 1, job_hang_limit,
> > > - msecs_to_jiffies(hang_limit_ms), NULL,
> > > - NULL, "v3d_cpu", v3d->drm.dev);
> > > + ret = v3d_cpu_sched_init(v3d);
> > > if (ret)
> > > goto fail;
> > >
> >
>
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-23 7:33 ` Philipp Stanner
2025-01-23 8:23 ` Boris Brezillon
@ 2025-01-23 9:29 ` Danilo Krummrich
2025-01-23 9:35 ` Philipp Stanner
1 sibling, 1 reply; 35+ messages in thread
From: Danilo Krummrich @ 2025-01-23 9:29 UTC (permalink / raw)
To: phasta
Cc: Boris Brezillon, Alex Deucher, Christian König, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Rob Herring, Steven Price, Liviu Dudau,
Luben Tuikov, Matthew Brost, Melissa Wen, Maíra Canal,
Lucas De Marchi, Thomas Hellström, Rodrigo Vivi,
Sunil Khatri, Lijo Lazar, Mario Limonciello, Ma Jun, Yunxiang Li,
amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
On Thu, Jan 23, 2025 at 08:33:01AM +0100, Philipp Stanner wrote:
> On Wed, 2025-01-22 at 18:16 +0100, Boris Brezillon wrote:
> > On Wed, 22 Jan 2025 15:08:20 +0100
> > Philipp Stanner <phasta@kernel.org> wrote:
> >
> > > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > - const struct drm_sched_backend_ops *ops,
> > > - struct workqueue_struct *submit_wq,
> > > - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> > > - long timeout, struct workqueue_struct *timeout_wq,
> > > - atomic_t *score, const char *name, struct device *dev);
> > > + const struct drm_sched_init_params *params);
> >
> >
> > Another nit: indenting is messed up here.
>
> That was done on purpose.
Let's not change this convention; it's used all over the kernel tree,
including the GPU scheduler. People are used to reading code formatted
this way, and attempting to change it would make the formatting
inconsistent.
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-23 9:29 ` Danilo Krummrich
@ 2025-01-23 9:35 ` Philipp Stanner
2025-01-23 9:55 ` Danilo Krummrich
2025-01-23 10:57 ` Tvrtko Ursulin
0 siblings, 2 replies; 35+ messages in thread
From: Philipp Stanner @ 2025-01-23 9:35 UTC (permalink / raw)
To: Danilo Krummrich, phasta
Cc: Boris Brezillon, Alex Deucher, Christian König, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Rob Herring, Steven Price, Liviu Dudau,
Luben Tuikov, Matthew Brost, Melissa Wen, Maíra Canal,
Lucas De Marchi, Thomas Hellström, Rodrigo Vivi,
Sunil Khatri, Lijo Lazar, Mario Limonciello, Ma Jun, Yunxiang Li,
amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
On Thu, 2025-01-23 at 10:29 +0100, Danilo Krummrich wrote:
> On Thu, Jan 23, 2025 at 08:33:01AM +0100, Philipp Stanner wrote:
> > On Wed, 2025-01-22 at 18:16 +0100, Boris Brezillon wrote:
> > > On Wed, 22 Jan 2025 15:08:20 +0100
> > > Philipp Stanner <phasta@kernel.org> wrote:
> > >
> > > > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > > - const struct drm_sched_backend_ops *ops,
> > > > - struct workqueue_struct *submit_wq,
> > > > - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> > > > - long timeout, struct workqueue_struct *timeout_wq,
> > > > - atomic_t *score, const char *name, struct device *dev);
> > > > + const struct drm_sched_init_params *params);
> > >
> > >
> > > Another nit: indenting is messed up here.
> >
> > That was done on purpose.
>
> Let's not change this convention, it's used all over the kernel tree,
> including the GPU scheduler. People are used to read code that is
> formatted this way, plus the attempt of changing it will make code
> formatting inconsistent.
Both the tree and this file are already inconsistent in this regard.
Anyway, what is your proposed solution to ridiculous nonsense like
this?
https://elixir.bootlin.com/linux/v6.13-rc3/source/drivers/gpu/drm/scheduler/sched_main.c#L1296
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-23 9:35 ` Philipp Stanner
@ 2025-01-23 9:55 ` Danilo Krummrich
2025-01-23 10:57 ` Tvrtko Ursulin
1 sibling, 0 replies; 35+ messages in thread
From: Danilo Krummrich @ 2025-01-23 9:55 UTC (permalink / raw)
To: phasta
Cc: Boris Brezillon, Alex Deucher, Christian König, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Rob Herring, Steven Price, Liviu Dudau,
Luben Tuikov, Matthew Brost, Melissa Wen, Maíra Canal,
Lucas De Marchi, Thomas Hellström, Rodrigo Vivi,
Sunil Khatri, Lijo Lazar, Mario Limonciello, Ma Jun, Yunxiang Li,
amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
On Thu, Jan 23, 2025 at 10:35:43AM +0100, Philipp Stanner wrote:
> On Thu, 2025-01-23 at 10:29 +0100, Danilo Krummrich wrote:
> > On Thu, Jan 23, 2025 at 08:33:01AM +0100, Philipp Stanner wrote:
> > > On Wed, 2025-01-22 at 18:16 +0100, Boris Brezillon wrote:
> > > > On Wed, 22 Jan 2025 15:08:20 +0100
> > > > Philipp Stanner <phasta@kernel.org> wrote:
> > > >
> > > > > int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > > > - const struct drm_sched_backend_ops *ops,
> > > > > - struct workqueue_struct *submit_wq,
> > > > > - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
> > > > > - long timeout, struct workqueue_struct *timeout_wq,
> > > > > - atomic_t *score, const char *name, struct device *dev);
> > > > > + const struct drm_sched_init_params *params);
> > > >
> > > >
> > > > Another nit: indenting is messed up here.
> > >
> > > That was done on purpose.
> >
> > Let's not change this convention, it's used all over the kernel tree,
> > including the GPU scheduler. People are used to read code that is
> > formatted this way, plus the attempt of changing it will make code
> > formatting inconsistent.
>
> Both the tree and this file are already inconsistent in regards to
> this.
That's not really a good argument to make it more inconsistent, is it?
>
> Anyways, what is your proposed solution to ridiculous nonsense like
> this?
>
> https://elixir.bootlin.com/linux/v6.13-rc3/source/drivers/gpu/drm/scheduler/sched_main.c#L1296
I don't think this one needs a solution.
The kernel picked its convention long ago, and that convention does have
downsides. If it gets too bad, we can deviate from it at any time for the
specific case that would otherwise suffer, but we shouldn't do so in general.
* ✓ CI.Patch_applied: success for drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
` (4 preceding siblings ...)
2025-01-22 22:07 ` Maíra Canal
@ 2025-01-23 10:55 ` Patchwork
2025-01-23 10:55 ` ✗ CI.checkpatch: warning " Patchwork
` (6 subsequent siblings)
12 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2025-01-23 10:55 UTC (permalink / raw)
To: Philipp Stanner; +Cc: intel-xe
== Series Details ==
Series: drm/sched: Use struct for drm_sched_init() params
URL : https://patchwork.freedesktop.org/series/143883/
State : success
== Summary ==
=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: 79f0f76f7f03 drm-tip: 2025y-01m-23d-10h-49m-06s UTC integration manifest
=== git am output follows ===
Applying: drm/sched: Use struct for drm_sched_init() params
* ✗ CI.checkpatch: warning for drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
` (5 preceding siblings ...)
2025-01-23 10:55 ` ✓ CI.Patch_applied: success for " Patchwork
@ 2025-01-23 10:55 ` Patchwork
2025-01-23 10:56 ` ✓ CI.KUnit: success " Patchwork
` (5 subsequent siblings)
12 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2025-01-23 10:55 UTC (permalink / raw)
To: Philipp Stanner; +Cc: intel-xe
== Series Details ==
Series: drm/sched: Use struct for drm_sched_init() params
URL : https://patchwork.freedesktop.org/series/143883/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
30ab6715fc09baee6cc14cb3c89ad8858688d474
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit e4e7cf9ab13d54d11e6f53b89e0dd2f9e0b96e88
Author: Philipp Stanner <phasta@kernel.org>
Date: Wed Jan 22 15:08:20 2025 +0100
drm/sched: Use struct for drm_sched_init() params
drm_sched_init() has a great many parameters and upcoming new
functionality for the scheduler might add even more. Generally, the
great number of parameters reduces readability and has already caused
one missnaming in:
commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in nouveau_sched_init()").
Introduce a new struct for the scheduler init parameters and port all
users.
Signed-off-by: Philipp Stanner <phasta@kernel.org>
+ /mt/dim checkpatch 79f0f76f7f03b0667e10fe2fa3a75b2f8727b8de drm-intel
e4e7cf9ab13d drm/sched: Use struct for drm_sched_init() params
-:11: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#11:
commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in nouveau_sched_init()").
-:472: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#472: FILE: drivers/gpu/drm/scheduler/sched_main.c:1285:
+ sched->submit_wq = alloc_ordered_workqueue_lockdep_map(
total: 0 errors, 1 warnings, 1 checks, 686 lines checked
* ✓ CI.KUnit: success for drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
` (6 preceding siblings ...)
2025-01-23 10:55 ` ✗ CI.checkpatch: warning " Patchwork
@ 2025-01-23 10:56 ` Patchwork
2025-01-23 11:13 ` ✓ CI.Build: " Patchwork
` (4 subsequent siblings)
12 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2025-01-23 10:56 UTC (permalink / raw)
To: Philipp Stanner; +Cc: intel-xe
== Series Details ==
Series: drm/sched: Use struct for drm_sched_init() params
URL : https://patchwork.freedesktop.org/series/143883/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[10:55:50] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[10:55:54] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
156 | u64 ioread64_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
163 | u64 ioread64_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
170 | u64 ioread64be_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
178 | u64 ioread64be_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
[10:56:20] Starting KUnit Kernel (1/1)...
[10:56:20] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[10:56:20] ================== guc_buf (11 subtests) ===================
[10:56:20] [PASSED] test_smallest
[10:56:20] [PASSED] test_largest
[10:56:20] [PASSED] test_granular
[10:56:20] [PASSED] test_unique
[10:56:20] [PASSED] test_overlap
[10:56:20] [PASSED] test_reusable
[10:56:20] [PASSED] test_too_big
[10:56:20] [PASSED] test_flush
[10:56:20] [PASSED] test_lookup
[10:56:20] [PASSED] test_data
[10:56:20] [PASSED] test_class
[10:56:20] ===================== [PASSED] guc_buf =====================
[10:56:20] =================== guc_dbm (7 subtests) ===================
[10:56:20] [PASSED] test_empty
[10:56:20] [PASSED] test_default
[10:56:20] ======================== test_size ========================
[10:56:20] [PASSED] 4
[10:56:20] [PASSED] 8
[10:56:20] [PASSED] 32
[10:56:20] [PASSED] 256
[10:56:20] ==================== [PASSED] test_size ====================
[10:56:20] ======================= test_reuse ========================
[10:56:20] [PASSED] 4
[10:56:20] [PASSED] 8
[10:56:20] [PASSED] 32
[10:56:20] [PASSED] 256
[10:56:20] =================== [PASSED] test_reuse ====================
[10:56:20] =================== test_range_overlap ====================
[10:56:20] [PASSED] 4
[10:56:20] [PASSED] 8
[10:56:20] [PASSED] 32
[10:56:20] [PASSED] 256
[10:56:20] =============== [PASSED] test_range_overlap ================
[10:56:20] =================== test_range_compact ====================
[10:56:20] [PASSED] 4
[10:56:20] [PASSED] 8
[10:56:20] [PASSED] 32
[10:56:20] [PASSED] 256
[10:56:20] =============== [PASSED] test_range_compact ================
[10:56:20] ==================== test_range_spare =====================
[10:56:20] [PASSED] 4
[10:56:20] [PASSED] 8
[10:56:20] [PASSED] 32
[10:56:20] [PASSED] 256
[10:56:20] ================ [PASSED] test_range_spare =================
[10:56:20] ===================== [PASSED] guc_dbm =====================
[10:56:20] =================== guc_idm (6 subtests) ===================
[10:56:20] [PASSED] bad_init
[10:56:20] [PASSED] no_init
[10:56:20] [PASSED] init_fini
[10:56:20] [PASSED] check_used
[10:56:20] [PASSED] check_quota
[10:56:20] [PASSED] check_all
[10:56:20] ===================== [PASSED] guc_idm =====================
[10:56:20] ================== no_relay (3 subtests) ===================
[10:56:20] [PASSED] xe_drops_guc2pf_if_not_ready
[10:56:20] [PASSED] xe_drops_guc2vf_if_not_ready
[10:56:20] [PASSED] xe_rejects_send_if_not_ready
[10:56:20] ==================== [PASSED] no_relay =====================
[10:56:20] ================== pf_relay (14 subtests) ==================
[10:56:20] [PASSED] pf_rejects_guc2pf_too_short
[10:56:20] [PASSED] pf_rejects_guc2pf_too_long
[10:56:20] [PASSED] pf_rejects_guc2pf_no_payload
[10:56:20] [PASSED] pf_fails_no_payload
[10:56:20] [PASSED] pf_fails_bad_origin
[10:56:20] [PASSED] pf_fails_bad_type
[10:56:20] [PASSED] pf_txn_reports_error
[10:56:20] [PASSED] pf_txn_sends_pf2guc
[10:56:20] [PASSED] pf_sends_pf2guc
[10:56:20] [SKIPPED] pf_loopback_nop
[10:56:20] [SKIPPED] pf_loopback_echo
[10:56:20] [SKIPPED] pf_loopback_fail
[10:56:20] [SKIPPED] pf_loopback_busy
[10:56:20] [SKIPPED] pf_loopback_retry
[10:56:20] ==================== [PASSED] pf_relay =====================
[10:56:20] ================== vf_relay (3 subtests) ===================
[10:56:20] [PASSED] vf_rejects_guc2vf_too_short
[10:56:20] [PASSED] vf_rejects_guc2vf_too_long
[10:56:20] [PASSED] vf_rejects_guc2vf_no_payload
[10:56:20] ==================== [PASSED] vf_relay =====================
[10:56:20] ================= pf_service (11 subtests) =================
[10:56:20] [PASSED] pf_negotiate_any
[10:56:20] [PASSED] pf_negotiate_base_match
[10:56:20] [PASSED] pf_negotiate_base_newer
[10:56:20] [PASSED] pf_negotiate_base_next
[10:56:20] [SKIPPED] pf_negotiate_base_older
[10:56:20] [PASSED] pf_negotiate_base_prev
[10:56:20] [PASSED] pf_negotiate_latest_match
[10:56:20] [PASSED] pf_negotiate_latest_newer
[10:56:20] [PASSED] pf_negotiate_latest_next
[10:56:20] [SKIPPED] pf_negotiate_latest_older
[10:56:20] [SKIPPED] pf_negotiate_latest_prev
[10:56:20] =================== [PASSED] pf_service ====================
[10:56:20] ===================== lmtt (1 subtest) =====================
[10:56:20] ======================== test_ops =========================
[10:56:20] [PASSED] 2-level
[10:56:20] [PASSED] multi-level
[10:56:20] ==================== [PASSED] test_ops =====================
[10:56:20] ====================== [PASSED] lmtt =======================
[10:56:20] =================== xe_mocs (2 subtests) ===================
[10:56:20] ================ xe_live_mocs_kernel_kunit ================
[10:56:20] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[10:56:20] ================ xe_live_mocs_reset_kunit =================
[10:56:20] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[10:56:20] ==================== [SKIPPED] xe_mocs =====================
[10:56:20] ================= xe_migrate (2 subtests) ==================
[10:56:20] ================= xe_migrate_sanity_kunit =================
[10:56:20] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[10:56:20] ================== xe_validate_ccs_kunit ==================
[10:56:20] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[10:56:20] =================== [SKIPPED] xe_migrate ===================
[10:56:20] ================== xe_dma_buf (1 subtest) ==================
[10:56:20] ==================== xe_dma_buf_kunit =====================
[10:56:20] ================ [SKIPPED] xe_dma_buf_kunit ================
[10:56:20] =================== [SKIPPED] xe_dma_buf ===================
[10:56:20] ================= xe_bo_shrink (1 subtest) =================
[10:56:20] =================== xe_bo_shrink_kunit ====================
[10:56:20] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[10:56:20] ================== [SKIPPED] xe_bo_shrink ==================
[10:56:20] ==================== xe_bo (2 subtests) ====================
[10:56:20] ================== xe_ccs_migrate_kunit ===================
[10:56:20] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
stty: 'standard input': Inappropriate ioctl for device
[10:56:20] ==================== xe_bo_evict_kunit ====================
[10:56:20] =============== [SKIPPED] xe_bo_evict_kunit ================
[10:56:20] ===================== [SKIPPED] xe_bo ======================
[10:56:20] ==================== args (11 subtests) ====================
[10:56:20] [PASSED] count_args_test
[10:56:20] [PASSED] call_args_example
[10:56:20] [PASSED] call_args_test
[10:56:20] [PASSED] drop_first_arg_example
[10:56:20] [PASSED] drop_first_arg_test
[10:56:20] [PASSED] first_arg_example
[10:56:20] [PASSED] first_arg_test
[10:56:20] [PASSED] last_arg_example
[10:56:20] [PASSED] last_arg_test
[10:56:20] [PASSED] pick_arg_example
[10:56:20] [PASSED] sep_comma_example
[10:56:20] ====================== [PASSED] args =======================
[10:56:20] =================== xe_pci (2 subtests) ====================
[10:56:20] [PASSED] xe_gmdid_graphics_ip
[10:56:20] [PASSED] xe_gmdid_media_ip
[10:56:20] ===================== [PASSED] xe_pci ======================
[10:56:20] =================== xe_rtp (2 subtests) ====================
[10:56:20] =============== xe_rtp_process_to_sr_tests ================
[10:56:20] [PASSED] coalesce-same-reg
[10:56:20] [PASSED] no-match-no-add
[10:56:20] [PASSED] match-or
[10:56:20] [PASSED] match-or-xfail
[10:56:20] [PASSED] no-match-no-add-multiple-rules
[10:56:20] [PASSED] two-regs-two-entries
[10:56:20] [PASSED] clr-one-set-other
[10:56:20] [PASSED] set-field
[10:56:20] [PASSED] conflict-duplicate
[10:56:20] [PASSED] conflict-not-disjoint
[10:56:20] [PASSED] conflict-reg-type
[10:56:20] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[10:56:20] ================== xe_rtp_process_tests ===================
[10:56:20] [PASSED] active1
[10:56:20] [PASSED] active2
[10:56:20] [PASSED] active-inactive
[10:56:20] [PASSED] inactive-active
[10:56:20] [PASSED] inactive-1st_or_active-inactive
[10:56:20] [PASSED] inactive-2nd_or_active-inactive
[10:56:20] [PASSED] inactive-last_or_active-inactive
[10:56:20] [PASSED] inactive-no_or_active-inactive
[10:56:20] ============== [PASSED] xe_rtp_process_tests ===============
[10:56:20] ===================== [PASSED] xe_rtp ======================
[10:56:20] ==================== xe_wa (1 subtest) =====================
[10:56:20] ======================== xe_wa_gt =========================
[10:56:20] [PASSED] TIGERLAKE (B0)
[10:56:20] [PASSED] DG1 (A0)
[10:56:20] [PASSED] DG1 (B0)
[10:56:20] [PASSED] ALDERLAKE_S (A0)
[10:56:20] [PASSED] ALDERLAKE_S (B0)
[10:56:20] [PASSED] ALDERLAKE_S (C0)
[10:56:20] [PASSED] ALDERLAKE_S (D0)
[10:56:20] [PASSED] ALDERLAKE_P (A0)
[10:56:20] [PASSED] ALDERLAKE_P (B0)
[10:56:20] [PASSED] ALDERLAKE_P (C0)
[10:56:20] [PASSED] ALDERLAKE_S_RPLS (D0)
[10:56:20] [PASSED] ALDERLAKE_P_RPLU (E0)
[10:56:20] [PASSED] DG2_G10 (C0)
[10:56:20] [PASSED] DG2_G11 (B1)
[10:56:20] [PASSED] DG2_G12 (A1)
[10:56:20] [PASSED] METEORLAKE (g:A0, m:A0)
[10:56:20] [PASSED] METEORLAKE (g:A0, m:A0)
[10:56:20] [PASSED] METEORLAKE (g:A0, m:A0)
[10:56:20] [PASSED] LUNARLAKE (g:A0, m:A0)
[10:56:20] [PASSED] LUNARLAKE (g:B0, m:A0)
[10:56:20] [PASSED] BATTLEMAGE (g:A0, m:A1)
[10:56:20] ==================== [PASSED] xe_wa_gt =====================
[10:56:20] ====================== [PASSED] xe_wa ======================
[10:56:20] ============================================================
[10:56:20] Testing complete. Ran 133 tests: passed: 117, skipped: 16
[10:56:20] Elapsed time: 30.088s total, 4.164s configuring, 25.658s building, 0.243s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[10:56:20] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[10:56:22] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
156 | u64 ioread64_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
163 | u64 ioread64_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
170 | u64 ioread64be_lo_hi(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
178 | u64 ioread64be_hi_lo(const void __iomem *addr)
| ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
| ^~~~~~~~~~~~~~~~~
[10:56:43] Starting KUnit Kernel (1/1)...
[10:56:43] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[10:56:43] =========== drm_validate_clone_mode (2 subtests) ===========
[10:56:43] ============== drm_test_check_in_clone_mode ===============
[10:56:43] [PASSED] in_clone_mode
[10:56:43] [PASSED] not_in_clone_mode
[10:56:43] ========== [PASSED] drm_test_check_in_clone_mode ===========
[10:56:43] =============== drm_test_check_valid_clones ===============
[10:56:43] [PASSED] not_in_clone_mode
[10:56:43] [PASSED] valid_clone
[10:56:43] [PASSED] invalid_clone
[10:56:43] =========== [PASSED] drm_test_check_valid_clones ===========
[10:56:43] ============= [PASSED] drm_validate_clone_mode =============
[10:56:43] ============= drm_validate_modeset (1 subtest) =============
[10:56:43] [PASSED] drm_test_check_connector_changed_modeset
[10:56:43] ============== [PASSED] drm_validate_modeset ===============
[10:56:43] ================== drm_buddy (7 subtests) ==================
[10:56:43] [PASSED] drm_test_buddy_alloc_limit
[10:56:43] [PASSED] drm_test_buddy_alloc_optimistic
[10:56:43] [PASSED] drm_test_buddy_alloc_pessimistic
[10:56:43] [PASSED] drm_test_buddy_alloc_pathological
[10:56:43] [PASSED] drm_test_buddy_alloc_contiguous
[10:56:43] [PASSED] drm_test_buddy_alloc_clear
[10:56:43] [PASSED] drm_test_buddy_alloc_range_bias
[10:56:43] ==================== [PASSED] drm_buddy ====================
[10:56:43] ============= drm_cmdline_parser (40 subtests) =============
[10:56:43] [PASSED] drm_test_cmdline_force_d_only
[10:56:43] [PASSED] drm_test_cmdline_force_D_only_dvi
[10:56:43] [PASSED] drm_test_cmdline_force_D_only_hdmi
[10:56:43] [PASSED] drm_test_cmdline_force_D_only_not_digital
[10:56:43] [PASSED] drm_test_cmdline_force_e_only
[10:56:43] [PASSED] drm_test_cmdline_res
[10:56:43] [PASSED] drm_test_cmdline_res_vesa
[10:56:43] [PASSED] drm_test_cmdline_res_vesa_rblank
[10:56:43] [PASSED] drm_test_cmdline_res_rblank
[10:56:43] [PASSED] drm_test_cmdline_res_bpp
[10:56:43] [PASSED] drm_test_cmdline_res_refresh
[10:56:43] [PASSED] drm_test_cmdline_res_bpp_refresh
[10:56:43] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[10:56:43] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[10:56:43] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[10:56:43] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[10:56:43] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[10:56:43] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[10:56:43] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[10:56:43] [PASSED] drm_test_cmdline_res_margins_force_on
[10:56:43] [PASSED] drm_test_cmdline_res_vesa_margins
[10:56:43] [PASSED] drm_test_cmdline_name
[10:56:43] [PASSED] drm_test_cmdline_name_bpp
[10:56:43] [PASSED] drm_test_cmdline_name_option
[10:56:43] [PASSED] drm_test_cmdline_name_bpp_option
[10:56:43] [PASSED] drm_test_cmdline_rotate_0
[10:56:43] [PASSED] drm_test_cmdline_rotate_90
[10:56:43] [PASSED] drm_test_cmdline_rotate_180
[10:56:43] [PASSED] drm_test_cmdline_rotate_270
[10:56:43] [PASSED] drm_test_cmdline_hmirror
[10:56:43] [PASSED] drm_test_cmdline_vmirror
[10:56:43] [PASSED] drm_test_cmdline_margin_options
[10:56:43] [PASSED] drm_test_cmdline_multiple_options
[10:56:43] [PASSED] drm_test_cmdline_bpp_extra_and_option
[10:56:43] [PASSED] drm_test_cmdline_extra_and_option
[10:56:43] [PASSED] drm_test_cmdline_freestanding_options
[10:56:43] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[10:56:43] [PASSED] drm_test_cmdline_panel_orientation
[10:56:43] ================ drm_test_cmdline_invalid =================
[10:56:43] [PASSED] margin_only
[10:56:43] [PASSED] interlace_only
[10:56:43] [PASSED] res_missing_x
[10:56:43] [PASSED] res_missing_y
[10:56:43] [PASSED] res_bad_y
[10:56:43] [PASSED] res_missing_y_bpp
[10:56:43] [PASSED] res_bad_bpp
[10:56:43] [PASSED] res_bad_refresh
[10:56:43] [PASSED] res_bpp_refresh_force_on_off
[10:56:43] [PASSED] res_invalid_mode
[10:56:43] [PASSED] res_bpp_wrong_place_mode
[10:56:43] [PASSED] name_bpp_refresh
[10:56:43] [PASSED] name_refresh
[10:56:43] [PASSED] name_refresh_wrong_mode
[10:56:43] [PASSED] name_refresh_invalid_mode
[10:56:43] [PASSED] rotate_multiple
[10:56:43] [PASSED] rotate_invalid_val
[10:56:43] [PASSED] rotate_truncated
[10:56:43] [PASSED] invalid_option
[10:56:43] [PASSED] invalid_tv_option
[10:56:43] [PASSED] truncated_tv_option
[10:56:43] ============ [PASSED] drm_test_cmdline_invalid =============
[10:56:43] =============== drm_test_cmdline_tv_options ===============
[10:56:43] [PASSED] NTSC
[10:56:43] [PASSED] NTSC_443
[10:56:43] [PASSED] NTSC_J
[10:56:43] [PASSED] PAL
[10:56:43] [PASSED] PAL_M
[10:56:43] [PASSED] PAL_N
[10:56:43] [PASSED] SECAM
[10:56:43] [PASSED] MONO_525
[10:56:43] [PASSED] MONO_625
[10:56:43] =========== [PASSED] drm_test_cmdline_tv_options ===========
[10:56:43] =============== [PASSED] drm_cmdline_parser ================
[10:56:43] ========== drmm_connector_hdmi_init (20 subtests) ==========
[10:56:43] [PASSED] drm_test_connector_hdmi_init_valid
[10:56:43] [PASSED] drm_test_connector_hdmi_init_bpc_8
[10:56:43] [PASSED] drm_test_connector_hdmi_init_bpc_10
[10:56:43] [PASSED] drm_test_connector_hdmi_init_bpc_12
[10:56:43] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[10:56:43] [PASSED] drm_test_connector_hdmi_init_bpc_null
[10:56:43] [PASSED] drm_test_connector_hdmi_init_formats_empty
[10:56:43] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[10:56:43] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[10:56:43] [PASSED] supported_formats=0x9 yuv420_allowed=1
[10:56:43] [PASSED] supported_formats=0x9 yuv420_allowed=0
[10:56:43] [PASSED] supported_formats=0x3 yuv420_allowed=1
[10:56:43] [PASSED] supported_formats=0x3 yuv420_allowed=0
[10:56:43] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[10:56:43] [PASSED] drm_test_connector_hdmi_init_null_ddc
[10:56:43] [PASSED] drm_test_connector_hdmi_init_null_product
[10:56:43] [PASSED] drm_test_connector_hdmi_init_null_vendor
[10:56:43] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[10:56:43] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[10:56:43] [PASSED] drm_test_connector_hdmi_init_product_valid
[10:56:43] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[10:56:43] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[10:56:43] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[10:56:43] ========= drm_test_connector_hdmi_init_type_valid =========
[10:56:43] [PASSED] HDMI-A
[10:56:43] [PASSED] HDMI-B
[10:56:43] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[10:56:43] ======== drm_test_connector_hdmi_init_type_invalid ========
[10:56:43] [PASSED] Unknown
[10:56:43] [PASSED] VGA
[10:56:43] [PASSED] DVI-I
[10:56:43] [PASSED] DVI-D
[10:56:43] [PASSED] DVI-A
[10:56:43] [PASSED] Composite
[10:56:43] [PASSED] SVIDEO
[10:56:43] [PASSED] LVDS
[10:56:43] [PASSED] Component
[10:56:43] [PASSED] DIN
[10:56:43] [PASSED] DP
[10:56:43] [PASSED] TV
[10:56:43] [PASSED] eDP
[10:56:43] [PASSED] Virtual
[10:56:43] [PASSED] DSI
[10:56:43] [PASSED] DPI
[10:56:43] [PASSED] Writeback
[10:56:43] [PASSED] SPI
[10:56:43] [PASSED] USB
[10:56:43] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[10:56:43] ============ [PASSED] drmm_connector_hdmi_init =============
[10:56:43] ============= drmm_connector_init (3 subtests) =============
[10:56:43] [PASSED] drm_test_drmm_connector_init
[10:56:43] [PASSED] drm_test_drmm_connector_init_null_ddc
[10:56:43] ========= drm_test_drmm_connector_init_type_valid =========
[10:56:43] [PASSED] Unknown
[10:56:43] [PASSED] VGA
[10:56:43] [PASSED] DVI-I
[10:56:43] [PASSED] DVI-D
[10:56:43] [PASSED] DVI-A
[10:56:43] [PASSED] Composite
[10:56:43] [PASSED] SVIDEO
[10:56:43] [PASSED] LVDS
[10:56:43] [PASSED] Component
[10:56:43] [PASSED] DIN
[10:56:43] [PASSED] DP
[10:56:43] [PASSED] HDMI-A
[10:56:43] [PASSED] HDMI-B
[10:56:43] [PASSED] TV
[10:56:43] [PASSED] eDP
[10:56:43] [PASSED] Virtual
[10:56:43] [PASSED] DSI
[10:56:43] [PASSED] DPI
[10:56:43] [PASSED] Writeback
[10:56:43] [PASSED] SPI
[10:56:43] [PASSED] USB
[10:56:43] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[10:56:43] =============== [PASSED] drmm_connector_init ===============
[10:56:43] ========= drm_connector_dynamic_init (6 subtests) ==========
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_init
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_init_properties
[10:56:43] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[10:56:43] [PASSED] Unknown
[10:56:43] [PASSED] VGA
[10:56:43] [PASSED] DVI-I
[10:56:43] [PASSED] DVI-D
[10:56:43] [PASSED] DVI-A
[10:56:43] [PASSED] Composite
[10:56:43] [PASSED] SVIDEO
[10:56:43] [PASSED] LVDS
[10:56:43] [PASSED] Component
[10:56:43] [PASSED] DIN
[10:56:43] [PASSED] DP
[10:56:43] [PASSED] HDMI-A
[10:56:43] [PASSED] HDMI-B
[10:56:43] [PASSED] TV
[10:56:43] [PASSED] eDP
[10:56:43] [PASSED] Virtual
[10:56:43] [PASSED] DSI
[10:56:43] [PASSED] DPI
[10:56:43] [PASSED] Writeback
[10:56:43] [PASSED] SPI
[10:56:43] [PASSED] USB
[10:56:43] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[10:56:43] ======== drm_test_drm_connector_dynamic_init_name =========
[10:56:43] [PASSED] Unknown
[10:56:43] [PASSED] VGA
[10:56:43] [PASSED] DVI-I
[10:56:43] [PASSED] DVI-D
[10:56:43] [PASSED] DVI-A
[10:56:43] [PASSED] Composite
[10:56:43] [PASSED] SVIDEO
[10:56:43] [PASSED] LVDS
[10:56:43] [PASSED] Component
[10:56:43] [PASSED] DIN
[10:56:43] [PASSED] DP
[10:56:43] [PASSED] HDMI-A
[10:56:43] [PASSED] HDMI-B
[10:56:43] [PASSED] TV
[10:56:43] [PASSED] eDP
[10:56:43] [PASSED] Virtual
[10:56:43] [PASSED] DSI
[10:56:43] [PASSED] DPI
[10:56:43] [PASSED] Writeback
[10:56:43] [PASSED] SPI
[10:56:43] [PASSED] USB
[10:56:43] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[10:56:43] =========== [PASSED] drm_connector_dynamic_init ============
[10:56:43] ==== drm_connector_dynamic_register_early (4 subtests) =====
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[10:56:43] ====== [PASSED] drm_connector_dynamic_register_early =======
[10:56:43] ======= drm_connector_dynamic_register (7 subtests) ========
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[10:56:43] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[10:56:43] ========= [PASSED] drm_connector_dynamic_register ==========
[10:56:43] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[10:56:43] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[10:56:43] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[10:56:43] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[10:56:43] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[10:56:43] ========== drm_test_get_tv_mode_from_name_valid ===========
[10:56:43] [PASSED] NTSC
[10:56:43] [PASSED] NTSC-443
[10:56:43] [PASSED] NTSC-J
[10:56:43] [PASSED] PAL
[10:56:43] [PASSED] PAL-M
[10:56:43] [PASSED] PAL-N
[10:56:43] [PASSED] SECAM
[10:56:43] [PASSED] Mono
[10:56:43] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[10:56:43] [PASSED] drm_test_get_tv_mode_from_name_truncated
[10:56:43] ============ [PASSED] drm_get_tv_mode_from_name ============
[10:56:43] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[10:56:43] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[10:56:43] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[10:56:43] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[10:56:43] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[10:56:43] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[10:56:43] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[10:56:43] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[10:56:43] [PASSED] VIC 96
[10:56:43] [PASSED] VIC 97
[10:56:43] [PASSED] VIC 101
[10:56:43] [PASSED] VIC 102
[10:56:43] [PASSED] VIC 106
[10:56:43] [PASSED] VIC 107
[10:56:43] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[10:56:43] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[10:56:43] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[10:56:43] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[10:56:43] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[10:56:43] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[10:56:43] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[10:56:43] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[10:56:43] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[10:56:43] [PASSED] Automatic
[10:56:43] [PASSED] Full
[10:56:43] [PASSED] Limited 16:235
[10:56:43] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[10:56:43] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[10:56:43] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[10:56:43] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[10:56:43] === drm_test_drm_hdmi_connector_get_output_format_name ====
[10:56:43] [PASSED] RGB
[10:56:43] [PASSED] YUV 4:2:0
[10:56:43] [PASSED] YUV 4:2:2
[10:56:43] [PASSED] YUV 4:4:4
[10:56:43] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[10:56:43] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[10:56:43] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[10:56:43] ============= drm_damage_helper (21 subtests) ==============
[10:56:43] [PASSED] drm_test_damage_iter_no_damage
[10:56:43] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[10:56:43] [PASSED] drm_test_damage_iter_no_damage_src_moved
[10:56:43] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[10:56:43] [PASSED] drm_test_damage_iter_no_damage_not_visible
[10:56:43] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[10:56:43] [PASSED] drm_test_damage_iter_no_damage_no_fb
[10:56:43] [PASSED] drm_test_damage_iter_simple_damage
[10:56:43] [PASSED] drm_test_damage_iter_single_damage
[10:56:43] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[10:56:43] [PASSED] drm_test_damage_iter_single_damage_outside_src
[10:56:43] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[10:56:43] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[10:56:43] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[10:56:43] [PASSED] drm_test_damage_iter_single_damage_src_moved
[10:56:43] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[10:56:43] [PASSED] drm_test_damage_iter_damage
[10:56:43] [PASSED] drm_test_damage_iter_damage_one_intersect
[10:56:43] [PASSED] drm_test_damage_iter_damage_one_outside
[10:56:43] [PASSED] drm_test_damage_iter_damage_src_moved
[10:56:43] [PASSED] drm_test_damage_iter_damage_not_visible
[10:56:43] ================ [PASSED] drm_damage_helper ================
[10:56:43] ============== drm_dp_mst_helper (3 subtests) ==============
[10:56:43] ============== drm_test_dp_mst_calc_pbn_mode ==============
[10:56:43] [PASSED] Clock 154000 BPP 30 DSC disabled
[10:56:43] [PASSED] Clock 234000 BPP 30 DSC disabled
[10:56:43] [PASSED] Clock 297000 BPP 24 DSC disabled
[10:56:43] [PASSED] Clock 332880 BPP 24 DSC enabled
[10:56:43] [PASSED] Clock 324540 BPP 24 DSC enabled
[10:56:43] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[10:56:43] ============== drm_test_dp_mst_calc_pbn_div ===============
[10:56:43] [PASSED] Link rate 2000000 lane count 4
[10:56:43] [PASSED] Link rate 2000000 lane count 2
[10:56:43] [PASSED] Link rate 2000000 lane count 1
[10:56:43] [PASSED] Link rate 1350000 lane count 4
[10:56:43] [PASSED] Link rate 1350000 lane count 2
[10:56:43] [PASSED] Link rate 1350000 lane count 1
[10:56:43] [PASSED] Link rate 1000000 lane count 4
[10:56:43] [PASSED] Link rate 1000000 lane count 2
[10:56:43] [PASSED] Link rate 1000000 lane count 1
[10:56:43] [PASSED] Link rate 810000 lane count 4
[10:56:43] [PASSED] Link rate 810000 lane count 2
[10:56:43] [PASSED] Link rate 810000 lane count 1
[10:56:43] [PASSED] Link rate 540000 lane count 4
[10:56:43] [PASSED] Link rate 540000 lane count 2
[10:56:43] [PASSED] Link rate 540000 lane count 1
[10:56:43] [PASSED] Link rate 270000 lane count 4
[10:56:43] [PASSED] Link rate 270000 lane count 2
[10:56:43] [PASSED] Link rate 270000 lane count 1
[10:56:43] [PASSED] Link rate 162000 lane count 4
[10:56:43] [PASSED] Link rate 162000 lane count 2
[10:56:43] [PASSED] Link rate 162000 lane count 1
[10:56:43] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[10:56:43] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[10:56:43] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[10:56:43] [PASSED] DP_POWER_UP_PHY with port number
[10:56:43] [PASSED] DP_POWER_DOWN_PHY with port number
[10:56:43] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[10:56:43] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[10:56:43] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[10:56:43] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[10:56:43] [PASSED] DP_QUERY_PAYLOAD with port number
[10:56:43] [PASSED] DP_QUERY_PAYLOAD with VCPI
[10:56:43] [PASSED] DP_REMOTE_DPCD_READ with port number
[10:56:43] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[10:56:43] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[10:56:43] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[10:56:43] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[10:56:43] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[10:56:43] [PASSED] DP_REMOTE_I2C_READ with port number
[10:56:43] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[10:56:43] [PASSED] DP_REMOTE_I2C_READ with transactions array
[10:56:43] [PASSED] DP_REMOTE_I2C_WRITE with port number
[10:56:43] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[10:56:43] [PASSED] DP_REMOTE_I2C_WRITE with data array
[10:56:43] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[10:56:43] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[10:56:43] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[10:56:43] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[10:56:43] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[10:56:43] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[10:56:43] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[10:56:43] ================ [PASSED] drm_dp_mst_helper ================
[10:56:43] ================== drm_exec (7 subtests) ===================
[10:56:43] [PASSED] sanitycheck
[10:56:43] [PASSED] test_lock
[10:56:43] [PASSED] test_lock_unlock
[10:56:43] [PASSED] test_duplicates
[10:56:43] [PASSED] test_prepare
[10:56:43] [PASSED] test_prepare_array
[10:56:43] [PASSED] test_multiple_loops
[10:56:43] ==================== [PASSED] drm_exec =====================
[10:56:43] =========== drm_format_helper_test (17 subtests) ===========
[10:56:43] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[10:56:43] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[10:56:43] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[10:56:43] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[10:56:43] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[10:56:43] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[10:56:43] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[10:56:43] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[10:56:43] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[10:56:43] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[10:56:43] ============== drm_test_fb_xrgb8888_to_mono ===============
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[10:56:43] ==================== drm_test_fb_swab =====================
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ================ [PASSED] drm_test_fb_swab =================
[10:56:43] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[10:56:43] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[10:56:43] [PASSED] single_pixel_source_buffer
[10:56:43] [PASSED] single_pixel_clip_rectangle
[10:56:43] [PASSED] well_known_colors
[10:56:43] [PASSED] destination_pitch
[10:56:43] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[10:56:43] ================= drm_test_fb_clip_offset =================
[10:56:43] [PASSED] pass through
[10:56:43] [PASSED] horizontal offset
[10:56:43] [PASSED] vertical offset
[10:56:43] [PASSED] horizontal and vertical offset
[10:56:43] [PASSED] horizontal offset (custom pitch)
[10:56:43] [PASSED] vertical offset (custom pitch)
[10:56:43] [PASSED] horizontal and vertical offset (custom pitch)
[10:56:43] ============= [PASSED] drm_test_fb_clip_offset =============
[10:56:43] ============== drm_test_fb_build_fourcc_list ==============
[10:56:43] [PASSED] no native formats
[10:56:43] [PASSED] XRGB8888 as native format
[10:56:43] [PASSED] remove duplicates
[10:56:43] [PASSED] convert alpha formats
[10:56:43] [PASSED] random formats
[10:56:43] ========== [PASSED] drm_test_fb_build_fourcc_list ==========
[10:56:43] =================== drm_test_fb_memcpy ====================
[10:56:43] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[10:56:43] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[10:56:43] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[10:56:43] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[10:56:43] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[10:56:43] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[10:56:43] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[10:56:43] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[10:56:43] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[10:56:43] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[10:56:43] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[10:56:43] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[10:56:43] =============== [PASSED] drm_test_fb_memcpy ================
[10:56:43] ============= [PASSED] drm_format_helper_test ==============
[10:56:43] ================= drm_format (18 subtests) =================
[10:56:43] [PASSED] drm_test_format_block_width_invalid
[10:56:43] [PASSED] drm_test_format_block_width_one_plane
[10:56:43] [PASSED] drm_test_format_block_width_two_plane
[10:56:43] [PASSED] drm_test_format_block_width_three_plane
[10:56:43] [PASSED] drm_test_format_block_width_tiled
[10:56:43] [PASSED] drm_test_format_block_height_invalid
[10:56:43] [PASSED] drm_test_format_block_height_one_plane
[10:56:43] [PASSED] drm_test_format_block_height_two_plane
[10:56:43] [PASSED] drm_test_format_block_height_three_plane
[10:56:43] [PASSED] drm_test_format_block_height_tiled
[10:56:43] [PASSED] drm_test_format_min_pitch_invalid
[10:56:43] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[10:56:43] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[10:56:43] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[10:56:43] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[10:56:43] [PASSED] drm_test_format_min_pitch_two_plane
[10:56:43] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[10:56:43] [PASSED] drm_test_format_min_pitch_tiled
[10:56:43] =================== [PASSED] drm_format ====================
[10:56:43] ============== drm_framebuffer (10 subtests) ===============
[10:56:43] ========== drm_test_framebuffer_check_src_coords ==========
[10:56:43] [PASSED] Success: source fits into fb
[10:56:43] [PASSED] Fail: overflowing fb with x-axis coordinate
[10:56:43] [PASSED] Fail: overflowing fb with y-axis coordinate
[10:56:43] [PASSED] Fail: overflowing fb with source width
[10:56:43] [PASSED] Fail: overflowing fb with source height
[10:56:43] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[10:56:43] [PASSED] drm_test_framebuffer_cleanup
[10:56:43] =============== drm_test_framebuffer_create ===============
[10:56:43] [PASSED] ABGR8888 normal sizes
[10:56:43] [PASSED] ABGR8888 max sizes
[10:56:43] [PASSED] ABGR8888 pitch greater than min required
[10:56:43] [PASSED] ABGR8888 pitch less than min required
[10:56:43] [PASSED] ABGR8888 Invalid width
[10:56:43] [PASSED] ABGR8888 Invalid buffer handle
[10:56:43] [PASSED] No pixel format
[10:56:43] [PASSED] ABGR8888 Width 0
[10:56:43] [PASSED] ABGR8888 Height 0
[10:56:43] [PASSED] ABGR8888 Out of bound height * pitch combination
[10:56:43] [PASSED] ABGR8888 Large buffer offset
[10:56:43] [PASSED] ABGR8888 Buffer offset for inexistent plane
[10:56:43] [PASSED] ABGR8888 Invalid flag
[10:56:43] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[10:56:43] [PASSED] ABGR8888 Valid buffer modifier
[10:56:43] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[10:56:43] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[10:56:43] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[10:56:43] [PASSED] NV12 Normal sizes
[10:56:43] [PASSED] NV12 Max sizes
[10:56:43] [PASSED] NV12 Invalid pitch
[10:56:43] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[10:56:43] [PASSED] NV12 different modifier per-plane
[10:56:43] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[10:56:43] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[10:56:43] [PASSED] NV12 Modifier for inexistent plane
[10:56:43] [PASSED] NV12 Handle for inexistent plane
[10:56:43] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[10:56:43] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[10:56:43] [PASSED] YVU420 Normal sizes
[10:56:43] [PASSED] YVU420 Max sizes
[10:56:43] [PASSED] YVU420 Invalid pitch
[10:56:43] [PASSED] YVU420 Different pitches
[10:56:43] [PASSED] YVU420 Different buffer offsets/pitches
[10:56:43] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[10:56:43] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[10:56:43] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[10:56:43] [PASSED] YVU420 Valid modifier
[10:56:43] [PASSED] YVU420 Different modifiers per plane
[10:56:43] [PASSED] YVU420 Modifier for inexistent plane
[10:56:43] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[10:56:43] [PASSED] X0L2 Normal sizes
[10:56:43] [PASSED] X0L2 Max sizes
[10:56:43] [PASSED] X0L2 Invalid pitch
[10:56:43] [PASSED] X0L2 Pitch greater than minimum required
[10:56:43] [PASSED] X0L2 Handle for inexistent plane
[10:56:43] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[10:56:43] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[10:56:43] [PASSED] X0L2 Valid modifier
[10:56:43] [PASSED] X0L2 Modifier for inexistent plane
[10:56:43] =========== [PASSED] drm_test_framebuffer_create ===========
[10:56:43] [PASSED] drm_test_framebuffer_free
[10:56:43] [PASSED] drm_test_framebuffer_init
[10:56:43] [PASSED] drm_test_framebuffer_init_bad_format
[10:56:43] [PASSED] drm_test_framebuffer_init_dev_mismatch
[10:56:43] [PASSED] drm_test_framebuffer_lookup
[10:56:43] [PASSED] drm_test_framebuffer_lookup_inexistent
[10:56:43] [PASSED] drm_test_framebuffer_modifiers_not_supported
[10:56:43] ================= [PASSED] drm_framebuffer =================
[10:56:43] ================ drm_gem_shmem (8 subtests) ================
[10:56:43] [PASSED] drm_gem_shmem_test_obj_create
[10:56:43] [PASSED] drm_gem_shmem_test_obj_create_private
[10:56:43] [PASSED] drm_gem_shmem_test_pin_pages
[10:56:43] [PASSED] drm_gem_shmem_test_vmap
[10:56:43] [PASSED] drm_gem_shmem_test_get_pages_sgt
[10:56:43] [PASSED] drm_gem_shmem_test_get_sg_table
[10:56:43] [PASSED] drm_gem_shmem_test_madvise
[10:56:43] [PASSED] drm_gem_shmem_test_purge
[10:56:43] ================== [PASSED] drm_gem_shmem ==================
[10:56:43] === drm_atomic_helper_connector_hdmi_check (23 subtests) ===
[10:56:43] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[10:56:43] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[10:56:43] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[10:56:43] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[10:56:43] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[10:56:43] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[10:56:43] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[10:56:43] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[10:56:43] [PASSED] drm_test_check_disable_connector
[10:56:43] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[10:56:43] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback
[10:56:43] [PASSED] drm_test_check_max_tmds_rate_format_fallback
[10:56:43] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[10:56:43] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[10:56:43] [PASSED] drm_test_check_output_bpc_dvi
[10:56:43] [PASSED] drm_test_check_output_bpc_format_vic_1
[10:56:43] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[10:56:43] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[10:56:43] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[10:56:43] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[10:56:43] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[10:56:43] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[10:56:43] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[10:56:43] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[10:56:43] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[10:56:43] [PASSED] drm_test_check_broadcast_rgb_value
[10:56:43] [PASSED] drm_test_check_bpc_8_value
[10:56:43] [PASSED] drm_test_check_bpc_10_value
[10:56:43] [PASSED] drm_test_check_bpc_12_value
[10:56:43] [PASSED] drm_test_check_format_value
[10:56:43] [PASSED] drm_test_check_tmds_char_value
[10:56:43] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[10:56:43] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[10:56:43] [PASSED] drm_test_check_mode_valid
[10:56:43] [PASSED] drm_test_check_mode_valid_reject
[10:56:43] [PASSED] drm_test_check_mode_valid_reject_rate
[10:56:43] [PASSED] drm_test_check_mode_valid_reject_max_clock
[10:56:43] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[10:56:43] ================= drm_managed (2 subtests) =================
[10:56:43] [PASSED] drm_test_managed_release_action
[10:56:43] [PASSED] drm_test_managed_run_action
[10:56:43] =================== [PASSED] drm_managed ===================
[10:56:43] =================== drm_mm (6 subtests) ====================
[10:56:43] [PASSED] drm_test_mm_init
[10:56:43] [PASSED] drm_test_mm_debug
[10:56:43] [PASSED] drm_test_mm_align32
[10:56:43] [PASSED] drm_test_mm_align64
[10:56:43] [PASSED] drm_test_mm_lowest
[10:56:43] [PASSED] drm_test_mm_highest
[10:56:43] ===================== [PASSED] drm_mm ======================
[10:56:43] ============= drm_modes_analog_tv (5 subtests) =============
[10:56:43] [PASSED] drm_test_modes_analog_tv_mono_576i
[10:56:43] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[10:56:43] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[10:56:43] [PASSED] drm_test_modes_analog_tv_pal_576i
[10:56:43] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[10:56:43] =============== [PASSED] drm_modes_analog_tv ===============
[10:56:43] ============== drm_plane_helper (2 subtests) ===============
[10:56:43] =============== drm_test_check_plane_state ================
[10:56:43] [PASSED] clipping_simple
[10:56:43] [PASSED] clipping_rotate_reflect
[10:56:43] [PASSED] positioning_simple
[10:56:43] [PASSED] upscaling
[10:56:43] [PASSED] downscaling
[10:56:43] [PASSED] rounding1
[10:56:43] [PASSED] rounding2
[10:56:43] [PASSED] rounding3
[10:56:43] [PASSED] rounding4
[10:56:43] =========== [PASSED] drm_test_check_plane_state ============
[10:56:43] =========== drm_test_check_invalid_plane_state ============
[10:56:43] [PASSED] positioning_invalid
[10:56:43] [PASSED] upscaling_invalid
[10:56:43] [PASSED] downscaling_invalid
[10:56:43] ======= [PASSED] drm_test_check_invalid_plane_state ========
[10:56:43] ================ [PASSED] drm_plane_helper =================
[10:56:43] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[10:56:43] ====== drm_test_connector_helper_tv_get_modes_check =======
[10:56:43] [PASSED] None
[10:56:43] [PASSED] PAL
[10:56:43] [PASSED] NTSC
[10:56:43] [PASSED] Both, NTSC Default
[10:56:43] [PASSED] Both, PAL Default
[10:56:43] [PASSED] Both, NTSC Default, with PAL on command-line
[10:56:43] [PASSED] Both, PAL Default, with NTSC on command-line
[10:56:43] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[10:56:43] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[10:56:43] ================== drm_rect (9 subtests) ===================
[10:56:43] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[10:56:43] [PASSED] drm_test_rect_clip_scaled_not_clipped
[10:56:43] [PASSED] drm_test_rect_clip_scaled_clipped
[10:56:43] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[10:56:43] ================= drm_test_rect_intersect =================
[10:56:43] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[10:56:43] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[10:56:43] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[10:56:43] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[10:56:43] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[10:56:43] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[10:56:43] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[10:56:43] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[10:56:43] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[10:56:43] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[10:56:43] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[10:56:43] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[10:56:43] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[10:56:43] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[10:56:43] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[10:56:43] ============= [PASSED] drm_test_rect_intersect =============
[10:56:43] ================ drm_test_rect_calc_hscale ================
[10:56:43] [PASSED] normal use
[10:56:43] [PASSED] out of max range
[10:56:43] [PASSED] out of min range
[10:56:43] [PASSED] zero dst
[10:56:43] [PASSED] negative src
[10:56:43] [PASSED] negative dst
[10:56:43] ============ [PASSED] drm_test_rect_calc_hscale ============
[10:56:43] ================ drm_test_rect_calc_vscale ================
[10:56:43] [PASSED] normal use
[10:56:43] [PASSED] out of max range
[10:56:43] [PASSED] out of min range
[10:56:43] [PASSED] zero dst
[10:56:43] [PASSED] negative src
[10:56:43] [PASSED] negative dst
[10:56:43] ============ [PASSED] drm_test_rect_calc_vscale ============
[10:56:43] ================== drm_test_rect_rotate ===================
[10:56:43] [PASSED] reflect-x
[10:56:43] [PASSED] reflect-y
[10:56:43] [PASSED] rotate-0
[10:56:43] [PASSED] rotate-90
[10:56:43] [PASSED] rotate-180
[10:56:43] [PASSED] rotate-270
[10:56:43] ============== [PASSED] drm_test_rect_rotate ===============
[10:56:43] ================ drm_test_rect_rotate_inv =================
[10:56:43] [PASSED] reflect-x
[10:56:43] [PASSED] reflect-y
[10:56:43] [PASSED] rotate-0
[10:56:43] [PASSED] rotate-90
[10:56:43] [PASSED] rotate-180
[10:56:43] [PASSED] rotate-270
[10:56:43] ============ [PASSED] drm_test_rect_rotate_inv =============
[10:56:43] ==================== [PASSED] drm_rect =====================
[10:56:43] ============================================================
[10:56:43] Testing complete. Ran 598 tests: passed: 598
[10:56:43] Elapsed time: 22.912s total, 1.695s configuring, 21.047s building, 0.143s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[10:56:43] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[10:56:45] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json ARCH=um O=.kunit --jobs=48
[10:56:53] Starting KUnit Kernel (1/1)...
[10:56:53] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[10:56:53] ================= ttm_device (5 subtests) ==================
[10:56:53] [PASSED] ttm_device_init_basic
[10:56:53] [PASSED] ttm_device_init_multiple
[10:56:53] [PASSED] ttm_device_fini_basic
[10:56:53] [PASSED] ttm_device_init_no_vma_man
[10:56:53] ================== ttm_device_init_pools ==================
[10:56:53] [PASSED] No DMA allocations, no DMA32 required
[10:56:53] [PASSED] DMA allocations, DMA32 required
[10:56:53] [PASSED] No DMA allocations, DMA32 required
[10:56:53] [PASSED] DMA allocations, no DMA32 required
[10:56:53] ============== [PASSED] ttm_device_init_pools ==============
[10:56:53] =================== [PASSED] ttm_device ====================
[10:56:53] ================== ttm_pool (8 subtests) ===================
[10:56:53] ================== ttm_pool_alloc_basic ===================
[10:56:53] [PASSED] One page
[10:56:53] [PASSED] More than one page
[10:56:53] [PASSED] Above the allocation limit
[10:56:53] [PASSED] One page, with coherent DMA mappings enabled
[10:56:53] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[10:56:53] ============== [PASSED] ttm_pool_alloc_basic ===============
[10:56:53] ============== ttm_pool_alloc_basic_dma_addr ==============
[10:56:53] [PASSED] One page
[10:56:53] [PASSED] More than one page
[10:56:53] [PASSED] Above the allocation limit
[10:56:53] [PASSED] One page, with coherent DMA mappings enabled
[10:56:53] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[10:56:53] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[10:56:53] [PASSED] ttm_pool_alloc_order_caching_match
[10:56:53] [PASSED] ttm_pool_alloc_caching_mismatch
[10:56:53] [PASSED] ttm_pool_alloc_order_mismatch
[10:56:53] [PASSED] ttm_pool_free_dma_alloc
[10:56:53] [PASSED] ttm_pool_free_no_dma_alloc
[10:56:53] [PASSED] ttm_pool_fini_basic
[10:56:53] ==================== [PASSED] ttm_pool =====================
[10:56:53] ================ ttm_resource (8 subtests) =================
[10:56:53] ================= ttm_resource_init_basic =================
[10:56:53] [PASSED] Init resource in TTM_PL_SYSTEM
[10:56:53] [PASSED] Init resource in TTM_PL_VRAM
[10:56:53] [PASSED] Init resource in a private placement
[10:56:53] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[10:56:53] ============= [PASSED] ttm_resource_init_basic =============
[10:56:53] [PASSED] ttm_resource_init_pinned
[10:56:53] [PASSED] ttm_resource_fini_basic
[10:56:53] [PASSED] ttm_resource_manager_init_basic
[10:56:53] [PASSED] ttm_resource_manager_usage_basic
[10:56:53] [PASSED] ttm_resource_manager_set_used_basic
[10:56:53] [PASSED] ttm_sys_man_alloc_basic
[10:56:53] [PASSED] ttm_sys_man_free_basic
[10:56:53] ================== [PASSED] ttm_resource ===================
[10:56:53] =================== ttm_tt (15 subtests) ===================
[10:56:53] ==================== ttm_tt_init_basic ====================
[10:56:53] [PASSED] Page-aligned size
[10:56:53] [PASSED] Extra pages requested
[10:56:53] ================ [PASSED] ttm_tt_init_basic ================
[10:56:53] [PASSED] ttm_tt_init_misaligned
[10:56:53] [PASSED] ttm_tt_fini_basic
[10:56:53] [PASSED] ttm_tt_fini_sg
[10:56:53] [PASSED] ttm_tt_fini_shmem
[10:56:53] [PASSED] ttm_tt_create_basic
[10:56:53] [PASSED] ttm_tt_create_invalid_bo_type
[10:56:53] [PASSED] ttm_tt_create_ttm_exists
[10:56:53] [PASSED] ttm_tt_create_failed
[10:56:53] [PASSED] ttm_tt_destroy_basic
[10:56:53] [PASSED] ttm_tt_populate_null_ttm
[10:56:53] [PASSED] ttm_tt_populate_populated_ttm
[10:56:53] [PASSED] ttm_tt_unpopulate_basic
[10:56:53] [PASSED] ttm_tt_unpopulate_empty_ttm
[10:56:53] [PASSED] ttm_tt_swapin_basic
[10:56:53] ===================== [PASSED] ttm_tt ======================
[10:56:53] =================== ttm_bo (14 subtests) ===================
[10:56:53] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[10:56:53] [PASSED] Cannot be interrupted and sleeps
[10:56:53] [PASSED] Cannot be interrupted, locks straight away
[10:56:53] [PASSED] Can be interrupted, sleeps
[10:56:53] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[10:56:53] [PASSED] ttm_bo_reserve_locked_no_sleep
[10:56:53] [PASSED] ttm_bo_reserve_no_wait_ticket
[10:56:53] [PASSED] ttm_bo_reserve_double_resv
[10:56:53] [PASSED] ttm_bo_reserve_interrupted
[10:56:53] [PASSED] ttm_bo_reserve_deadlock
[10:56:53] [PASSED] ttm_bo_unreserve_basic
[10:56:53] [PASSED] ttm_bo_unreserve_pinned
[10:56:53] [PASSED] ttm_bo_unreserve_bulk
[10:56:53] [PASSED] ttm_bo_put_basic
[10:56:53] [PASSED] ttm_bo_put_shared_resv
[10:56:53] [PASSED] ttm_bo_pin_basic
[10:56:53] [PASSED] ttm_bo_pin_unpin_resource
[10:56:53] [PASSED] ttm_bo_multiple_pin_one_unpin
[10:56:53] ===================== [PASSED] ttm_bo ======================
[10:56:53] ============== ttm_bo_validate (22 subtests) ===============
[10:56:53] ============== ttm_bo_init_reserved_sys_man ===============
[10:56:53] [PASSED] Buffer object for userspace
[10:56:53] [PASSED] Kernel buffer object
[10:56:53] [PASSED] Shared buffer object
[10:56:53] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[10:56:53] ============== ttm_bo_init_reserved_mock_man ==============
[10:56:53] [PASSED] Buffer object for userspace
[10:56:53] [PASSED] Kernel buffer object
[10:56:53] [PASSED] Shared buffer object
[10:56:53] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[10:56:53] [PASSED] ttm_bo_init_reserved_resv
[10:56:53] ================== ttm_bo_validate_basic ==================
[10:56:53] [PASSED] Buffer object for userspace
[10:56:53] [PASSED] Kernel buffer object
[10:56:53] [PASSED] Shared buffer object
[10:56:53] ============== [PASSED] ttm_bo_validate_basic ==============
[10:56:53] [PASSED] ttm_bo_validate_invalid_placement
[10:56:53] ============= ttm_bo_validate_same_placement ==============
[10:56:53] [PASSED] System manager
[10:56:53] [PASSED] VRAM manager
[10:56:53] ========= [PASSED] ttm_bo_validate_same_placement ==========
[10:56:53] [PASSED] ttm_bo_validate_failed_alloc
[10:56:53] [PASSED] ttm_bo_validate_pinned
[10:56:53] [PASSED] ttm_bo_validate_busy_placement
[10:56:53] ================ ttm_bo_validate_multihop =================
[10:56:53] [PASSED] Buffer object for userspace
[10:56:53] [PASSED] Kernel buffer object
[10:56:53] [PASSED] Shared buffer object
[10:56:53] ============ [PASSED] ttm_bo_validate_multihop =============
[10:56:53] ========== ttm_bo_validate_no_placement_signaled ==========
[10:56:53] [PASSED] Buffer object in system domain, no page vector
[10:56:53] [PASSED] Buffer object in system domain with an existing page vector
[10:56:53] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[10:56:53] ======== ttm_bo_validate_no_placement_not_signaled ========
[10:56:53] [PASSED] Buffer object for userspace
[10:56:53] [PASSED] Kernel buffer object
[10:56:53] [PASSED] Shared buffer object
[10:56:53] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[10:56:53] [PASSED] ttm_bo_validate_move_fence_signaled
[10:56:53] ========= ttm_bo_validate_move_fence_not_signaled =========
[10:56:53] [PASSED] Waits for GPU
[10:56:53] [PASSED] Tries to lock straight away
[10:56:53] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[10:56:53] [PASSED] ttm_bo_validate_swapout
[10:56:53] [PASSED] ttm_bo_validate_happy_evict
[10:56:53] [PASSED] ttm_bo_validate_all_pinned_evict
[10:56:53] [PASSED] ttm_bo_validate_allowed_only_evict
[10:56:53] [PASSED] ttm_bo_validate_deleted_evict
[10:56:53] [PASSED] ttm_bo_validate_busy_domain_evict
[10:56:53] [PASSED] ttm_bo_validate_evict_gutting
[10:56:53] [PASSED] ttm_bo_validate_recrusive_evict
[10:56:53] ================= [PASSED] ttm_bo_validate =================
[10:56:53] ============================================================
[10:56:53] Testing complete. Ran 102 tests: passed: 102
[10:56:53] Elapsed time: 9.808s total, 1.649s configuring, 7.491s building, 0.577s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-23 9:35 ` Philipp Stanner
2025-01-23 9:55 ` Danilo Krummrich
@ 2025-01-23 10:57 ` Tvrtko Ursulin
1 sibling, 0 replies; 35+ messages in thread
From: Tvrtko Ursulin @ 2025-01-23 10:57 UTC (permalink / raw)
To: phasta, Danilo Krummrich
Cc: Boris Brezillon, Alex Deucher, Christian König, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Rob Herring, Steven Price, Liviu Dudau,
Luben Tuikov, Matthew Brost, Melissa Wen, Maíra Canal,
Lucas De Marchi, Thomas Hellström, Rodrigo Vivi,
Sunil Khatri, Lijo Lazar, Mario Limonciello, Ma Jun, Yunxiang Li,
amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
On 23/01/2025 09:35, Philipp Stanner wrote:
> On Thu, 2025-01-23 at 10:29 +0100, Danilo Krummrich wrote:
>> On Thu, Jan 23, 2025 at 08:33:01AM +0100, Philipp Stanner wrote:
>>> On Wed, 2025-01-22 at 18:16 +0100, Boris Brezillon wrote:
>>>> On Wed, 22 Jan 2025 15:08:20 +0100
>>>> Philipp Stanner <phasta@kernel.org> wrote:
>>>>
>>>>> int drm_sched_init(struct drm_gpu_scheduler *sched,
>>>>> - const struct drm_sched_backend_ops *ops,
>>>>> - struct workqueue_struct *submit_wq,
>>>>> - u32 num_rqs, u32 credit_limit, unsigned int hang_limit,
>>>>> - long timeout, struct workqueue_struct *timeout_wq,
>>>>> - atomic_t *score, const char *name, struct device *dev);
>>>>> + const struct drm_sched_init_params *params);
>>>>
>>>>
>>>> Another nit: indenting is messed up here.
>>>
>>> That was done on purpose.
>>
>> Let's not change this convention, it's used all over the kernel tree,
>> including
>> the GPU scheduler. People are used to read code that is formatted
>> this way, plus
>> the attempt of changing it will make code formatting inconsistent.
>
> Both the tree and this file are already inconsistent in regards to
> this.
>
> Anyways, what is your proposed solution to ridiculous nonsense like
> this?
>
> https://elixir.bootlin.com/linux/v6.13-rc3/source/drivers/gpu/drm/scheduler/sched_main.c#L1296
Apologies for butting in. Sometimes breaking 80 cols is unavoidable, or
perhaps something like the below would be a bit easier on the eyes?
Although it still breaks 80 columns, just a bit less.
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 06b06987129d..3f7e97b240d1 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1287,22 +1287,18 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
 		return 0;
 	}
 
-	if (submit_wq) {
-		sched->submit_wq = submit_wq;
-		sched->own_submit_wq = false;
-	} else {
-#ifdef CONFIG_LOCKDEP
-		sched->submit_wq = alloc_ordered_workqueue_lockdep_map(name,
-								       WQ_MEM_RECLAIM,
-								       &drm_sched_lockdep_map);
-#else
-		sched->submit_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
-#endif
-		if (!sched->submit_wq)
-			return -ENOMEM;
+	own_wq = !submit_wq;
+	if (!submit_wq && IS_ENABLED(CONFIG_LOCKDEP))
+		submit_wq = alloc_ordered_workqueue_lockdep_map(name,
+								WQ_MEM_RECLAIM,
+								&drm_sched_lockdep_map);
+	else if (!submit_wq)
+		submit_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
+	if (!submit_wq)
+		return -ENOMEM;
 
-		sched->own_submit_wq = true;
-	}
+	sched->submit_wq = submit_wq;
+	sched->own_submit_wq = own_wq;
 
 	sched->sched_rq = kmalloc_array(num_rqs, sizeof(*sched->sched_rq),
 					GFP_KERNEL | __GFP_ZERO);
Could bring it under 80 by renaming drm_sched_lockdep_map to something
shorter. Which should be fine since it is local to the file.
Regards,
Tvrtko
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-23 8:10 ` Philipp Stanner
2025-01-23 8:39 ` Philipp Stanner
@ 2025-01-23 11:10 ` Maíra Canal
2025-01-23 12:13 ` Philipp Stanner
1 sibling, 1 reply; 35+ messages in thread
From: Maíra Canal @ 2025-01-23 11:10 UTC (permalink / raw)
To: Philipp Stanner, Philipp Stanner, Alex Deucher,
Christian König, Xinhui Pan, David Airlie, Simona Vetter,
Lucas Stach, Russell King, Christian Gmeiner, Frank Binns,
Matt Coster, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, Karol Herbst, Lyude Paul,
Danilo Krummrich, Boris Brezillon, Rob Herring, Steven Price,
Liviu Dudau, Luben Tuikov, Matthew Brost, Melissa Wen,
Lucas De Marchi, Thomas Hellström, Rodrigo Vivi,
Sunil Khatri, Lijo Lazar, Mario Limonciello, Ma Jun, Yunxiang Li
Cc: amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
Hi Philipp,
On 23/01/25 05:10, Philipp Stanner wrote:
> On Wed, 2025-01-22 at 19:07 -0300, Maíra Canal wrote:
>> Hi Philipp,
>>
>> On 22/01/25 11:08, Philipp Stanner wrote:
>>> drm_sched_init() has a great many parameters and upcoming new
>>> functionality for the scheduler might add even more. Generally, the
>>> great number of parameters reduces readability and has already
>>> caused
>>> one misnaming in:
>>>
>>> commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in
>>> nouveau_sched_init()").
>>>
>>> Introduce a new struct for the scheduler init parameters and port
>>> all
>>> users.
>>>
>>> Signed-off-by: Philipp Stanner <phasta@kernel.org>
[...]
>>
>>> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
>>> index 99ac4995b5a1..716e6d074d87 100644
>>> --- a/drivers/gpu/drm/v3d/v3d_sched.c
>>> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
>>> @@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops v3d_cpu_sched_ops = {
>>>  	.free_job = v3d_cpu_job_free
>>>  };
>>>  
>>> +/*
>>> + * v3d's scheduler instances are all identical, except for ops and name.
>>> + */
>>> +static void
>>> +v3d_common_sched_init(struct drm_sched_init_params *params, struct device *dev)
>>> +{
>>> +	memset(params, 0, sizeof(struct drm_sched_init_params));
>>> +
>>> +	params->submit_wq = NULL; /* Use the system_wq. */
>>> +	params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>> +	params->credit_limit = 1;
>>> +	params->hang_limit = 0;
>>> +	params->timeout = msecs_to_jiffies(500);
>>> +	params->timeout_wq = NULL; /* Use the system_wq. */
>>> +	params->score = NULL;
>>> +	params->dev = dev;
>>> +}
>>
>> Could we use only one function that takes struct v3d_dev *v3d, enum
>> v3d_queue, and sched_ops as arguments (instead of one function per
>> queue)? You can get the name of the scheduler by concatenating "v3d_"
>> to
>> the return of v3d_queue_to_string().
>>
>> I believe it would make the code much simpler.
>
> Hello,
>
> so just to get that right:
> You'd like to have one universal function that switch-cases over an
> enum, sets the ops and creates the name with string concatenation?
>
> I'm not convinced that this is simpler than a few small functions, but
> it's not my component, so…
>
> Whatever we'll do will be simpler than the existing code, though. Right
> now no reader can see at first glance whether all those schedulers are
> identically parametrized or not.
>
This is my proposal (just a quick draft, please check if it compiles):
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index 961465128d80..7cc45a0c6ca0 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -820,67 +820,62 @@ static const struct drm_sched_backend_ops v3d_cpu_sched_ops = {
 	.free_job = v3d_cpu_job_free
 };
 
+static int
+v3d_sched_queue_init(struct v3d_dev *v3d, enum v3d_queue queue,
+		     const struct drm_sched_backend_ops *ops, const char *name)
+{
+	struct drm_sched_init_params params = {
+		.submit_wq = NULL,
+		.num_rqs = DRM_SCHED_PRIORITY_COUNT,
+		.credit_limit = 1,
+		.hang_limit = 0,
+		.timeout = msecs_to_jiffies(500),
+		.timeout_wq = NULL,
+		.score = NULL,
+		.dev = v3d->drm.dev,
+	};
+
+	params.ops = ops;
+	params.name = name;
+
+	return drm_sched_init(&v3d->queue[queue].sched, &params);
+}
+
 int
 v3d_sched_init(struct v3d_dev *v3d)
 {
-	int hw_jobs_limit = 1;
-	int job_hang_limit = 0;
-	int hang_limit_ms = 500;
 	int ret;
 
-	ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
-			     &v3d_bin_sched_ops, NULL,
-			     DRM_SCHED_PRIORITY_COUNT,
-			     hw_jobs_limit, job_hang_limit,
-			     msecs_to_jiffies(hang_limit_ms), NULL,
-			     NULL, "v3d_bin", v3d->drm.dev);
+	ret = v3d_sched_queue_init(v3d, V3D_BIN, &v3d_bin_sched_ops,
+				   "v3d_bin");
 	if (ret)
 		return ret;
 
-	ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
-			     &v3d_render_sched_ops, NULL,
-			     DRM_SCHED_PRIORITY_COUNT,
-			     hw_jobs_limit, job_hang_limit,
-			     msecs_to_jiffies(hang_limit_ms), NULL,
-			     NULL, "v3d_render", v3d->drm.dev);
+	ret = v3d_sched_queue_init(v3d, V3D_RENDER, &v3d_render_sched_ops,
+				   "v3d_render");
 	if (ret)
 		goto fail;
[...]
At least for me, this looks much simpler than one function for each
V3D queue.
Best Regards,
- Maíra
> P.
>
>
>>
>> Best Regards,
>> - Maíra
>>
* ✓ CI.Build: success for drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
` (7 preceding siblings ...)
2025-01-23 10:56 ` ✓ CI.KUnit: success " Patchwork
@ 2025-01-23 11:13 ` Patchwork
2025-01-23 11:15 ` ✓ CI.Hooks: " Patchwork
` (3 subsequent siblings)
12 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2025-01-23 11:13 UTC (permalink / raw)
To: Philipp Stanner; +Cc: intel-xe
== Series Details ==
Series: drm/sched: Use struct for drm_sched_init() params
URL : https://patchwork.freedesktop.org/series/143883/
State : success
== Summary ==
lib/modules/6.13.0-xe+/kernel/arch/x86/events/rapl.ko
lib/modules/6.13.0-xe+/kernel/arch/x86/kvm/
lib/modules/6.13.0-xe+/kernel/arch/x86/kvm/kvm.ko
lib/modules/6.13.0-xe+/kernel/arch/x86/kvm/kvm-intel.ko
lib/modules/6.13.0-xe+/kernel/arch/x86/kvm/kvm-amd.ko
lib/modules/6.13.0-xe+/kernel/kernel/
lib/modules/6.13.0-xe+/kernel/kernel/kheaders.ko
lib/modules/6.13.0-xe+/kernel/crypto/
lib/modules/6.13.0-xe+/kernel/crypto/ecrdsa_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/xcbc.ko
lib/modules/6.13.0-xe+/kernel/crypto/serpent_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/aria_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/crypto_simd.ko
lib/modules/6.13.0-xe+/kernel/crypto/adiantum.ko
lib/modules/6.13.0-xe+/kernel/crypto/tcrypt.ko
lib/modules/6.13.0-xe+/kernel/crypto/crypto_engine.ko
lib/modules/6.13.0-xe+/kernel/crypto/zstd.ko
lib/modules/6.13.0-xe+/kernel/crypto/asymmetric_keys/
lib/modules/6.13.0-xe+/kernel/crypto/asymmetric_keys/pkcs7_test_key.ko
lib/modules/6.13.0-xe+/kernel/crypto/asymmetric_keys/pkcs8_key_parser.ko
lib/modules/6.13.0-xe+/kernel/crypto/des_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/xctr.ko
lib/modules/6.13.0-xe+/kernel/crypto/authenc.ko
lib/modules/6.13.0-xe+/kernel/crypto/sm4_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/keywrap.ko
lib/modules/6.13.0-xe+/kernel/crypto/camellia_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/sm3.ko
lib/modules/6.13.0-xe+/kernel/crypto/pcrypt.ko
lib/modules/6.13.0-xe+/kernel/crypto/aegis128.ko
lib/modules/6.13.0-xe+/kernel/crypto/af_alg.ko
lib/modules/6.13.0-xe+/kernel/crypto/algif_aead.ko
lib/modules/6.13.0-xe+/kernel/crypto/cmac.ko
lib/modules/6.13.0-xe+/kernel/crypto/sm3_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/aes_ti.ko
lib/modules/6.13.0-xe+/kernel/crypto/chacha_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/poly1305_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/nhpoly1305.ko
lib/modules/6.13.0-xe+/kernel/crypto/crc32_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/essiv.ko
lib/modules/6.13.0-xe+/kernel/crypto/ccm.ko
lib/modules/6.13.0-xe+/kernel/crypto/wp512.ko
lib/modules/6.13.0-xe+/kernel/crypto/streebog_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/authencesn.ko
lib/modules/6.13.0-xe+/kernel/crypto/echainiv.ko
lib/modules/6.13.0-xe+/kernel/crypto/lrw.ko
lib/modules/6.13.0-xe+/kernel/crypto/cryptd.ko
lib/modules/6.13.0-xe+/kernel/crypto/crypto_user.ko
lib/modules/6.13.0-xe+/kernel/crypto/algif_hash.ko
lib/modules/6.13.0-xe+/kernel/crypto/vmac.ko
lib/modules/6.13.0-xe+/kernel/crypto/polyval-generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/hctr2.ko
lib/modules/6.13.0-xe+/kernel/crypto/842.ko
lib/modules/6.13.0-xe+/kernel/crypto/pcbc.ko
lib/modules/6.13.0-xe+/kernel/crypto/ansi_cprng.ko
lib/modules/6.13.0-xe+/kernel/crypto/cast6_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/twofish_common.ko
lib/modules/6.13.0-xe+/kernel/crypto/twofish_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/lz4hc.ko
lib/modules/6.13.0-xe+/kernel/crypto/blowfish_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/md4.ko
lib/modules/6.13.0-xe+/kernel/crypto/chacha20poly1305.ko
lib/modules/6.13.0-xe+/kernel/crypto/curve25519-generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/lz4.ko
lib/modules/6.13.0-xe+/kernel/crypto/rmd160.ko
lib/modules/6.13.0-xe+/kernel/crypto/algif_skcipher.ko
lib/modules/6.13.0-xe+/kernel/crypto/cast5_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/fcrypt.ko
lib/modules/6.13.0-xe+/kernel/crypto/ecdsa_generic.ko
lib/modules/6.13.0-xe+/kernel/crypto/sm4.ko
lib/modules/6.13.0-xe+/kernel/crypto/cast_common.ko
lib/modules/6.13.0-xe+/kernel/crypto/blowfish_common.ko
lib/modules/6.13.0-xe+/kernel/crypto/michael_mic.ko
lib/modules/6.13.0-xe+/kernel/crypto/async_tx/
lib/modules/6.13.0-xe+/kernel/crypto/async_tx/async_xor.ko
lib/modules/6.13.0-xe+/kernel/crypto/async_tx/async_tx.ko
lib/modules/6.13.0-xe+/kernel/crypto/async_tx/async_memcpy.ko
lib/modules/6.13.0-xe+/kernel/crypto/async_tx/async_pq.ko
lib/modules/6.13.0-xe+/kernel/crypto/async_tx/async_raid6_recov.ko
lib/modules/6.13.0-xe+/kernel/crypto/algif_rng.ko
lib/modules/6.13.0-xe+/kernel/block/
lib/modules/6.13.0-xe+/kernel/block/bfq.ko
lib/modules/6.13.0-xe+/kernel/block/kyber-iosched.ko
lib/modules/6.13.0-xe+/build
lib/modules/6.13.0-xe+/modules.alias.bin
lib/modules/6.13.0-xe+/modules.builtin
lib/modules/6.13.0-xe+/modules.softdep
lib/modules/6.13.0-xe+/modules.alias
lib/modules/6.13.0-xe+/modules.order
lib/modules/6.13.0-xe+/modules.symbols
lib/modules/6.13.0-xe+/modules.dep.bin
+ mv kernel-nodebug.tar.gz ..
+ cd ..
+ rm -rf archive
++ date +%s
+ echo -e '\e[0Ksection_end:1737630798:package_x86_64_nodebug\r\e[0K'
+ sync
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* ✓ CI.Hooks: success for drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
` (8 preceding siblings ...)
2025-01-23 11:13 ` ✓ CI.Build: " Patchwork
@ 2025-01-23 11:15 ` Patchwork
2025-01-23 11:17 ` ✓ CI.checksparse: " Patchwork
` (2 subsequent siblings)
12 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2025-01-23 11:15 UTC (permalink / raw)
To: Philipp Stanner; +Cc: intel-xe
== Series Details ==
Series: drm/sched: Use struct for drm_sched_init() params
URL : https://patchwork.freedesktop.org/series/143883/
State : success
== Summary ==
run-parts: executing /workspace/ci/hooks/00-showenv
+ export
+ grep -Ei '(^|\W)CI_'
declare -x CI_KERNEL_BUILD_DIR="/workspace/kernel/build64-default"
declare -x CI_KERNEL_SRC_DIR="/workspace/kernel"
declare -x CI_TOOLS_SRC_DIR="/workspace/ci"
declare -x CI_WORKSPACE_DIR="/workspace"
run-parts: executing /workspace/ci/hooks/10-build-W1
+ SRC_DIR=/workspace/kernel
+ RESTORE_DISPLAY_CONFIG=0
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ cd /workspace/kernel
++ nproc
+ make -j48 O=/workspace/kernel/build64-default modules_prepare
make[1]: Entering directory '/workspace/kernel/build64-default'
GEN Makefile
mkdir -p /workspace/kernel/build64-default/tools/objtool && make O=/workspace/kernel/build64-default subdir=tools/objtool --no-print-directory -C objtool
CALL ../scripts/checksyscalls.sh
INSTALL libsubcmd_headers
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/exec-cmd.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/help.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/pager.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/parse-options.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/run-command.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/sigchain.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/subcmd-config.o
LD /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd-in.o
AR /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd.a
CC /workspace/kernel/build64-default/tools/objtool/weak.o
CC /workspace/kernel/build64-default/tools/objtool/check.o
CC /workspace/kernel/build64-default/tools/objtool/builtin-check.o
CC /workspace/kernel/build64-default/tools/objtool/special.o
CC /workspace/kernel/build64-default/tools/objtool/elf.o
CC /workspace/kernel/build64-default/tools/objtool/objtool.o
CC /workspace/kernel/build64-default/tools/objtool/orc_gen.o
CC /workspace/kernel/build64-default/tools/objtool/orc_dump.o
CC /workspace/kernel/build64-default/tools/objtool/libstring.o
CC /workspace/kernel/build64-default/tools/objtool/arch/x86/special.o
CC /workspace/kernel/build64-default/tools/objtool/libctype.o
CC /workspace/kernel/build64-default/tools/objtool/str_error_r.o
CC /workspace/kernel/build64-default/tools/objtool/arch/x86/decode.o
CC /workspace/kernel/build64-default/tools/objtool/librbtree.o
CC /workspace/kernel/build64-default/tools/objtool/arch/x86/orc.o
LD /workspace/kernel/build64-default/tools/objtool/arch/x86/objtool-in.o
LD /workspace/kernel/build64-default/tools/objtool/objtool-in.o
LINK /workspace/kernel/build64-default/tools/objtool/objtool
make[1]: Leaving directory '/workspace/kernel/build64-default'
++ nproc
+ make -j48 O=/workspace/kernel/build64-default W=1 drivers/gpu/drm/xe
make[1]: Entering directory '/workspace/kernel/build64-default'
make[2]: Nothing to be done for 'drivers/gpu/drm/xe'.
make[1]: Leaving directory '/workspace/kernel/build64-default'
run-parts: executing /workspace/ci/hooks/11-build-32b
+++ realpath /workspace/ci/hooks/11-build-32b
++ dirname /workspace/ci/hooks/11-build-32b
+ THIS_SCRIPT_DIR=/workspace/ci/hooks
+ SRC_DIR=/workspace/kernel
+ TOOLS_SRC_DIR=/workspace/ci
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ BUILD_DIR=/workspace/kernel/build64-default/build32
+ cd /workspace/kernel
+ mkdir -p /workspace/kernel/build64-default/build32
++ nproc
+ make -j48 ARCH=i386 O=/workspace/kernel/build64-default/build32 defconfig
make[1]: Entering directory '/workspace/kernel/build64-default/build32'
GEN Makefile
HOSTCC scripts/basic/fixdep
HOSTCC scripts/kconfig/conf.o
HOSTCC scripts/kconfig/confdata.o
HOSTCC scripts/kconfig/expr.o
LEX scripts/kconfig/lexer.lex.c
YACC scripts/kconfig/parser.tab.[ch]
HOSTCC scripts/kconfig/menu.o
HOSTCC scripts/kconfig/preprocess.o
HOSTCC scripts/kconfig/symbol.o
HOSTCC scripts/kconfig/util.o
HOSTCC scripts/kconfig/lexer.lex.o
HOSTCC scripts/kconfig/parser.tab.o
HOSTLD scripts/kconfig/conf
*** Default configuration is based on 'i386_defconfig'
#
# configuration written to .config
#
make[1]: Leaving directory '/workspace/kernel/build64-default/build32'
+ cd /workspace/kernel/build64-default/build32
+ /workspace/kernel/scripts/kconfig/merge_config.sh .config /workspace/ci/kernel/fragments/10-xe.fragment
Using .config as base
Merging /workspace/ci/kernel/fragments/10-xe.fragment
Value of CONFIG_DRM_XE is redefined by fragment /workspace/ci/kernel/fragments/10-xe.fragment:
Previous value: # CONFIG_DRM_XE is not set
New value: CONFIG_DRM_XE=m
GEN Makefile
WARNING: unmet direct dependencies detected for FB_IOMEM_HELPERS
Depends on [n]: HAS_IOMEM [=y] && FB_CORE [=n]
Selected by [m]:
- DRM_XE_DISPLAY [=y] && HAS_IOMEM [=y] && DRM [=y] && DRM_XE [=m] && DRM_XE [=m]=m [=m] && HAS_IOPORT [=y]
#
# configuration written to .config
#
Value requested for CONFIG_HAVE_UID16 not in final .config
Requested value: CONFIG_HAVE_UID16=y
Actual value:
Value requested for CONFIG_UID16 not in final .config
Requested value: CONFIG_UID16=y
Actual value:
Value requested for CONFIG_X86_32 not in final .config
Requested value: CONFIG_X86_32=y
Actual value:
Value requested for CONFIG_OUTPUT_FORMAT not in final .config
Requested value: CONFIG_OUTPUT_FORMAT="elf32-i386"
Actual value: CONFIG_OUTPUT_FORMAT="elf64-x86-64"
Value requested for CONFIG_ARCH_MMAP_RND_BITS_MIN not in final .config
Requested value: CONFIG_ARCH_MMAP_RND_BITS_MIN=8
Actual value: CONFIG_ARCH_MMAP_RND_BITS_MIN=28
Value requested for CONFIG_ARCH_MMAP_RND_BITS_MAX not in final .config
Requested value: CONFIG_ARCH_MMAP_RND_BITS_MAX=16
Actual value: CONFIG_ARCH_MMAP_RND_BITS_MAX=32
Value requested for CONFIG_PGTABLE_LEVELS not in final .config
Requested value: CONFIG_PGTABLE_LEVELS=2
Actual value: CONFIG_PGTABLE_LEVELS=5
Value requested for CONFIG_X86_BIGSMP not in final .config
Requested value: # CONFIG_X86_BIGSMP is not set
Actual value:
Value requested for CONFIG_X86_INTEL_QUARK not in final .config
Requested value: # CONFIG_X86_INTEL_QUARK is not set
Actual value:
Value requested for CONFIG_X86_RDC321X not in final .config
Requested value: # CONFIG_X86_RDC321X is not set
Actual value:
Value requested for CONFIG_X86_32_NON_STANDARD not in final .config
Requested value: # CONFIG_X86_32_NON_STANDARD is not set
Actual value:
Value requested for CONFIG_X86_32_IRIS not in final .config
Requested value: # CONFIG_X86_32_IRIS is not set
Actual value:
Value requested for CONFIG_M486SX not in final .config
Requested value: # CONFIG_M486SX is not set
Actual value:
Value requested for CONFIG_M486 not in final .config
Requested value: # CONFIG_M486 is not set
Actual value:
Value requested for CONFIG_M586 not in final .config
Requested value: # CONFIG_M586 is not set
Actual value:
Value requested for CONFIG_M586TSC not in final .config
Requested value: # CONFIG_M586TSC is not set
Actual value:
Value requested for CONFIG_M586MMX not in final .config
Requested value: # CONFIG_M586MMX is not set
Actual value:
Value requested for CONFIG_M686 not in final .config
Requested value: CONFIG_M686=y
Actual value:
Value requested for CONFIG_MPENTIUMII not in final .config
Requested value: # CONFIG_MPENTIUMII is not set
Actual value:
Value requested for CONFIG_MPENTIUMIII not in final .config
Requested value: # CONFIG_MPENTIUMIII is not set
Actual value:
Value requested for CONFIG_MPENTIUMM not in final .config
Requested value: # CONFIG_MPENTIUMM is not set
Actual value:
Value requested for CONFIG_MPENTIUM4 not in final .config
Requested value: # CONFIG_MPENTIUM4 is not set
Actual value:
Value requested for CONFIG_MK6 not in final .config
Requested value: # CONFIG_MK6 is not set
Actual value:
Value requested for CONFIG_MK7 not in final .config
Requested value: # CONFIG_MK7 is not set
Actual value:
Value requested for CONFIG_MCRUSOE not in final .config
Requested value: # CONFIG_MCRUSOE is not set
Actual value:
Value requested for CONFIG_MEFFICEON not in final .config
Requested value: # CONFIG_MEFFICEON is not set
Actual value:
Value requested for CONFIG_MWINCHIPC6 not in final .config
Requested value: # CONFIG_MWINCHIPC6 is not set
Actual value:
Value requested for CONFIG_MWINCHIP3D not in final .config
Requested value: # CONFIG_MWINCHIP3D is not set
Actual value:
Value requested for CONFIG_MELAN not in final .config
Requested value: # CONFIG_MELAN is not set
Actual value:
Value requested for CONFIG_MGEODEGX1 not in final .config
Requested value: # CONFIG_MGEODEGX1 is not set
Actual value:
Value requested for CONFIG_MGEODE_LX not in final .config
Requested value: # CONFIG_MGEODE_LX is not set
Actual value:
Value requested for CONFIG_MCYRIXIII not in final .config
Requested value: # CONFIG_MCYRIXIII is not set
Actual value:
Value requested for CONFIG_MVIAC3_2 not in final .config
Requested value: # CONFIG_MVIAC3_2 is not set
Actual value:
Value requested for CONFIG_MVIAC7 not in final .config
Requested value: # CONFIG_MVIAC7 is not set
Actual value:
Value requested for CONFIG_X86_GENERIC not in final .config
Requested value: # CONFIG_X86_GENERIC is not set
Actual value:
Value requested for CONFIG_X86_INTERNODE_CACHE_SHIFT not in final .config
Requested value: CONFIG_X86_INTERNODE_CACHE_SHIFT=5
Actual value: CONFIG_X86_INTERNODE_CACHE_SHIFT=6
Value requested for CONFIG_X86_L1_CACHE_SHIFT not in final .config
Requested value: CONFIG_X86_L1_CACHE_SHIFT=5
Actual value: CONFIG_X86_L1_CACHE_SHIFT=6
Value requested for CONFIG_X86_USE_PPRO_CHECKSUM not in final .config
Requested value: CONFIG_X86_USE_PPRO_CHECKSUM=y
Actual value:
Value requested for CONFIG_X86_MINIMUM_CPU_FAMILY not in final .config
Requested value: CONFIG_X86_MINIMUM_CPU_FAMILY=6
Actual value: CONFIG_X86_MINIMUM_CPU_FAMILY=64
Value requested for CONFIG_CPU_SUP_TRANSMETA_32 not in final .config
Requested value: CONFIG_CPU_SUP_TRANSMETA_32=y
Actual value:
Value requested for CONFIG_CPU_SUP_VORTEX_32 not in final .config
Requested value: CONFIG_CPU_SUP_VORTEX_32=y
Actual value:
Value requested for CONFIG_HPET_TIMER not in final .config
Requested value: # CONFIG_HPET_TIMER is not set
Actual value: CONFIG_HPET_TIMER=y
Value requested for CONFIG_NR_CPUS_RANGE_END not in final .config
Requested value: CONFIG_NR_CPUS_RANGE_END=8
Actual value: CONFIG_NR_CPUS_RANGE_END=512
Value requested for CONFIG_NR_CPUS_DEFAULT not in final .config
Requested value: CONFIG_NR_CPUS_DEFAULT=8
Actual value: CONFIG_NR_CPUS_DEFAULT=64
Value requested for CONFIG_X86_ANCIENT_MCE not in final .config
Requested value: # CONFIG_X86_ANCIENT_MCE is not set
Actual value:
Value requested for CONFIG_X86_LEGACY_VM86 not in final .config
Requested value: # CONFIG_X86_LEGACY_VM86 is not set
Actual value:
Value requested for CONFIG_X86_ESPFIX32 not in final .config
Requested value: CONFIG_X86_ESPFIX32=y
Actual value:
Value requested for CONFIG_TOSHIBA not in final .config
Requested value: # CONFIG_TOSHIBA is not set
Actual value:
Value requested for CONFIG_X86_REBOOTFIXUPS not in final .config
Requested value: # CONFIG_X86_REBOOTFIXUPS is not set
Actual value:
Value requested for CONFIG_MICROCODE_INITRD32 not in final .config
Requested value: CONFIG_MICROCODE_INITRD32=y
Actual value:
Value requested for CONFIG_NOHIGHMEM not in final .config
Requested value: # CONFIG_NOHIGHMEM is not set
Actual value:
Value requested for CONFIG_HIGHMEM4G not in final .config
Requested value: CONFIG_HIGHMEM4G=y
Actual value:
Value requested for CONFIG_HIGHMEM64G not in final .config
Requested value: # CONFIG_HIGHMEM64G is not set
Actual value:
Value requested for CONFIG_VMSPLIT_3G not in final .config
Requested value: CONFIG_VMSPLIT_3G=y
Actual value:
Value requested for CONFIG_VMSPLIT_3G_OPT not in final .config
Requested value: # CONFIG_VMSPLIT_3G_OPT is not set
Actual value:
Value requested for CONFIG_VMSPLIT_2G not in final .config
Requested value: # CONFIG_VMSPLIT_2G is not set
Actual value:
Value requested for CONFIG_VMSPLIT_2G_OPT not in final .config
Requested value: # CONFIG_VMSPLIT_2G_OPT is not set
Actual value:
Value requested for CONFIG_VMSPLIT_1G not in final .config
Requested value: # CONFIG_VMSPLIT_1G is not set
Actual value:
Value requested for CONFIG_PAGE_OFFSET not in final .config
Requested value: CONFIG_PAGE_OFFSET=0xC0000000
Actual value:
Value requested for CONFIG_HIGHMEM not in final .config
Requested value: CONFIG_HIGHMEM=y
Actual value:
Value requested for CONFIG_X86_PAE not in final .config
Requested value: # CONFIG_X86_PAE is not set
Actual value:
Value requested for CONFIG_ARCH_FLATMEM_ENABLE not in final .config
Requested value: CONFIG_ARCH_FLATMEM_ENABLE=y
Actual value:
Value requested for CONFIG_ARCH_SELECT_MEMORY_MODEL not in final .config
Requested value: CONFIG_ARCH_SELECT_MEMORY_MODEL=y
Actual value:
Value requested for CONFIG_ILLEGAL_POINTER_VALUE not in final .config
Requested value: CONFIG_ILLEGAL_POINTER_VALUE=0
Actual value: CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
Value requested for CONFIG_HIGHPTE not in final .config
Requested value: # CONFIG_HIGHPTE is not set
Actual value:
Value requested for CONFIG_COMPAT_VDSO not in final .config
Requested value: # CONFIG_COMPAT_VDSO is not set
Actual value:
Value requested for CONFIG_FUNCTION_PADDING_CFI not in final .config
Requested value: CONFIG_FUNCTION_PADDING_CFI=0
Actual value: CONFIG_FUNCTION_PADDING_CFI=11
Value requested for CONFIG_FUNCTION_PADDING_BYTES not in final .config
Requested value: CONFIG_FUNCTION_PADDING_BYTES=4
Actual value: CONFIG_FUNCTION_PADDING_BYTES=16
Value requested for CONFIG_APM not in final .config
Requested value: # CONFIG_APM is not set
Actual value:
Value requested for CONFIG_X86_POWERNOW_K6 not in final .config
Requested value: # CONFIG_X86_POWERNOW_K6 is not set
Actual value:
Value requested for CONFIG_X86_POWERNOW_K7 not in final .config
Requested value: # CONFIG_X86_POWERNOW_K7 is not set
Actual value:
Value requested for CONFIG_X86_GX_SUSPMOD not in final .config
Requested value: # CONFIG_X86_GX_SUSPMOD is not set
Actual value:
Value requested for CONFIG_X86_SPEEDSTEP_ICH not in final .config
Requested value: # CONFIG_X86_SPEEDSTEP_ICH is not set
Actual value:
Value requested for CONFIG_X86_SPEEDSTEP_SMI not in final .config
Requested value: # CONFIG_X86_SPEEDSTEP_SMI is not set
Actual value:
Value requested for CONFIG_X86_CPUFREQ_NFORCE2 not in final .config
Requested value: # CONFIG_X86_CPUFREQ_NFORCE2 is not set
Actual value:
Value requested for CONFIG_X86_LONGRUN not in final .config
Requested value: # CONFIG_X86_LONGRUN is not set
Actual value:
Value requested for CONFIG_X86_LONGHAUL not in final .config
Requested value: # CONFIG_X86_LONGHAUL is not set
Actual value:
Value requested for CONFIG_X86_E_POWERSAVER not in final .config
Requested value: # CONFIG_X86_E_POWERSAVER is not set
Actual value:
Value requested for CONFIG_PCI_GOBIOS not in final .config
Requested value: # CONFIG_PCI_GOBIOS is not set
Actual value:
Value requested for CONFIG_PCI_GOMMCONFIG not in final .config
Requested value: # CONFIG_PCI_GOMMCONFIG is not set
Actual value:
Value requested for CONFIG_PCI_GODIRECT not in final .config
Requested value: # CONFIG_PCI_GODIRECT is not set
Actual value:
Value requested for CONFIG_PCI_GOANY not in final .config
Requested value: CONFIG_PCI_GOANY=y
Actual value:
Value requested for CONFIG_PCI_BIOS not in final .config
Requested value: CONFIG_PCI_BIOS=y
Actual value:
Value requested for CONFIG_ISA not in final .config
Requested value: # CONFIG_ISA is not set
Actual value:
Value requested for CONFIG_SCx200 not in final .config
Requested value: # CONFIG_SCx200 is not set
Actual value:
Value requested for CONFIG_OLPC not in final .config
Requested value: # CONFIG_OLPC is not set
Actual value:
Value requested for CONFIG_ALIX not in final .config
Requested value: # CONFIG_ALIX is not set
Actual value:
Value requested for CONFIG_NET5501 not in final .config
Requested value: # CONFIG_NET5501 is not set
Actual value:
Value requested for CONFIG_GEOS not in final .config
Requested value: # CONFIG_GEOS is not set
Actual value:
Value requested for CONFIG_COMPAT_32 not in final .config
Requested value: CONFIG_COMPAT_32=y
Actual value:
Value requested for CONFIG_HAVE_ATOMIC_IOMAP not in final .config
Requested value: CONFIG_HAVE_ATOMIC_IOMAP=y
Actual value:
Value requested for CONFIG_ARCH_32BIT_OFF_T not in final .config
Requested value: CONFIG_ARCH_32BIT_OFF_T=y
Actual value:
Value requested for CONFIG_ARCH_WANT_IPC_PARSE_VERSION not in final .config
Requested value: CONFIG_ARCH_WANT_IPC_PARSE_VERSION=y
Actual value:
Value requested for CONFIG_MODULES_USE_ELF_REL not in final .config
Requested value: CONFIG_MODULES_USE_ELF_REL=y
Actual value:
Value requested for CONFIG_ARCH_MMAP_RND_BITS not in final .config
Requested value: CONFIG_ARCH_MMAP_RND_BITS=8
Actual value: CONFIG_ARCH_MMAP_RND_BITS=28
Value requested for CONFIG_CLONE_BACKWARDS not in final .config
Requested value: CONFIG_CLONE_BACKWARDS=y
Actual value:
Value requested for CONFIG_OLD_SIGSUSPEND3 not in final .config
Requested value: CONFIG_OLD_SIGSUSPEND3=y
Actual value:
Value requested for CONFIG_OLD_SIGACTION not in final .config
Requested value: CONFIG_OLD_SIGACTION=y
Actual value:
Value requested for CONFIG_ARCH_SPLIT_ARG64 not in final .config
Requested value: CONFIG_ARCH_SPLIT_ARG64=y
Actual value:
Value requested for CONFIG_FUNCTION_ALIGNMENT not in final .config
Requested value: CONFIG_FUNCTION_ALIGNMENT=4
Actual value: CONFIG_FUNCTION_ALIGNMENT=16
Value requested for CONFIG_SELECT_MEMORY_MODEL not in final .config
Requested value: CONFIG_SELECT_MEMORY_MODEL=y
Actual value:
Value requested for CONFIG_FLATMEM_MANUAL not in final .config
Requested value: CONFIG_FLATMEM_MANUAL=y
Actual value:
Value requested for CONFIG_SPARSEMEM_MANUAL not in final .config
Requested value: # CONFIG_SPARSEMEM_MANUAL is not set
Actual value:
Value requested for CONFIG_FLATMEM not in final .config
Requested value: CONFIG_FLATMEM=y
Actual value:
Value requested for CONFIG_SPARSEMEM_STATIC not in final .config
Requested value: CONFIG_SPARSEMEM_STATIC=y
Actual value:
Value requested for CONFIG_BOUNCE not in final .config
Requested value: CONFIG_BOUNCE=y
Actual value:
Value requested for CONFIG_KMAP_LOCAL not in final .config
Requested value: CONFIG_KMAP_LOCAL=y
Actual value:
Value requested for CONFIG_HOTPLUG_PCI_COMPAQ not in final .config
Requested value: # CONFIG_HOTPLUG_PCI_COMPAQ is not set
Actual value:
Value requested for CONFIG_HOTPLUG_PCI_IBM not in final .config
Requested value: # CONFIG_HOTPLUG_PCI_IBM is not set
Actual value:
Value requested for CONFIG_EFI_CAPSULE_QUIRK_QUARK_CSH not in final .config
Requested value: CONFIG_EFI_CAPSULE_QUIRK_QUARK_CSH=y
Actual value:
Value requested for CONFIG_PCH_PHUB not in final .config
Requested value: # CONFIG_PCH_PHUB is not set
Actual value:
Value requested for CONFIG_SCSI_NSP32 not in final .config
Requested value: # CONFIG_SCSI_NSP32 is not set
Actual value:
Value requested for CONFIG_PATA_CS5520 not in final .config
Requested value: # CONFIG_PATA_CS5520 is not set
Actual value:
Value requested for CONFIG_PATA_CS5530 not in final .config
Requested value: # CONFIG_PATA_CS5530 is not set
Actual value:
Value requested for CONFIG_PATA_CS5535 not in final .config
Requested value: # CONFIG_PATA_CS5535 is not set
Actual value:
Value requested for CONFIG_PATA_CS5536 not in final .config
Requested value: # CONFIG_PATA_CS5536 is not set
Actual value:
Value requested for CONFIG_PATA_SC1200 not in final .config
Requested value: # CONFIG_PATA_SC1200 is not set
Actual value:
Value requested for CONFIG_PCH_GBE not in final .config
Requested value: # CONFIG_PCH_GBE is not set
Actual value:
Value requested for CONFIG_INPUT_WISTRON_BTNS not in final .config
Requested value: # CONFIG_INPUT_WISTRON_BTNS is not set
Actual value:
Value requested for CONFIG_SERIAL_TIMBERDALE not in final .config
Requested value: # CONFIG_SERIAL_TIMBERDALE is not set
Actual value:
Value requested for CONFIG_SERIAL_PCH_UART not in final .config
Requested value: # CONFIG_SERIAL_PCH_UART is not set
Actual value:
Value requested for CONFIG_HW_RANDOM_GEODE not in final .config
Requested value: CONFIG_HW_RANDOM_GEODE=y
Actual value:
Value requested for CONFIG_SONYPI not in final .config
Requested value: # CONFIG_SONYPI is not set
Actual value:
Value requested for CONFIG_PC8736x_GPIO not in final .config
Requested value: # CONFIG_PC8736x_GPIO is not set
Actual value:
Value requested for CONFIG_NSC_GPIO not in final .config
Requested value: # CONFIG_NSC_GPIO is not set
Actual value:
Value requested for CONFIG_I2C_EG20T not in final .config
Requested value: # CONFIG_I2C_EG20T is not set
Actual value:
Value requested for CONFIG_SCx200_ACB not in final .config
Requested value: # CONFIG_SCx200_ACB is not set
Actual value:
Value requested for CONFIG_PTP_1588_CLOCK_PCH not in final .config
Requested value: # CONFIG_PTP_1588_CLOCK_PCH is not set
Actual value:
Value requested for CONFIG_SBC8360_WDT not in final .config
Requested value: # CONFIG_SBC8360_WDT is not set
Actual value:
Value requested for CONFIG_SBC7240_WDT not in final .config
Requested value: # CONFIG_SBC7240_WDT is not set
Actual value:
Value requested for CONFIG_MFD_CS5535 not in final .config
Requested value: # CONFIG_MFD_CS5535 is not set
Actual value:
Value requested for CONFIG_AGP_ALI not in final .config
Requested value: # CONFIG_AGP_ALI is not set
Actual value:
Value requested for CONFIG_AGP_ATI not in final .config
Requested value: # CONFIG_AGP_ATI is not set
Actual value:
Value requested for CONFIG_AGP_AMD not in final .config
Requested value: # CONFIG_AGP_AMD is not set
Actual value:
Value requested for CONFIG_AGP_NVIDIA not in final .config
Requested value: # CONFIG_AGP_NVIDIA is not set
Actual value:
Value requested for CONFIG_AGP_SWORKS not in final .config
Requested value: # CONFIG_AGP_SWORKS is not set
Actual value:
Value requested for CONFIG_AGP_EFFICEON not in final .config
Requested value: # CONFIG_AGP_EFFICEON is not set
Actual value:
Value requested for CONFIG_SND_CS5530 not in final .config
Requested value: # CONFIG_SND_CS5530 is not set
Actual value:
Value requested for CONFIG_SND_CS5535AUDIO not in final .config
Requested value: # CONFIG_SND_CS5535AUDIO is not set
Actual value:
Value requested for CONFIG_SND_SIS7019 not in final .config
Requested value: # CONFIG_SND_SIS7019 is not set
Actual value:
Value requested for CONFIG_LEDS_OT200 not in final .config
Requested value: # CONFIG_LEDS_OT200 is not set
Actual value:
Value requested for CONFIG_PCH_DMA not in final .config
Requested value: # CONFIG_PCH_DMA is not set
Actual value:
Value requested for CONFIG_CLKSRC_I8253 not in final .config
Requested value: CONFIG_CLKSRC_I8253=y
Actual value:
Value requested for CONFIG_MAILBOX not in final .config
Requested value: # CONFIG_MAILBOX is not set
Actual value: CONFIG_MAILBOX=y
Value requested for CONFIG_CRYPTO_SERPENT_SSE2_586 not in final .config
Requested value: # CONFIG_CRYPTO_SERPENT_SSE2_586 is not set
Actual value:
Value requested for CONFIG_CRYPTO_TWOFISH_586 not in final .config
Requested value: # CONFIG_CRYPTO_TWOFISH_586 is not set
Actual value:
Value requested for CONFIG_CRYPTO_DEV_GEODE not in final .config
Requested value: # CONFIG_CRYPTO_DEV_GEODE is not set
Actual value:
Value requested for CONFIG_CRYPTO_DEV_HIFN_795X not in final .config
Requested value: # CONFIG_CRYPTO_DEV_HIFN_795X is not set
Actual value:
Value requested for CONFIG_CRYPTO_LIB_POLY1305_RSIZE not in final .config
Requested value: CONFIG_CRYPTO_LIB_POLY1305_RSIZE=1
Actual value: CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
Value requested for CONFIG_AUDIT_GENERIC not in final .config
Requested value: CONFIG_AUDIT_GENERIC=y
Actual value:
Value requested for CONFIG_GENERIC_VDSO_32 not in final .config
Requested value: CONFIG_GENERIC_VDSO_32=y
Actual value:
Value requested for CONFIG_DEBUG_KMAP_LOCAL not in final .config
Requested value: # CONFIG_DEBUG_KMAP_LOCAL is not set
Actual value:
Value requested for CONFIG_DEBUG_HIGHMEM not in final .config
Requested value: # CONFIG_DEBUG_HIGHMEM is not set
Actual value:
Value requested for CONFIG_HAVE_DEBUG_STACKOVERFLOW not in final .config
Requested value: CONFIG_HAVE_DEBUG_STACKOVERFLOW=y
Actual value:
Value requested for CONFIG_DEBUG_STACKOVERFLOW not in final .config
Requested value: # CONFIG_DEBUG_STACKOVERFLOW is not set
Actual value:
Value requested for CONFIG_HAVE_FUNCTION_GRAPH_TRACER not in final .config
Requested value: CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
Actual value:
Value requested for CONFIG_HAVE_FUNCTION_GRAPH_RETVAL not in final .config
Requested value: CONFIG_HAVE_FUNCTION_GRAPH_RETVAL=y
Actual value:
Value requested for CONFIG_DRM_KUNIT_TEST not in final .config
Requested value: CONFIG_DRM_KUNIT_TEST=m
Actual value:
Value requested for CONFIG_DRM_XE_WERROR not in final .config
Requested value: CONFIG_DRM_XE_WERROR=y
Actual value:
Value requested for CONFIG_DRM_XE_DEBUG not in final .config
Requested value: CONFIG_DRM_XE_DEBUG=y
Actual value:
Value requested for CONFIG_DRM_XE_DEBUG_MEM not in final .config
Requested value: CONFIG_DRM_XE_DEBUG_MEM=y
Actual value:
Value requested for CONFIG_DRM_XE_KUNIT_TEST not in final .config
Requested value: CONFIG_DRM_XE_KUNIT_TEST=m
Actual value:
++ nproc
+ make -j48 ARCH=i386 olddefconfig
GEN Makefile
WARNING: unmet direct dependencies detected for FB_IOMEM_HELPERS
Depends on [n]: HAS_IOMEM [=y] && FB_CORE [=n]
Selected by [m]:
- DRM_XE_DISPLAY [=y] && HAS_IOMEM [=y] && DRM [=y] && DRM_XE [=m] && DRM_XE [=m]=m [=m] && HAS_IOPORT [=y]
#
# configuration written to .config
#
++ nproc
+ make -j48 ARCH=i386
SYNC include/config/auto.conf.cmd
GEN Makefile
WARNING: unmet direct dependencies detected for FB_IOMEM_HELPERS
Depends on [n]: HAS_IOMEM [=y] && FB_CORE [=n]
Selected by [m]:
- DRM_XE_DISPLAY [=y] && HAS_IOMEM [=y] && DRM [=y] && DRM_XE [=m] && DRM_XE [=m]=m [=m] && HAS_IOPORT [=y]
WARNING: unmet direct dependencies detected for FB_IOMEM_HELPERS
Depends on [n]: HAS_IOMEM [=y] && FB_CORE [=n]
Selected by [m]:
- DRM_XE_DISPLAY [=y] && HAS_IOMEM [=y] && DRM [=y] && DRM_XE [=m] && DRM_XE [=m]=m [=m] && HAS_IOPORT [=y]
WARNING: unmet direct dependencies detected for FB_IOMEM_HELPERS
Depends on [n]: HAS_IOMEM [=y] && FB_CORE [=n]
Selected by [m]:
- DRM_XE_DISPLAY [=y] && HAS_IOMEM [=y] && DRM [=y] && DRM_XE [=m] && DRM_XE [=m]=m [=m] && HAS_IOPORT [=y]
GEN Makefile
WRAP arch/x86/include/generated/uapi/asm/bpf_perf_event.h
WRAP arch/x86/include/generated/uapi/asm/errno.h
UPD include/generated/uapi/linux/version.h
WRAP arch/x86/include/generated/uapi/asm/fcntl.h
WRAP arch/x86/include/generated/uapi/asm/ioctls.h
WRAP arch/x86/include/generated/uapi/asm/ioctl.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_32.h
WRAP arch/x86/include/generated/uapi/asm/ipcbuf.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_64.h
WRAP arch/x86/include/generated/uapi/asm/param.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_x32.h
WRAP arch/x86/include/generated/uapi/asm/poll.h
SYSTBL arch/x86/include/generated/asm/syscalls_32.h
WRAP arch/x86/include/generated/uapi/asm/resource.h
WRAP arch/x86/include/generated/uapi/asm/socket.h
WRAP arch/x86/include/generated/uapi/asm/sockios.h
WRAP arch/x86/include/generated/uapi/asm/termbits.h
WRAP arch/x86/include/generated/uapi/asm/termios.h
HOSTCC arch/x86/tools/relocs_32.o
WRAP arch/x86/include/generated/uapi/asm/types.h
HOSTCC arch/x86/tools/relocs_64.o
UPD include/generated/compile.h
HOSTCC arch/x86/tools/relocs_common.o
WRAP arch/x86/include/generated/asm/early_ioremap.h
WRAP arch/x86/include/generated/asm/mcs_spinlock.h
WRAP arch/x86/include/generated/asm/mmzone.h
WRAP arch/x86/include/generated/asm/irq_regs.h
WRAP arch/x86/include/generated/asm/kmap_size.h
WRAP arch/x86/include/generated/asm/local64.h
WRAP arch/x86/include/generated/asm/mmiowb.h
WRAP arch/x86/include/generated/asm/module.lds.h
HOSTCC scripts/kallsyms
WRAP arch/x86/include/generated/asm/rwonce.h
HOSTCC scripts/sorttable
HOSTCC scripts/asn1_compiler
HOSTCC scripts/selinux/mdp/mdp
HOSTLD arch/x86/tools/relocs
UPD include/config/kernel.release
UPD include/generated/utsrelease.h
CC scripts/mod/empty.o
HOSTCC scripts/mod/mk_elfconfig
CC scripts/mod/devicetable-offsets.s
UPD scripts/mod/devicetable-offsets.h
MKELF scripts/mod/elfconfig.h
HOSTCC scripts/mod/modpost.o
HOSTCC scripts/mod/file2alias.o
HOSTCC scripts/mod/sumversion.o
HOSTCC scripts/mod/symsearch.o
HOSTLD scripts/mod/modpost
CC kernel/bounds.s
CHKSHA1 /workspace/kernel/include/linux/atomic/atomic-arch-fallback.h
CHKSHA1 /workspace/kernel/include/linux/atomic/atomic-instrumented.h
CHKSHA1 /workspace/kernel/include/linux/atomic/atomic-long.h
UPD include/generated/timeconst.h
UPD include/generated/bounds.h
CC arch/x86/kernel/asm-offsets.s
UPD include/generated/asm-offsets.h
CALL /workspace/kernel/scripts/checksyscalls.sh
LDS scripts/module.lds
CC init/main.o
CC init/do_mounts.o
CC init/do_mounts_initrd.o
HOSTCC usr/gen_init_cpio
UPD init/utsversion-tmp.h
CC init/initramfs.o
CC certs/system_keyring.o
CC init/calibrate.o
CC init/init_task.o
CC ipc/util.o
AS arch/x86/entry/entry.o
CC ipc/msgutil.o
CC io_uring/io_uring.o
AS arch/x86/entry/entry_32.o
CC security/commoncap.o
CC arch/x86/events/core.o
AR arch/x86/crypto/built-in.a
CC arch/x86/realmode/init.o
CC ipc/msg.o
AS arch/x86/lib/atomic64_cx8_32.o
CC mm/filemap.o
CC init/version.o
CC arch/x86/entry/syscall_32.o
AR arch/x86/net/built-in.a
CC arch/x86/video/video-common.o
CC arch/x86/power/cpu.o
AR arch/x86/entry/vsyscall/built-in.a
CC arch/x86/pci/i386.o
CC security/keys/gc.o
CC security/integrity/iint.o
HOSTCC security/selinux/genheaders
CC arch/x86/events/amd/core.o
CC arch/x86/events/zhaoxin/core.o
CC arch/x86/events/intel/core.o
CC block/partitions/core.o
AR virt/lib/built-in.a
CC fs/nfs_common/nfsacl.o
AR arch/x86/platform/atom/built-in.a
CC arch/x86/mm/pat/set_memory.o
CC security/lsm_syscalls.o
CC arch/x86/pci/init.o
CC arch/x86/kernel/fpu/init.o
CC arch/x86/virt/svm/cmdline.o
CC net/core/sock.o
AR virt/built-in.a
AR drivers/cache/built-in.a
CC fs/notify/dnotify/dnotify.o
CC lib/math/div64.o
CC sound/core/seq/seq.o
CC arch/x86/entry/vdso/vma.o
AR arch/x86/platform/ce4100/built-in.a
AR sound/i2c/other/built-in.a
AR drivers/irqchip/built-in.a
AS arch/x86/lib/checksum_32.o
AR sound/i2c/built-in.a
CC arch/x86/platform/efi/memmap.o
CC fs/iomap/trace.o
CC kernel/locking/mutex.o
CC arch/x86/mm/pat/memtype.o
AR drivers/bus/mhi/built-in.a
CC kernel/sched/core.o
CC block/bdev.o
AR drivers/bus/built-in.a
CC arch/x86/lib/cmdline.o
CC drivers/connector/connector.o
CC mm/debug.o
CC arch/x86/kernel/cpu/topology.o
AR drivers/dma/idxd/built-in.a
AR drivers/dma/amd/built-in.a
CC drivers/block/loop.o
CC net/sunrpc/svc.o
CC drivers/char/hw_random/amd-rng.o
CC drivers/acpi/sleep.o
AR fs/netfs/built-in.a
CC drivers/gpu/drm/display/drm_dp_helper.o
CC drivers/tty/tty_buffer.o
CC crypto/cmac.o
AR drivers/dma/hsu/built-in.a
CC drivers/block/virtio_blk.o
AR drivers/dma/mediatek/built-in.a
CC drivers/virtio/virtio_pci_modern_dev.o
AR drivers/dma/qcom/built-in.a
AR drivers/dma/stm32/built-in.a
CC drivers/misc/eeprom/eeprom_93cx6.o
AR drivers/dma/ti/built-in.a
AR drivers/dma/xilinx/built-in.a
CC drivers/acpi/acpica/exoparg1.o
CC drivers/gpu/drm/ttm/ttm_tt.o
CC drivers/dma/dmaengine.o
CC drivers/tty/serial/8250/8250_rsa.o
AR drivers/mfd/built-in.a
CC drivers/dma/virt-dma.o
CC io_uring/fdinfo.o
CC kernel/trace/trace_stat.o
CC block/early-lookup.o
CC drivers/tty/vt/keyboard.o
CC fs/ext4/fsmap.o
CC drivers/char/hw_random/geode-rng.o
CC drivers/iommu/iommu-sysfs.o
CC kernel/time/clocksource.o
CC drivers/base/power/common.o
CC kernel/cgroup/misc.o
CC sound/hda/hdac_regmap.o
CC net/core/neighbour.o
AR fs/proc/built-in.a
CC net/ipv6/route.o
CC mm/gup.o
CC arch/x86/kernel/cpu/proc.o
CC crypto/hmac.o
AR drivers/misc/eeprom/built-in.a
AR drivers/misc/cb710/built-in.a
AR drivers/misc/lis3lv02d/built-in.a
AR drivers/misc/cardreader/built-in.a
CC drivers/connector/cn_proc.o
AR drivers/misc/keba/built-in.a
AR drivers/misc/built-in.a
CC sound/hda/hdac_controller.o
CC drivers/acpi/acpica/exoparg2.o
CC drivers/base/firmware_loader/builtin/main.o
CC sound/hda/hdac_stream.o
CC sound/hda/array.o
CC drivers/gpu/drm/display/drm_dp_mst_topology.o
CC net/mac80211/agg-tx.o
CC net/netfilter/nf_conntrack_irc.o
CC fs/fat/namei_msdos.o
CC drivers/tty/serial/8250/8250_port.o
CC drivers/char/random.o
CC net/ipv6/ip6_fib.o
CC drivers/pci/iomap.o
CC fs/isofs/namei.o
CC drivers/virtio/virtio_pci_legacy_dev.o
CC drivers/char/agp/isoch.o
CC drivers/base/power/qos.o
CC kernel/cgroup/debug.o
CC drivers/acpi/acpica/exoparg3.o
CC drivers/char/hw_random/via-rng.o
CC drivers/iommu/dma-iommu.o
CC net/ethtool/cmis_fw_update.o
CC block/bounce.o
AR drivers/base/firmware_loader/builtin/built-in.a
CC drivers/base/firmware_loader/main.o
CC drivers/gpu/drm/ttm/ttm_bo.o
CC drivers/gpu/drm/display/drm_dsc_helper.o
CC kernel/trace/trace_printk.o
CC io_uring/cancel.o
CC net/sunrpc/svcsock.o
CC arch/x86/kernel/cpu/feat_ctl.o
CC drivers/virtio/virtio_pci_modern.o
CC crypto/crypto_null.o
CC kernel/time/jiffies.o
CC net/netfilter/nf_conntrack_sip.o
CC drivers/acpi/acpica/exoparg6.o
AR drivers/char/hw_random/built-in.a
CC fs/nfs/client.o
CC arch/x86/kernel/dumpstack.o
CC drivers/dma/acpi-dma.o
AR drivers/block/built-in.a
CC net/ipv4/tcp.o
CC drivers/tty/serial/serial_base_bus.o
CC block/bsg.o
CC net/core/rtnetlink.o
CC fs/isofs/inode.o
CC drivers/pci/quirks.o
CC arch/x86/kernel/cpu/intel.o
CC drivers/char/agp/amd64-agp.o
CC sound/hda/hdmi_chmap.o
CC kernel/time/timer_list.o
CC drivers/acpi/acpica/exprep.o
CC mm/mmap_lock.o
AR drivers/connector/built-in.a
CC net/ipv6/ipv6_sockglue.o
CC drivers/tty/vt/vt.o
CC mm/highmem.o
AR kernel/cgroup/built-in.a
CC net/ipv6/ndisc.o
AR fs/fat/built-in.a
CC drivers/tty/serial/8250/8250_dma.o
CC crypto/md5.o
CC arch/x86/kernel/cpu/tsx.o
CC net/netfilter/nf_nat_core.o
CC kernel/trace/pid_list.o
CC drivers/pci/pci-label.o
CC net/ethtool/cmis_cdb.o
AR drivers/base/firmware_loader/built-in.a
CC drivers/gpu/drm/ttm/ttm_bo_util.o
CC drivers/acpi/device_sysfs.o
CC net/netfilter/nf_nat_proto.o
CC net/ipv4/tcp_input.o
CC drivers/acpi/acpica/exregion.o
CC drivers/base/power/runtime.o
CC io_uring/waitid.o
AR drivers/dma/built-in.a
CC drivers/virtio/virtio_pci_common.o
CC drivers/char/misc.o
CC fs/ext4/fsync.o
CC kernel/bpf/core.o
CC net/wireless/chan.o
CC drivers/tty/tty_port.o
CC block/blk-cgroup.o
AR sound/virtio/built-in.a
CC drivers/tty/serial/serial_ctrl.o
CC kernel/time/timeconv.o
CC arch/x86/kernel/cpu/intel_epb.o
CC crypto/sha256_generic.o
CC drivers/iommu/iova.o
CC drivers/base/power/wakeirq.o
CC drivers/char/agp/intel-agp.o
CC drivers/acpi/acpica/exresnte.o
CC drivers/acpi/acpica/exresolv.o
CC fs/ext4/hash.o
CC drivers/gpu/drm/i915/i915_config.o
CC sound/hda/trace.o
CC mm/memory.o
CC drivers/tty/serial/8250/8250_dwlib.o
CC sound/hda/hdac_component.o
CC io_uring/register.o
CC io_uring/truncate.o
CC kernel/time/timecounter.o
CC fs/isofs/dir.o
CC kernel/time/alarmtimer.o
CC arch/x86/kernel/cpu/amd.o
CC kernel/trace/trace_sched_switch.o
CC drivers/base/power/main.o
CC drivers/gpu/drm/i915/i915_driver.o
CC fs/ext4/ialloc.o
AR drivers/gpu/drm/renesas/rcar-du/built-in.a
AR drivers/gpu/drm/renesas/rz-du/built-in.a
AR drivers/gpu/drm/renesas/built-in.a
AR drivers/gpu/drm/omapdrm/built-in.a
CC net/wireless/ethtool.o
CC lib/memcat_p.o
CC drivers/acpi/acpica/exresop.o
CC drivers/gpu/drm/ttm/ttm_bo_vm.o
CC fs/nfs/dir.o
CC crypto/sha512_generic.o
CC kernel/events/core.o
CC drivers/base/power/wakeup.o
CC drivers/virtio/virtio_pci_legacy.o
CC net/ethtool/pse-pd.o
CC kernel/trace/trace_nop.o
CC net/ipv4/tcp_output.o
CC net/mac80211/agg-rx.o
CC mm/mincore.o
CC drivers/gpu/drm/display/drm_hdcp_helper.o
CC drivers/char/agp/intel-gtt.o
AR drivers/iommu/built-in.a
CC net/sunrpc/svcauth.o
CC drivers/char/virtio_console.o
CC drivers/tty/serial/8250/8250_pcilib.o
CC drivers/acpi/acpica/exserial.o
CC crypto/sha3_generic.o
CC fs/isofs/util.o
CC arch/x86/kernel/cpu/hygon.o
CC drivers/virtio/virtio_pci_admin_legacy_io.o
CC sound/hda/hdac_i915.o
CC net/netfilter/nf_nat_helper.o
CC net/sunrpc/svcauth_unix.o
CC kernel/fork.o
CC fs/nfs/file.o
COPY drivers/tty/vt/defkeymap.c
CC lib/nmi_backtrace.o
CC block/blk-ioprio.o
CC net/ethtool/plca.o
CC drivers/gpu/drm/ttm/ttm_module.o
CC drivers/pci/vgaarb.o
CC drivers/gpu/drm/ttm/ttm_execbuf_util.o
CC drivers/acpi/acpica/exstore.o
CC net/mac80211/vht.o
CC kernel/events/ring_buffer.o
CC crypto/ecb.o
CC kernel/time/posix-timers.o
CC net/ipv4/tcp_timer.o
CC drivers/gpu/drm/display/drm_hdmi_helper.o
CC kernel/trace/blktrace.o
CC arch/x86/kernel/cpu/centaur.o
CC drivers/tty/serial/8250/8250_early.o
CC fs/isofs/rock.o
CC drivers/tty/vt/consolemap.o
CC drivers/base/power/wakeup_stats.o
CC drivers/acpi/acpica/exstoren.o
CC net/wireless/mesh.o
CC drivers/virtio/virtio_input.o
CC drivers/base/power/trace.o
CC sound/hda/intel-dsp-config.o
AR drivers/char/agp/built-in.a
CC arch/x86/kernel/cpu/transmeta.o
CC io_uring/memmap.o
CC drivers/acpi/device_pm.o
CC crypto/cbc.o
CC drivers/tty/tty_mutex.o
CC drivers/gpu/drm/ttm/ttm_range_manager.o
CC net/sunrpc/addr.o
CC block/blk-iolatency.o
CC drivers/gpu/drm/i915/i915_drm_client.o
CC block/blk-iocost.o
CC drivers/acpi/acpica/exstorob.o
CC net/ethtool/phy.o
CC drivers/tty/serial/8250/8250_exar.o
CC drivers/gpu/drm/display/drm_scdc_helper.o
CC net/ipv6/udp.o
CC drivers/char/hpet.o
CC fs/exportfs/expfs.o
AR drivers/pci/built-in.a
CC lib/objpool.o
CC crypto/ctr.o
AR kernel/bpf/built-in.a
CC kernel/exec_domain.o
CC arch/x86/kernel/cpu/zhaoxin.o
CC net/netfilter/nf_nat_masquerade.o
CC drivers/acpi/acpica/exsystem.o
CC fs/nfs/getroot.o
AR drivers/gpu/drm/tilcdc/built-in.a
CC net/core/utils.o
CC sound/hda/intel-nhlt.o
CC fs/isofs/export.o
AR drivers/base/power/built-in.a
HOSTCC drivers/tty/vt/conmakehash
CC drivers/base/regmap/regmap.o
CC drivers/virtio/virtio_dma_buf.o
CC drivers/base/regmap/regcache.o
CC drivers/gpu/drm/ttm/ttm_resource.o
CC fs/isofs/joliet.o
CC io_uring/io-wq.o
CC kernel/trace/trace_events.o
CC fs/ext4/indirect.o
CC kernel/time/posix-cpu-timers.o
CC kernel/time/posix-clock.o
CC drivers/tty/vt/defkeymap.o
CC net/netfilter/nf_nat_ftp.o
CC lib/plist.o
CC crypto/gcm.o
CC drivers/acpi/acpica/extrace.o
CC arch/x86/kernel/cpu/vortex.o
CC lib/radix-tree.o
CC kernel/events/callchain.o
CC drivers/gpu/drm/ttm/ttm_pool.o
AR fs/exportfs/built-in.a
CC drivers/tty/serial/serial_port.o
CC sound/sound_core.o
CONMK drivers/tty/vt/consolemap_deftbl.c
CC drivers/tty/vt/consolemap_deftbl.o
AR drivers/gpu/drm/display/built-in.a
CC drivers/gpu/drm/i915/i915_getparam.o
AR drivers/tty/vt/built-in.a
CC net/mac80211/he.o
CC arch/x86/kernel/nmi.o
AR drivers/nfc/built-in.a
AR drivers/dax/hmem/built-in.a
AR drivers/dax/built-in.a
CC sound/hda/intel-sdw-acpi.o
CC kernel/trace/trace_export.o
CC drivers/tty/serial/8250/8250_lpss.o
CC block/mq-deadline.o
CC net/ipv6/udplite.o
CC fs/ext4/inline.o
AR drivers/virtio/built-in.a
CC net/wireless/ap.o
CC drivers/char/nvram.o
CC io_uring/futex.o
CC drivers/acpi/acpica/exutils.o
CC drivers/base/regmap/regcache-rbtree.o
CC net/mac80211/s1g.o
CC arch/x86/kernel/cpu/perfctr-watchdog.o
AR net/ethtool/built-in.a
CC drivers/dma-buf/dma-buf.o
CC fs/isofs/compress.o
CC net/ipv4/tcp_ipv4.o
CC net/sunrpc/rpcb_clnt.o
CC net/sunrpc/timer.o
CC fs/nfs/inode.o
CC net/ipv4/tcp_minisocks.o
CC drivers/tty/tty_ldsem.o
AR sound/hda/built-in.a
CC crypto/ccm.o
CC mm/mlock.o
CC sound/last.o
CC drivers/acpi/acpica/hwacpi.o
CC block/kyber-iosched.o
CC net/core/link_watch.o
CC drivers/gpu/drm/ttm/ttm_device.o
CC kernel/trace/trace_event_perf.o
CC fs/lockd/clntlock.o
CC net/sunrpc/xdr.o
CC drivers/gpu/drm/ttm/ttm_sys_manager.o
CC lib/ratelimit.o
CC net/mac80211/ibss.o
CC drivers/tty/serial/8250/8250_mid.o
CC kernel/time/itimer.o
CC net/netfilter/nf_nat_irc.o
CC arch/x86/kernel/cpu/vmware.o
CC block/blk-mq-pci.o
CC drivers/gpu/drm/i915/i915_ioctl.o
AR sound/built-in.a
CC drivers/acpi/acpica/hwesleep.o
CC net/core/filter.o
CC drivers/gpu/drm/ttm/ttm_agp_backend.o
CC drivers/tty/serial/earlycon.o
AR drivers/char/built-in.a
CC net/sunrpc/sunrpc_syms.o
CC drivers/gpu/drm/i915/i915_irq.o
CC io_uring/napi.o
CC net/ipv6/raw.o
CC lib/rbtree.o
CC fs/nfs/super.o
CC drivers/tty/serial/8250/8250_pci.o
CC net/netfilter/nf_nat_sip.o
CC kernel/panic.o
AR fs/isofs/built-in.a
CC fs/ext4/inode.o
CC drivers/gpu/drm/virtio/virtgpu_drv.o
AR drivers/gpu/drm/imx/built-in.a
CC net/ipv4/tcp_cong.o
CC drivers/acpi/acpica/hwgpe.o
CC drivers/dma-buf/dma-fence.o
CC net/mac80211/iface.o
CC arch/x86/kernel/cpu/hypervisor.o
CC lib/seq_buf.o
CC mm/mmap.o
CC crypto/aes_generic.o
CC kernel/time/clockevents.o
AR drivers/gpu/drm/i2c/built-in.a
CC net/wireless/trace.o
CC kernel/events/hw_breakpoint.o
CC kernel/cpu.o
AR drivers/gpu/drm/ttm/built-in.a
CC drivers/base/regmap/regcache-flat.o
CC drivers/gpu/drm/virtio/virtgpu_kms.o
CC drivers/acpi/proc.o
AR drivers/cxl/core/built-in.a
CC kernel/events/uprobes.o
AR drivers/cxl/built-in.a
CC crypto/crc32c_generic.o
CC arch/x86/kernel/cpu/mshyperv.o
CC kernel/time/tick-common.o
CC drivers/dma-buf/dma-fence-array.o
CC lib/siphash.o
CC drivers/acpi/acpica/hwregs.o
CC net/sunrpc/cache.o
CC kernel/trace/trace_events_filter.o
CC fs/lockd/clntproc.o
CC net/wireless/ocb.o
AR drivers/base/test/built-in.a
CC drivers/acpi/bus.o
CC fs/ext4/ioctl.o
CC fs/nls/nls_base.o
CC fs/lockd/clntxdr.o
CC fs/nfs/io.o
CC net/ipv6/icmp.o
CC fs/nfs/direct.o
CC drivers/gpu/drm/virtio/virtgpu_gem.o
CC block/blk-mq-virtio.o
CC kernel/exit.o
CC drivers/base/regmap/regcache-maple.o
CC drivers/acpi/glue.o
CC drivers/gpu/drm/i915/i915_mitigations.o
CC lib/string.o
CC drivers/acpi/acpica/hwsleep.o
CC crypto/authenc.o
CC net/mac80211/link.o
AR io_uring/built-in.a
CC net/core/sock_diag.o
CC mm/mmu_gather.o
CC kernel/trace/trace_events_trigger.o
CC drivers/base/component.o
CC fs/nls/nls_cp437.o
CC lib/timerqueue.o
CC drivers/tty/serial/8250/8250_pericom.o
CC net/netfilter/x_tables.o
CC drivers/dma-buf/dma-fence-chain.o
CC arch/x86/kernel/cpu/debugfs.o
CC arch/x86/kernel/ldt.o
CC net/ipv4/tcp_metrics.o
CC lib/union_find.o
CC drivers/acpi/acpica/hwvalid.o
CC lib/vsprintf.o
CC lib/win_minmax.o
CC kernel/time/tick-broadcast.o
CC drivers/base/core.o
CC block/blk-mq-debugfs.o
CC fs/nls/nls_ascii.o
CC drivers/base/regmap/regmap-debugfs.o
CC drivers/base/bus.o
CC drivers/gpu/drm/virtio/virtgpu_vram.o
CC block/blk-pm.o
CC drivers/base/dd.o
CC drivers/acpi/acpica/hwxface.o
CC drivers/gpu/drm/virtio/virtgpu_display.o
CC drivers/dma-buf/dma-fence-unwrap.o
CC drivers/base/syscore.o
CC block/holder.o
CC fs/nls/nls_iso8859-1.o
CC kernel/trace/trace_eprobe.o
CC drivers/gpu/drm/i915/i915_module.o
CC mm/mprotect.o
CC arch/x86/kernel/cpu/bus_lock.o
CC kernel/time/tick-broadcast-hrtimer.o
AR drivers/tty/serial/8250/built-in.a
AR drivers/tty/serial/built-in.a
CC fs/nfs/pagelist.o
CC drivers/gpu/drm/i915/i915_params.o
CC drivers/tty/tty_baudrate.o
CC fs/lockd/host.o
CC crypto/authencesn.o
CC drivers/gpu/drm/virtio/virtgpu_vq.o
CC fs/ext4/mballoc.o
AR drivers/gpu/drm/panel/built-in.a
CC net/mac80211/rate.o
CC drivers/acpi/acpica/hwxfsleep.o
CC net/ipv6/mcast.o
CC fs/nls/nls_utf8.o
CC net/sunrpc/rpc_pipe.o
CC lib/xarray.o
CC fs/nfs/read.o
AR drivers/base/regmap/built-in.a
CC drivers/dma-buf/dma-resv.o
CC net/netfilter/xt_tcpudp.o
CC fs/ext4/migrate.o
CC kernel/time/tick-oneshot.o
CC arch/x86/kernel/cpu/capflags.o
AR fs/unicode/built-in.a
CC net/mac80211/michael.o
CC drivers/dma-buf/sync_file.o
CC fs/lockd/svc.o
CC lib/lockref.o
CC net/wireless/pmsr.o
CC drivers/tty/tty_jobctrl.o
AR block/built-in.a
CC fs/nfs/symlink.o
CC fs/nfs/unlink.o
CC drivers/acpi/acpica/hwpci.o
AR fs/nls/built-in.a
CC drivers/acpi/acpica/nsaccess.o
CC net/ipv4/tcp_fastopen.o
AR kernel/events/built-in.a
CC drivers/base/driver.o
CC drivers/macintosh/mac_hid.o
AR drivers/scsi/pcmcia/built-in.a
CC fs/autofs/init.o
CC drivers/scsi/scsi.o
CC kernel/time/tick-sched.o
CC fs/9p/vfs_super.o
AR arch/x86/kernel/cpu/built-in.a
CC arch/x86/kernel/setup.o
CC drivers/scsi/hosts.o
CC fs/lockd/svclock.o
CC fs/nfs/write.o
AR drivers/gpu/drm/bridge/analogix/built-in.a
AR drivers/gpu/drm/bridge/cadence/built-in.a
AR drivers/gpu/drm/bridge/imx/built-in.a
AR drivers/gpu/drm/bridge/synopsys/built-in.a
AR drivers/gpu/drm/bridge/built-in.a
CC drivers/tty/n_null.o
CC drivers/tty/pty.o
CC drivers/gpu/drm/i915/i915_pci.o
CC drivers/tty/tty_audit.o
CC crypto/lzo.o
CC mm/mremap.o
CC drivers/acpi/acpica/nsalloc.o
CC kernel/trace/trace_kprobe.o
CC drivers/tty/sysrq.o
CC net/ipv6/reassembly.o
CC fs/autofs/inode.o
CC kernel/time/timer_migration.o
CC net/netfilter/xt_CONNSECMARK.o
CC fs/9p/vfs_inode.o
AR drivers/nvme/common/built-in.a
AR drivers/dma-buf/built-in.a
CC net/mac80211/tkip.o
AR drivers/nvme/host/built-in.a
CC fs/nfs/namespace.o
AR drivers/nvme/target/built-in.a
AR drivers/macintosh/built-in.a
AR drivers/nvme/built-in.a
CC fs/lockd/svcshare.o
AR fs/hostfs/built-in.a
CC drivers/gpu/drm/virtio/virtgpu_fence.o
GEN net/wireless/shipped-certs.c
CC kernel/time/vsyscall.o
CC fs/9p/vfs_inode_dotl.o
CC kernel/trace/error_report-traces.o
CC net/ipv6/tcp_ipv6.o
CC drivers/acpi/acpica/nsarguments.o
CC net/netfilter/xt_NFLOG.o
CC fs/lockd/svcproc.o
CC arch/x86/kernel/x86_init.o
CC drivers/gpu/drm/i915/i915_scatterlist.o
CC mm/msync.o
CC mm/page_vma_mapped.o
CC drivers/base/class.o
CC crypto/lzo-rle.o
CC net/netfilter/xt_SECMARK.o
CC fs/autofs/root.o
CC drivers/ata/libata-core.o
CC net/sunrpc/sysfs.o
CC drivers/acpi/acpica/nsconvert.o
AR drivers/net/phy/mediatek/built-in.a
AR drivers/net/phy/qcom/built-in.a
CC drivers/net/phy/mdio-boardinfo.o
AR drivers/net/pse-pd/built-in.a
AR drivers/gpu/drm/hisilicon/built-in.a
CC drivers/net/mdio/acpi_mdio.o
AR drivers/net/pcs/built-in.a
CC kernel/softirq.o
CC drivers/acpi/scan.o
CC fs/nfs/mount_clnt.o
CC net/core/dev_ioctl.o
CC net/ipv4/tcp_rate.o
AR drivers/tty/built-in.a
CC net/core/tso.o
CC drivers/scsi/scsi_ioctl.o
CC drivers/gpu/drm/virtio/virtgpu_object.o
CC net/mac80211/aes_cmac.o
AR drivers/net/ethernet/3com/built-in.a
CC fs/ext4/mmp.o
CC drivers/net/ethernet/8390/ne2k-pci.o
CC lib/bcd.o
CC crypto/rng.o
CC drivers/net/phy/stubs.o
CC lib/sort.o
CC drivers/acpi/acpica/nsdump.o
CC arch/x86/kernel/i8259.o
AR drivers/gpu/drm/mxsfb/built-in.a
CC drivers/net/phy/mdio_devres.o
CC net/mac80211/aes_gmac.o
CC drivers/gpu/drm/i915/i915_switcheroo.o
CC net/sunrpc/svc_xprt.o
CC net/ipv6/ping.o
CC lib/parser.o
CC fs/9p/vfs_addr.o
CC fs/9p/vfs_file.o
CC drivers/gpu/drm/virtio/virtgpu_debugfs.o
CC mm/pagewalk.o
CC net/ipv4/tcp_recovery.o
CC drivers/base/platform.o
CC drivers/firewire/init_ohci1394_dma.o
CC drivers/base/cpu.o
AR drivers/gpu/drm/tiny/built-in.a
CC drivers/cdrom/cdrom.o
CC drivers/acpi/acpica/nseval.o
CC net/netfilter/xt_TCPMSS.o
CC fs/autofs/symlink.o
CC drivers/net/mdio/fwnode_mdio.o
CC fs/debugfs/inode.o
CC fs/lockd/svcsubs.o
CC lib/debug_locks.o
CC net/ipv4/tcp_ulp.o
CC arch/x86/kernel/irqinit.o
CC net/sunrpc/xprtmultipath.o
CC kernel/time/timekeeping_debug.o
CC lib/random32.o
CC kernel/trace/power-traces.o
CC drivers/acpi/acpica/nsinit.o
CC lib/bust_spinlocks.o
CC drivers/scsi/scsicam.o
AR drivers/net/wireless/admtek/built-in.a
AR drivers/net/wireless/ath/built-in.a
AR drivers/net/wireless/atmel/built-in.a
AR drivers/net/wireless/broadcom/built-in.a
AR drivers/net/usb/built-in.a
CC fs/9p/vfs_dir.o
AR drivers/net/wireless/intel/built-in.a
CC net/ipv6/exthdrs.o
AR drivers/net/wireless/intersil/built-in.a
CC drivers/gpu/drm/virtio/virtgpu_plane.o
AR drivers/net/wireless/marvell/built-in.a
CC crypto/drbg.o
AR drivers/net/wireless/mediatek/built-in.a
CC kernel/resource.o
CC drivers/scsi/scsi_error.o
CC drivers/net/phy/phy.o
AR drivers/net/wireless/microchip/built-in.a
CC fs/tracefs/inode.o
AR drivers/firewire/built-in.a
CC drivers/net/phy/phy-c45.o
AR drivers/net/wireless/purelifi/built-in.a
CC net/netfilter/xt_conntrack.o
AR drivers/net/wireless/quantenna/built-in.a
CC drivers/net/phy/phy-core.o
AR drivers/net/wireless/ralink/built-in.a
AR drivers/net/wireless/realtek/built-in.a
CC drivers/net/ethernet/8390/8390.o
AR drivers/net/wireless/rsi/built-in.a
CC kernel/time/namespace.o
AR drivers/net/wireless/silabs/built-in.a
AR drivers/net/wireless/st/built-in.a
AR drivers/net/wireless/ti/built-in.a
CC drivers/net/mii.o
AR drivers/net/wireless/zydas/built-in.a
AR drivers/net/wireless/virtual/built-in.a
AR drivers/net/wireless/built-in.a
AR drivers/net/ethernet/adaptec/built-in.a
CC drivers/gpu/drm/virtio/virtgpu_ioctl.o
CC fs/nfs/nfstrace.o
CC fs/debugfs/file.o
CC fs/autofs/waitq.o
CC drivers/acpi/acpica/nsload.o
CC kernel/sysctl.o
CC drivers/gpu/drm/i915/i915_sysfs.o
CC fs/lockd/mon.o
CC mm/pgtable-generic.o
CC net/mac80211/fils_aead.o
AR drivers/net/ethernet/agere/built-in.a
AR drivers/auxdisplay/built-in.a
CC net/ipv6/datagram.o
CC drivers/base/firmware.o
CC drivers/net/phy/phy_device.o
CC net/core/sock_reuseport.o
CC net/sunrpc/stats.o
CC lib/kasprintf.o
AR drivers/net/mdio/built-in.a
CC fs/ext4/move_extent.o
AR drivers/gpu/drm/xlnx/built-in.a
CC net/ipv4/tcp_offload.o
CC drivers/ata/libata-scsi.o
CC drivers/acpi/acpica/nsnames.o
CC arch/x86/kernel/jump_label.o
CC fs/tracefs/event_inode.o
CC drivers/gpu/drm/i915/i915_utils.o
CC fs/9p/vfs_dentry.o
CC drivers/base/init.o
CC fs/lockd/trace.o
CC lib/bitmap.o
CC [M] fs/efivarfs/inode.o
CC crypto/jitterentropy.o
AR kernel/time/built-in.a
CC fs/lockd/xdr.o
CC net/ipv6/ip6_flowlabel.o
CC drivers/pcmcia/cs.o
CC crypto/jitterentropy-kcapi.o
CC net/wireless/shipped-certs.o
CC drivers/pcmcia/socket_sysfs.o
CC drivers/acpi/acpica/nsobject.o
CC mm/rmap.o
CC fs/autofs/expire.o
CC net/netfilter/xt_policy.o
CC drivers/gpu/drm/virtio/virtgpu_prime.o
AR drivers/gpu/drm/gud/built-in.a
CC kernel/trace/rpm-traces.o
AR drivers/cdrom/built-in.a
AR drivers/net/ethernet/8390/built-in.a
CC net/ipv6/inet6_connection_sock.o
AR drivers/net/ethernet/alacritech/built-in.a
CC drivers/acpi/mipi-disco-img.o
CC net/ipv4/tcp_plb.o
AR drivers/net/ethernet/alteon/built-in.a
AR drivers/net/ethernet/amazon/built-in.a
CC arch/x86/kernel/irq_work.o
AR drivers/net/ethernet/amd/built-in.a
AR drivers/net/ethernet/aquantia/built-in.a
CC kernel/trace/trace_dynevent.o
AR drivers/net/ethernet/arc/built-in.a
CC fs/autofs/dev-ioctl.o
AR drivers/net/ethernet/asix/built-in.a
AR drivers/net/ethernet/atheros/built-in.a
AR drivers/net/ethernet/cadence/built-in.a
CC drivers/net/ethernet/broadcom/bnx2.o
CC drivers/net/ethernet/broadcom/tg3.o
CC fs/9p/v9fs.o
CC drivers/base/map.o
AR fs/debugfs/built-in.a
CC crypto/ghash-generic.o
CC drivers/base/devres.o
CC drivers/acpi/acpica/nsparse.o
CC drivers/scsi/scsi_lib.o
CC [M] fs/efivarfs/file.o
CC drivers/acpi/resource.o
CC drivers/pcmcia/cardbus.o
CC net/ipv4/datagram.o
CC fs/ext4/namei.o
CC net/sunrpc/sysctl.o
CC net/mac80211/cfg.o
CC lib/scatterlist.o
AR fs/tracefs/built-in.a
CC kernel/capability.o
CC drivers/gpu/drm/i915/intel_clock_gating.o
CC fs/nfs/export.o
CC net/core/fib_notifier.o
CC net/netfilter/xt_state.o
CC mm/vmalloc.o
CC fs/9p/fid.o
CC drivers/scsi/constants.o
CC drivers/acpi/acpica/nspredef.o
CC net/ipv4/raw.o
CC crypto/hash_info.o
CC drivers/scsi/scsi_lib_dma.o
CC crypto/rsapubkey.asn1.o
CC drivers/gpu/drm/virtio/virtgpu_trace_points.o
CC fs/9p/xattr.o
CC net/core/xdp.o
CC crypto/rsaprivkey.asn1.o
CC lib/list_sort.o
AR crypto/built-in.a
CC [M] net/netfilter/nf_log_syslog.o
CC net/mac80211/ethtool.o
CC fs/open.o
CC fs/nfs/sysfs.o
CC [M] fs/efivarfs/super.o
AR drivers/net/ethernet/brocade/built-in.a
CC arch/x86/kernel/probe_roms.o
CC fs/lockd/clnt4xdr.o
CC mm/vma.o
CC drivers/pcmcia/ds.o
CC drivers/acpi/acpi_processor.o
CC kernel/trace/trace_probe.o
AR fs/autofs/built-in.a
CC drivers/base/attribute_container.o
AR drivers/gpu/drm/solomon/built-in.a
CC fs/nfs/fs_context.o
CC kernel/ptrace.o
CC kernel/user.o
CC drivers/usb/common/common.o
CC drivers/acpi/acpica/nsprepkg.o
CC net/ipv4/udp.o
CC drivers/net/phy/linkmode.o
CC drivers/net/loopback.o
CC net/ipv6/udp_offload.o
CC drivers/acpi/processor_core.o
CC net/core/flow_offload.o
CC [M] drivers/gpu/drm/scheduler/sched_main.o
CC drivers/acpi/acpica/nsrepair.o
AR net/sunrpc/built-in.a
AR drivers/net/ethernet/cavium/common/built-in.a
CC [M] fs/efivarfs/vars.o
CC [M] net/netfilter/xt_mark.o
AR drivers/net/ethernet/cavium/thunder/built-in.a
AR drivers/net/ethernet/cavium/liquidio/built-in.a
AR fs/9p/built-in.a
AR drivers/net/ethernet/cavium/octeon/built-in.a
CC drivers/usb/common/debug.o
AR drivers/net/ethernet/cavium/built-in.a
CC drivers/gpu/drm/virtio/virtgpu_submit.o
CC [M] drivers/gpu/drm/scheduler/sched_fence.o
CC drivers/base/transport_class.o
CC fs/nfs/nfsroot.o
CC lib/uuid.o
CC net/mac80211/rx.o
AR drivers/net/ethernet/chelsio/built-in.a
AR drivers/net/ethernet/cisco/built-in.a
CC drivers/acpi/processor_pdc.o
HOSTCC drivers/gpu/drm/xe/xe_gen_wa_oob
CC fs/ext4/page-io.o
CC fs/nfs/sysctl.o
CC arch/x86/kernel/sys_ia32.o
CC lib/iov_iter.o
CC drivers/ata/libata-eh.o
CC drivers/gpu/drm/i915/intel_cpu_info.o
CC [M] net/netfilter/xt_nat.o
CC drivers/gpu/drm/i915/intel_device_info.o
GEN xe_wa_oob.c xe_wa_oob.h
CC [M] drivers/gpu/drm/xe/xe_bb.o
AR drivers/usb/common/built-in.a
CC drivers/usb/core/usb.o
CC drivers/usb/core/hub.o
CC drivers/acpi/acpica/nsrepair2.o
CC drivers/base/topology.o
CC fs/nfs/nfs3super.o
CC arch/x86/kernel/ksysfs.o
CC drivers/input/serio/serio.o
CC drivers/scsi/scsi_scan.o
CC drivers/net/phy/phy_link_topology.o
CC drivers/pcmcia/pcmcia_resource.o
CC drivers/gpu/drm/drm_atomic.o
AR drivers/net/ethernet/cortina/built-in.a
CC fs/read_write.o
CC net/ipv6/seg6.o
CC fs/lockd/xdr4.o
CC fs/ext4/readpage.o
LD [M] fs/efivarfs/efivarfs.o
CC kernel/trace/trace_uprobe.o
CC lib/clz_ctz.o
CC mm/process_vm_access.o
CC drivers/acpi/acpica/nssearch.o
GEN drivers/scsi/scsi_devinfo_tbl.c
CC drivers/acpi/ec.o
CC net/ipv4/udplite.o
AR drivers/gpu/drm/virtio/built-in.a
CC drivers/base/container.o
CC [M] drivers/gpu/drm/xe/xe_bo.o
CC drivers/input/keyboard/atkbd.o
CC drivers/input/serio/i8042.o
CC drivers/ata/libata-transport.o
CC kernel/signal.o
CC drivers/rtc/lib.o
CC drivers/i2c/algos/i2c-algo-bit.o
CC net/core/gro.o
CC drivers/usb/core/hcd.o
CC drivers/i2c/busses/i2c-i801.o
CC arch/x86/kernel/bootflag.o
AR drivers/i2c/muxes/built-in.a
AR drivers/usb/phy/built-in.a
CC mm/page_alloc.o
AR drivers/i3c/built-in.a
CC lib/bsearch.o
CC drivers/acpi/acpica/nsutils.o
CC net/core/netdev-genl.o
CC drivers/acpi/dock.o
CC drivers/base/property.o
CC fs/nfs/nfs3client.o
CC [M] drivers/gpu/drm/scheduler/sched_entity.o
CC drivers/gpu/drm/i915/intel_memory_region.o
CC [M] net/netfilter/xt_LOG.o
CC drivers/net/phy/mdio_bus.o
CC drivers/net/netconsole.o
CC drivers/input/mouse/psmouse-base.o
CC drivers/ata/libata-trace.o
CC net/ipv6/fib6_notifier.o
CC drivers/rtc/class.o
CC fs/file_table.o
CC drivers/acpi/acpica/nswalk.o
CC drivers/pcmcia/cistpl.o
CC drivers/scsi/scsi_devinfo.o
CC arch/x86/kernel/e820.o
CC net/ipv6/rpl.o
AR drivers/media/i2c/built-in.a
CC fs/lockd/svc4proc.o
AR drivers/media/tuners/built-in.a
AR drivers/media/rc/keymaps/built-in.a
AR drivers/media/rc/built-in.a
AR drivers/net/ethernet/dec/tulip/built-in.a
AR drivers/net/ethernet/dec/built-in.a
CC drivers/input/serio/serport.o
CC drivers/gpu/drm/drm_atomic_uapi.o
AR drivers/input/joystick/built-in.a
AR drivers/media/common/b2c2/built-in.a
CC drivers/i2c/i2c-boardinfo.o
AR drivers/media/common/saa7146/built-in.a
CC drivers/ata/libata-sata.o
AR drivers/media/common/siano/built-in.a
AR drivers/input/keyboard/built-in.a
CC drivers/acpi/pci_root.o
AR drivers/media/common/v4l2-tpg/built-in.a
AR drivers/media/common/videobuf2/built-in.a
AR drivers/media/common/built-in.a
AR drivers/i2c/algos/built-in.a
CC drivers/input/serio/libps2.o
AR drivers/media/platform/allegro-dvt/built-in.a
LD [M] drivers/gpu/drm/scheduler/gpu-sched.o
CC drivers/net/phy/mdio_device.o
AR drivers/media/platform/amphion/built-in.a
AR drivers/media/platform/amlogic/meson-ge2d/built-in.a
AR drivers/media/platform/amlogic/built-in.a
CC drivers/acpi/pci_link.o
AR drivers/media/platform/aspeed/built-in.a
AR drivers/media/platform/atmel/built-in.a
AR drivers/media/platform/broadcom/built-in.a
CC drivers/acpi/acpica/nsxfeval.o
AR drivers/media/platform/cadence/built-in.a
AR drivers/media/platform/chips-media/coda/built-in.a
AR drivers/media/platform/chips-media/wave5/built-in.a
AR drivers/media/platform/chips-media/built-in.a
AR drivers/media/platform/imagination/built-in.a
CC drivers/usb/core/urb.o
AR drivers/media/platform/intel/built-in.a
AR drivers/media/platform/marvell/built-in.a
AR drivers/media/pci/ttpci/built-in.a
CC fs/ext4/resize.o
AR drivers/media/pci/b2c2/built-in.a
AR drivers/media/platform/mediatek/jpeg/built-in.a
AR drivers/media/pci/pluto2/built-in.a
AR drivers/media/platform/mediatek/mdp/built-in.a
AR drivers/media/pci/dm1105/built-in.a
AR drivers/media/platform/mediatek/vcodec/common/built-in.a
AR drivers/media/pci/pt1/built-in.a
CC drivers/usb/core/message.o
AR drivers/media/platform/mediatek/vcodec/encoder/built-in.a
AR drivers/media/pci/pt3/built-in.a
AR drivers/media/platform/mediatek/vcodec/decoder/built-in.a
AR drivers/media/platform/mediatek/vcodec/built-in.a
AR drivers/media/pci/mantis/built-in.a
CC drivers/rtc/interface.o
AR drivers/media/pci/ngene/built-in.a
AR drivers/media/platform/mediatek/vpu/built-in.a
AR drivers/media/pci/ddbridge/built-in.a
AR drivers/media/platform/mediatek/mdp3/built-in.a
AR drivers/media/pci/saa7146/built-in.a
AR drivers/media/platform/mediatek/built-in.a
AR drivers/media/pci/smipcie/built-in.a
AR drivers/i2c/busses/built-in.a
CC drivers/input/mouse/synaptics.o
AR drivers/media/pci/netup_unidvb/built-in.a
AR drivers/media/platform/microchip/built-in.a
CC drivers/usb/core/driver.o
AR drivers/media/platform/nuvoton/built-in.a
AR drivers/media/pci/intel/ipu3/built-in.a
CC fs/nfs/nfs3proc.o
AR drivers/media/pci/intel/ivsc/built-in.a
AR drivers/media/platform/nvidia/tegra-vde/built-in.a
CC drivers/gpu/drm/i915/intel_pcode.o
AR drivers/media/pci/intel/built-in.a
AR drivers/media/platform/nvidia/built-in.a
CC lib/find_bit.o
AR drivers/media/pci/built-in.a
AR drivers/media/platform/nxp/dw100/built-in.a
CC drivers/usb/core/config.o
AR drivers/media/platform/nxp/imx-jpeg/built-in.a
AR drivers/media/platform/nxp/imx8-isi/built-in.a
AR drivers/media/platform/nxp/built-in.a
AR drivers/media/platform/qcom/camss/built-in.a
AR drivers/media/platform/qcom/venus/built-in.a
AR drivers/media/platform/qcom/built-in.a
AR drivers/media/platform/raspberrypi/pisp_be/built-in.a
AR drivers/media/platform/raspberrypi/rp1-cfe/built-in.a
CC drivers/usb/core/file.o
CC drivers/scsi/scsi_sysctl.o
AR drivers/media/platform/raspberrypi/built-in.a
CC drivers/base/cacheinfo.o
AR drivers/media/platform/renesas/rcar-vin/built-in.a
AR drivers/media/platform/renesas/rzg2l-cru/built-in.a
CC kernel/trace/rethook.o
CC [M] net/netfilter/xt_MASQUERADE.o
AR drivers/media/platform/renesas/vsp1/built-in.a
CC drivers/acpi/acpica/nsxfname.o
AR drivers/media/platform/rockchip/rga/built-in.a
AR drivers/media/platform/renesas/built-in.a
AR drivers/media/platform/rockchip/rkisp1/built-in.a
AR drivers/media/platform/rockchip/built-in.a
CC drivers/scsi/scsi_proc.o
CC drivers/gpu/drm/drm_auth.o
CC lib/llist.o
CC drivers/base/swnode.o
AR drivers/media/platform/samsung/exynos-gsc/built-in.a
AR drivers/media/platform/samsung/exynos4-is/built-in.a
AR drivers/media/platform/samsung/s3c-camif/built-in.a
AR drivers/media/platform/samsung/s5p-g2d/built-in.a
AR drivers/media/platform/samsung/s5p-jpeg/built-in.a
CC drivers/base/auxiliary.o
AR drivers/media/platform/samsung/s5p-mfc/built-in.a
CC drivers/i2c/i2c-core-base.o
AR drivers/media/platform/samsung/built-in.a
CC drivers/usb/mon/mon_main.o
CC drivers/pcmcia/pcmcia_cis.o
CC lib/lwq.o
CC net/core/netdev-genl-gen.o
AR drivers/media/platform/st/sti/bdisp/built-in.a
CC net/ipv4/udp_offload.o
AR drivers/media/platform/sunxi/sun4i-csi/built-in.a
AR drivers/media/platform/st/sti/c8sectpfe/built-in.a
AR drivers/input/serio/built-in.a
AR drivers/media/platform/sunxi/sun6i-csi/built-in.a
AR drivers/media/platform/st/sti/delta/built-in.a
CC drivers/base/devtmpfs.o
AR drivers/media/platform/sunxi/sun6i-mipi-csi2/built-in.a
AR drivers/media/platform/st/sti/hva/built-in.a
AR drivers/media/platform/sunxi/sun8i-a83t-mipi-csi2/built-in.a
AR drivers/media/platform/st/stm32/built-in.a
AR drivers/media/platform/sunxi/sun8i-di/built-in.a
CC lib/memweight.o
AR drivers/media/platform/st/built-in.a
AR drivers/media/platform/sunxi/sun8i-rotate/built-in.a
CC fs/super.o
AR drivers/media/platform/sunxi/built-in.a
AR drivers/media/platform/ti/am437x/built-in.a
CC net/ipv6/ioam6.o
AR drivers/media/platform/ti/cal/built-in.a
AR drivers/media/platform/ti/vpe/built-in.a
AR drivers/media/platform/ti/davinci/built-in.a
CC net/mac80211/spectmgmt.o
AR drivers/media/platform/ti/j721e-csi2rx/built-in.a
CC arch/x86/kernel/pci-dma.o
CC drivers/gpu/drm/drm_blend.o
CC [M] drivers/gpu/drm/xe/xe_bo_evict.o
AR drivers/media/platform/ti/omap/built-in.a
CC lib/kfifo.o
AR drivers/media/platform/ti/omap3isp/built-in.a
AR drivers/media/platform/ti/built-in.a
CC drivers/net/phy/swphy.o
AR drivers/media/platform/verisilicon/built-in.a
AR drivers/media/platform/via/built-in.a
AR drivers/media/platform/xilinx/built-in.a
AR drivers/media/platform/built-in.a
CC net/ipv6/sysctl_net_ipv6.o
CC drivers/pcmcia/rsrc_mgr.o
AR net/wireless/built-in.a
CC fs/lockd/procfs.o
CC drivers/pcmcia/rsrc_nonstatic.o
CC lib/percpu-refcount.o
AR drivers/media/usb/b2c2/built-in.a
CC drivers/acpi/acpica/nsxfobj.o
CC fs/ext4/super.o
AR drivers/media/usb/dvb-usb/built-in.a
AR drivers/media/usb/dvb-usb-v2/built-in.a
AR drivers/media/usb/s2255/built-in.a
AR drivers/media/usb/siano/built-in.a
CC arch/x86/kernel/quirks.o
AR drivers/media/usb/ttusb-budget/built-in.a
AR drivers/media/usb/ttusb-dec/built-in.a
CC arch/x86/kernel/kdebugfs.o
AR drivers/media/usb/built-in.a
CC net/ipv6/xfrm6_policy.o
CC arch/x86/kernel/alternative.o
CC [M] drivers/gpu/drm/xe/xe_devcoredump.o
CC drivers/input/mouse/focaltech.o
AR drivers/media/mmc/siano/built-in.a
AR drivers/media/mmc/built-in.a
CC net/ipv4/arp.o
CC drivers/ata/libata-sff.o
AR drivers/media/firewire/built-in.a
AR drivers/media/spi/built-in.a
AR drivers/media/test-drivers/built-in.a
AR drivers/media/built-in.a
AR kernel/trace/built-in.a
CC drivers/rtc/nvmem.o
CC net/ipv6/xfrm6_state.o
CC drivers/gpu/drm/drm_bridge.o
CC net/core/gso.o
CC drivers/scsi/scsi_debugfs.o
CC drivers/usb/mon/mon_stat.o
CC drivers/acpi/acpica/psargs.o
CC drivers/gpu/drm/i915/intel_region_ttm.o
CC kernel/sys.o
CC fs/char_dev.o
CC drivers/scsi/scsi_trace.o
CC drivers/rtc/dev.o
CC drivers/pcmcia/yenta_socket.o
CC lib/rhashtable.o
CC fs/nfs/nfs3xdr.o
CC drivers/ata/libata-pmp.o
CC [M] net/netfilter/xt_addrtype.o
CC fs/ext4/symlink.o
CC drivers/base/module.o
CC kernel/umh.o
CC net/ipv6/xfrm6_input.o
CC drivers/acpi/acpica/psloop.o
CC drivers/usb/core/buffer.o
CC drivers/input/mouse/alps.o
CC drivers/acpi/pci_irq.o
CC drivers/net/virtio_net.o
AR fs/lockd/built-in.a
CC drivers/i2c/i2c-core-smbus.o
CC drivers/usb/mon/mon_text.o
CC drivers/net/phy/fixed_phy.o
CC drivers/scsi/scsi_logging.o
CC drivers/input/mouse/byd.o
CC kernel/workqueue.o
CC drivers/ata/libata-acpi.o
CC net/mac80211/tx.o
CC [M] drivers/gpu/drm/xe/xe_device.o
AR drivers/pps/clients/built-in.a
CC drivers/pps/pps.o
AR drivers/pps/generators/built-in.a
CC fs/nfs/nfs3acl.o
CC drivers/usb/host/pci-quirks.o
CC drivers/ptp/ptp_clock.o
CC drivers/base/auxiliary_sysfs.o
CC drivers/base/devcoredump.o
CC drivers/acpi/acpica/psobject.o
CC drivers/input/mouse/logips2pp.o
CC drivers/ptp/ptp_chardev.o
CC lib/base64.o
CC drivers/rtc/proc.o
CC drivers/acpi/acpi_apd.o
CC drivers/acpi/acpi_platform.o
CC drivers/gpu/drm/i915/intel_runtime_pm.o
CC kernel/pid.o
AR drivers/net/ethernet/dlink/built-in.a
CC drivers/acpi/acpica/psopcode.o
CC drivers/rtc/sysfs.o
CC fs/nfs/nfs4proc.o
CC mm/page_frag_cache.o
CC arch/x86/kernel/i8253.o
CC fs/stat.o
CC drivers/usb/core/sysfs.o
CC net/core/net-sysfs.o
CC drivers/base/platform-msi.o
CC drivers/usb/class/usblp.o
CC drivers/ptp/ptp_sysfs.o
CC arch/x86/kernel/hw_breakpoint.o
CC drivers/ata/libata-pata-timings.o
CC drivers/usb/mon/mon_bin.o
CC fs/exec.o
CC drivers/scsi/scsi_pm.o
CC drivers/pps/kapi.o
CC lib/once.o
CC drivers/acpi/acpica/psopinfo.o
CC drivers/base/physical_location.o
CC drivers/net/phy/realtek.o
CC mm/init-mm.o
CC net/ipv6/xfrm6_output.o
AR drivers/net/ethernet/emulex/built-in.a
CC fs/pipe.o
AR drivers/pcmcia/built-in.a
CC drivers/power/supply/power_supply_core.o
CC [M] drivers/gpu/drm/xe/xe_device_sysfs.o
CC drivers/ptp/ptp_vclock.o
CC mm/memblock.o
AR net/netfilter/built-in.a
CC drivers/ata/ahci.o
CC lib/refcount.o
CC net/ipv4/icmp.o
CC fs/nfs/nfs4xdr.o
CC drivers/usb/host/ehci-hcd.o
AR drivers/input/tablet/built-in.a
CC drivers/gpu/drm/drm_cache.o
CC drivers/rtc/rtc-mc146818-lib.o
CC drivers/acpi/acpica/psparse.o
CC net/ipv6/xfrm6_protocol.o
CC drivers/power/supply/power_supply_sysfs.o
CC fs/namei.o
CC drivers/usb/core/endpoint.o
CC drivers/net/net_failover.o
CC lib/rcuref.o
CC drivers/i2c/i2c-core-acpi.o
CC drivers/base/trace.o
CC drivers/usb/host/ehci-pci.o
CC drivers/acpi/acpi_pnp.o
CC drivers/pps/sysfs.o
CC drivers/usb/core/devio.o
CC drivers/usb/storage/scsiglue.o
CC lib/usercopy.o
CC drivers/scsi/scsi_bsg.o
CC drivers/ptp/ptp_kvm_x86.o
CC drivers/ptp/ptp_kvm_common.o
AR drivers/usb/class/built-in.a
CC drivers/input/mouse/lifebook.o
AR drivers/usb/misc/built-in.a
CC drivers/power/supply/power_supply_leds.o
CC drivers/acpi/acpica/psscope.o
CC arch/x86/kernel/tsc.o
CC drivers/rtc/rtc-cmos.o
CC drivers/gpu/drm/i915/intel_sbi.o
CC fs/fcntl.o
CC [M] drivers/gpu/drm/xe/xe_dma_buf.o
CC mm/slub.o
CC drivers/hwmon/hwmon.o
CC arch/x86/kernel/tsc_msr.o
CC net/core/hotdata.o
CC drivers/acpi/power.o
AR drivers/usb/mon/built-in.a
AR drivers/pps/built-in.a
CC lib/errseq.o
CC [M] drivers/gpu/drm/xe/xe_drm_client.o
AR drivers/net/ethernet/engleder/built-in.a
CC arch/x86/kernel/io_delay.o
CC drivers/i2c/i2c-smbus.o
CC drivers/ata/libahci.o
CC net/ipv6/netfilter.o
CC drivers/scsi/scsi_common.o
CC lib/bucket_locks.o
CC drivers/acpi/acpica/pstree.o
CC lib/generic-radix-tree.o
CC drivers/power/supply/power_supply_hwmon.o
CC drivers/acpi/acpica/psutils.o
AR drivers/net/phy/built-in.a
CC drivers/scsi/scsi_transport_spi.o
AR drivers/base/built-in.a
CC drivers/input/mouse/trackpoint.o
CC drivers/gpu/drm/i915/intel_step.o
CC drivers/usb/core/notify.o
CC mm/madvise.o
CC lib/bitmap-str.o
CC kernel/task_work.o
AR drivers/net/ethernet/ezchip/built-in.a
CC net/ipv4/devinet.o
CC drivers/input/mouse/cypress_ps2.o
CC arch/x86/kernel/rtc.o
AR drivers/ptp/built-in.a
CC [M] drivers/gpu/drm/xe/xe_exec.o
CC drivers/usb/storage/protocol.o
CC mm/page_io.o
CC drivers/usb/early/ehci-dbgp.o
AR drivers/net/ethernet/fujitsu/built-in.a
CC drivers/input/mouse/psmouse-smbus.o
AR drivers/net/ethernet/fungible/built-in.a
CC drivers/usb/core/generic.o
CC mm/swap_state.o
CC lib/string_helpers.o
CC drivers/gpu/drm/i915/intel_uncore.o
CC drivers/acpi/acpica/pswalk.o
CC fs/nfs/nfs4state.o
CC net/ipv4/af_inet.o
CC drivers/gpu/drm/drm_color_mgmt.o
CC [M] drivers/gpu/drm/xe/xe_execlist.o
AR drivers/power/supply/built-in.a
CC arch/x86/kernel/resource.o
AR drivers/power/built-in.a
AR drivers/net/ethernet/google/built-in.a
CC net/ipv6/proc.o
CC drivers/acpi/acpica/psxface.o
AR drivers/rtc/built-in.a
AR drivers/net/ethernet/hisilicon/built-in.a
CC mm/swapfile.o
AR drivers/i2c/built-in.a
CC fs/ioctl.o
CC drivers/gpu/drm/drm_connector.o
CC drivers/ata/ata_piix.o
CC net/core/netdev_rx_queue.o
CC drivers/usb/host/ohci-hcd.o
CC fs/ext4/sysfs.o
CC drivers/usb/storage/transport.o
CC net/mac80211/key.o
AS arch/x86/kernel/irqflags.o
CC mm/swap_slots.o
CC arch/x86/kernel/static_call.o
AR drivers/net/ethernet/huawei/built-in.a
CC net/ipv4/igmp.o
CC drivers/acpi/acpica/rsaddr.o
CC drivers/scsi/virtio_scsi.o
CC net/core/net-procfs.o
AR drivers/hwmon/built-in.a
CC net/ipv6/syncookies.o
CC fs/nfs/nfs4renewd.o
CC [M] drivers/gpu/drm/xe/xe_exec_queue.o
CC net/mac80211/util.o
CC lib/hexdump.o
CC kernel/extable.o
AR drivers/input/mouse/built-in.a
AR drivers/input/touchscreen/built-in.a
AR drivers/input/misc/built-in.a
AR drivers/usb/early/built-in.a
CC drivers/input/input.o
CC mm/dmapool.o
CC fs/nfs/nfs4super.o
CC drivers/usb/core/quirks.o
CC fs/readdir.o
CC drivers/acpi/event.o
CC drivers/acpi/acpica/rscalc.o
CC lib/kstrtox.o
CC arch/x86/kernel/process.o
CC mm/hugetlb.o
CC kernel/params.o
CC drivers/ata/pata_amd.o
CC fs/ext4/xattr.o
CC net/mac80211/parse.o
AR drivers/thermal/broadcom/built-in.a
AR drivers/thermal/renesas/built-in.a
AR drivers/thermal/samsung/built-in.a
CC drivers/thermal/intel/intel_tcc.o
CC net/core/netpoll.o
AR drivers/thermal/st/built-in.a
CC drivers/gpu/drm/drm_crtc.o
CC drivers/usb/host/ohci-pci.o
CC fs/nfs/nfs4file.o
CC drivers/gpu/drm/i915/intel_uncore_trace.o
CC drivers/acpi/acpica/rscreate.o
CC drivers/input/input-compat.o
CC lib/iomap.o
AR drivers/net/ethernet/broadcom/built-in.a
CC drivers/usb/storage/usb.o
CC drivers/thermal/intel/therm_throt.o
CC drivers/net/ethernet/intel/e1000/e1000_main.o
CC arch/x86/kernel/ptrace.o
CC drivers/acpi/evged.o
AR drivers/net/ethernet/i825xx/built-in.a
CC net/mac80211/wme.o
CC drivers/scsi/sd.o
CC drivers/usb/core/devices.o
CC [M] drivers/thermal/intel/x86_pkg_temp_thermal.o
CC drivers/net/ethernet/intel/e1000e/82571.o
CC arch/x86/kernel/tls.o
CC drivers/gpu/drm/drm_displayid.o
CC drivers/input/input-mt.o
CC drivers/acpi/acpica/rsdumpinfo.o
AR drivers/thermal/qcom/built-in.a
CC fs/nfs/delegation.o
CC net/mac80211/chan.o
CC fs/select.o
CC drivers/gpu/drm/i915/intel_wakeref.o
CC net/ipv6/calipso.o
CC drivers/net/ethernet/intel/e1000/e1000_hw.o
CC drivers/acpi/sysfs.o
CC fs/dcache.o
CC drivers/ata/pata_oldpiix.o
CC kernel/kthread.o
CC fs/nfs/nfs4idmap.o
CC fs/inode.o
CC [M] drivers/gpu/drm/xe/xe_force_wake.o
CC fs/attr.o
AR drivers/net/ethernet/microsoft/built-in.a
CC drivers/net/ethernet/intel/e100.o
CC drivers/acpi/acpica/rsinfo.o
CC drivers/usb/storage/initializers.o
CC lib/iomap_copy.o
CC mm/mmu_notifier.o
CC drivers/usb/host/uhci-hcd.o
CC fs/ext4/xattr_hurd.o
CC lib/devres.o
CC drivers/usb/core/phy.o
CC drivers/acpi/property.o
CC net/ipv4/fib_frontend.o
CC [M] drivers/gpu/drm/xe/xe_ggtt.o
CC [M] drivers/gpu/drm/xe/xe_gpu_scheduler.o
CC drivers/input/input-poller.o
CC drivers/acpi/acpica/rsio.o
CC fs/bad_inode.o
CC drivers/net/ethernet/intel/e1000/e1000_ethtool.o
CC arch/x86/kernel/step.o
AR drivers/thermal/intel/built-in.a
AR drivers/thermal/tegra/built-in.a
AR drivers/thermal/mediatek/built-in.a
CC drivers/thermal/thermal_core.o
CC drivers/gpu/drm/i915/vlv_sideband.o
CC drivers/usb/core/port.o
CC drivers/input/ff-core.o
AR drivers/net/ethernet/litex/built-in.a
CC drivers/ata/pata_sch.o
CC fs/ext4/xattr_trusted.o
CC net/core/fib_rules.o
CC drivers/net/ethernet/intel/e1000/e1000_param.o
CC lib/check_signature.o
CC drivers/gpu/drm/drm_drv.o
CC drivers/usb/storage/sierra_ms.o
CC drivers/acpi/acpica/rsirq.o
AR drivers/watchdog/built-in.a
CC drivers/scsi/sr.o
CC drivers/gpu/drm/i915/vlv_suspend.o
CC fs/ext4/xattr_user.o
CC fs/ext4/fast_commit.o
CC lib/interval_tree.o
AR drivers/net/ethernet/marvell/octeon_ep/built-in.a
AR drivers/net/ethernet/marvell/octeon_ep_vf/built-in.a
AR drivers/net/ethernet/marvell/octeontx2/built-in.a
AR drivers/net/ethernet/marvell/prestera/built-in.a
AR drivers/net/ethernet/mellanox/built-in.a
CC drivers/net/ethernet/marvell/sky2.o
CC [M] drivers/gpu/drm/xe/xe_gsc.o
CC drivers/md/md.o
CC drivers/net/ethernet/intel/e1000e/ich8lan.o
CC [M] drivers/gpu/drm/xe/xe_gsc_debugfs.o
CC drivers/usb/host/xhci.o
CC drivers/usb/storage/option_ms.o
CC drivers/scsi/sr_ioctl.o
CC arch/x86/kernel/i8237.o
CC arch/x86/kernel/stacktrace.o
CC drivers/acpi/acpica/rslist.o
CC lib/assoc_array.o
CC drivers/acpi/acpica/rsmemory.o
CC drivers/input/touchscreen.o
CC net/ipv4/fib_semantics.o
CC drivers/gpu/drm/i915/soc/intel_dram.o
CC drivers/ata/pata_mpiix.o
CC kernel/sys_ni.o
CC drivers/usb/host/xhci-mem.o
CC fs/nfs/callback.o
CC drivers/usb/core/hcd-pci.o
CC net/ipv6/ah6.o
CC drivers/acpi/acpica/rsmisc.o
CC drivers/ata/ata_generic.o
CC drivers/thermal/thermal_sysfs.o
CC drivers/gpu/drm/i915/soc/intel_gmch.o
CC drivers/cpufreq/cpufreq.o
CC drivers/cpuidle/governors/menu.o
CC arch/x86/kernel/reboot.o
CC drivers/usb/storage/usual-tables.o
CC kernel/nsproxy.o
CC arch/x86/kernel/msr.o
CC drivers/gpu/drm/drm_dumb_buffers.o
CC drivers/usb/host/xhci-ext-caps.o
CC fs/nfs/callback_xdr.o
CC drivers/usb/core/usb-acpi.o
CC drivers/input/ff-memless.o
CC drivers/net/ethernet/intel/e1000e/80003es2lan.o
CC drivers/cpuidle/cpuidle.o
CC lib/bitrev.o
CC drivers/cpuidle/governors/haltpoll.o
CC mm/migrate.o
CC [M] drivers/gpu/drm/xe/xe_gsc_proxy.o
CC drivers/scsi/sr_vendor.o
CC fs/ext4/orphan.o
CC drivers/acpi/acpica/rsserial.o
CC drivers/acpi/acpica/rsutils.o
CC drivers/acpi/acpica/rsxface.o
CC arch/x86/kernel/cpuid.o
CC net/core/net-traces.o
CC net/mac80211/trace.o
CC drivers/cpufreq/freq_table.o
CC drivers/usb/host/xhci-ring.o
CC lib/crc-ccitt.o
AR drivers/net/ethernet/meta/built-in.a
CC fs/nfs/callback_proc.o
CC kernel/notifier.o
CC drivers/thermal/thermal_trip.o
AR drivers/usb/storage/built-in.a
CC drivers/thermal/thermal_helpers.o
CC fs/nfs/nfs4namespace.o
AR drivers/ata/built-in.a
CC drivers/cpuidle/driver.o
AR drivers/net/ethernet/micrel/built-in.a
CC drivers/scsi/sg.o
CC drivers/gpu/drm/drm_edid.o
CC arch/x86/kernel/early-quirks.o
CC mm/page_counter.o
CC net/core/selftests.o
AR drivers/net/ethernet/intel/e1000/built-in.a
CC drivers/gpu/drm/i915/soc/intel_pch.o
CC net/ipv4/fib_trie.o
CC drivers/gpu/drm/i915/soc/intel_rom.o
CC drivers/acpi/acpica/tbdata.o
CC kernel/ksysfs.o
AR drivers/usb/core/built-in.a
CC fs/nfs/nfs4getroot.o
CC lib/crc16.o
CC drivers/usb/host/xhci-hub.o
CC drivers/cpufreq/cpufreq_performance.o
CC drivers/input/sparse-keymap.o
CC net/core/ptp_classifier.o
CC drivers/md/md-bitmap.o
AR drivers/net/ethernet/microchip/built-in.a
CC drivers/cpufreq/cpufreq_userspace.o
CC fs/ext4/acl.o
CC kernel/cred.o
CC fs/nfs/nfs4client.o
CC drivers/thermal/thermal_thresholds.o
CC fs/file.o
HOSTCC lib/gen_crc32table
CC net/ipv6/esp6.o
CC mm/hugetlb_cgroup.o
CC lib/xxhash.o
AR drivers/cpuidle/governors/built-in.a
CC lib/genalloc.o
CC fs/ext4/xattr_security.o
CC [M] drivers/gpu/drm/xe/xe_gsc_submit.o
CC drivers/acpi/acpica/tbfadt.o
CC drivers/gpu/drm/i915/i915_memcpy.o
CC drivers/input/vivaldi-fmap.o
CC drivers/cpuidle/governor.o
CC drivers/cpuidle/sysfs.o
CC arch/x86/kernel/smp.o
CC arch/x86/kernel/smpboot.o
CC net/core/netprio_cgroup.o
CC drivers/acpi/debugfs.o
CC drivers/scsi/scsi_sysfs.o
CC fs/nfs/nfs4session.o
CC drivers/md/md-autodetect.o
CC drivers/thermal/thermal_hwmon.o
CC drivers/input/input-leds.o
CC net/mac80211/mlme.o
CC net/core/netclassid_cgroup.o
CC drivers/net/ethernet/intel/e1000e/mac.o
CC drivers/acpi/acpica/tbfind.o
CC drivers/acpi/acpica/tbinstal.o
CC drivers/acpi/acpica/tbprint.o
CC drivers/acpi/acpi_lpat.o
CC net/core/dst_cache.o
CC drivers/usb/host/xhci-dbg.o
CC fs/nfs/dns_resolve.o
CC drivers/input/evdev.o
CC [M] drivers/gpu/drm/xe/xe_gt.o
CC drivers/gpu/drm/i915/i915_mm.o
CC drivers/cpufreq/cpufreq_ondemand.o
CC drivers/net/ethernet/intel/e1000e/manage.o
CC lib/percpu_counter.o
CC drivers/thermal/gov_step_wise.o
CC kernel/reboot.o
CC net/ipv6/sit.o
CC drivers/cpuidle/poll_state.o
CC drivers/gpu/drm/drm_eld.o
CC mm/early_ioremap.o
AR fs/ext4/built-in.a
CC drivers/gpu/drm/i915/i915_sw_fence.o
CC fs/nfs/nfs4trace.o
CC fs/nfs/nfs4sysctl.o
CC drivers/acpi/acpica/tbutils.o
CC drivers/cpufreq/cpufreq_governor.o
CC drivers/net/ethernet/intel/e1000e/nvm.o
CC lib/audit.o
CC net/ipv4/fib_notifier.o
CC kernel/async.o
CC fs/filesystems.o
CC arch/x86/kernel/tsc_sync.o
AR drivers/net/ethernet/marvell/built-in.a
CC arch/x86/kernel/setup_percpu.o
CC drivers/cpuidle/cpuidle-haltpoll.o
CC net/ipv4/inet_fragment.o
CC drivers/thermal/gov_user_space.o
CC net/core/gro_cells.o
CC drivers/cpufreq/cpufreq_governor_attr_set.o
CC mm/secretmem.o
CC arch/x86/kernel/mpparse.o
CC net/ipv6/addrconf_core.o
CC drivers/acpi/acpica/tbxface.o
CC net/ipv6/exthdrs_core.o
CC arch/x86/kernel/trace_clock.o
AR drivers/mmc/built-in.a
CC drivers/gpu/drm/i915/i915_sw_fence_work.o
CC drivers/usb/host/xhci-trace.o
CC net/ipv6/ip6_checksum.o
CC net/ipv4/ping.o
CC [M] drivers/gpu/drm/xe/xe_gt_ccs_mode.o
CC drivers/gpu/drm/i915/i915_syncmap.o
CC lib/syscall.o
CC kernel/range.o
AR drivers/scsi/built-in.a
AR drivers/cpuidle/built-in.a
CC kernel/smpboot.o
CC net/ipv6/ip6_icmp.o
CC fs/namespace.o
CC drivers/gpu/drm/i915/i915_user_extensions.o
AR drivers/thermal/built-in.a
CC net/mac80211/tdls.o
CC drivers/cpufreq/acpi-cpufreq.o
CC [M] drivers/gpu/drm/xe/xe_gt_clock.o
AR drivers/net/ethernet/mscc/built-in.a
CC net/ipv6/output_core.o
CC drivers/net/ethernet/intel/e1000e/phy.o
CC drivers/usb/host/xhci-debugfs.o
CC lib/errname.o
AR drivers/input/built-in.a
CC drivers/gpu/drm/drm_encoder.o
CC arch/x86/kernel/trace.o
CC [M] drivers/gpu/drm/xe/xe_gt_freq.o
AR drivers/net/ethernet/myricom/built-in.a
CC kernel/ucount.o
CC drivers/net/ethernet/intel/e1000e/param.o
CC net/ipv4/ip_tunnel_core.o
CC drivers/acpi/acpica/tbxfload.o
CC drivers/usb/host/xhci-pci.o
CC mm/hmm.o
AR drivers/net/ethernet/natsemi/built-in.a
CC net/ipv6/protocol.o
CC net/ipv4/gre_offload.o
CC drivers/acpi/acpi_pcc.o
CC arch/x86/kernel/rethook.o
CC drivers/cpufreq/amd-pstate.o
CC drivers/cpufreq/amd-pstate-trace.o
CC mm/memfd.o
CC net/mac80211/ocb.o
CC net/ipv4/metrics.o
CC drivers/gpu/drm/i915/i915_debugfs.o
CC drivers/gpu/drm/i915/i915_debugfs_params.o
CC lib/nlattr.o
CC net/mac80211/airtime.o
CC drivers/gpu/drm/drm_file.o
CC arch/x86/kernel/vmcore_info_32.o
CC kernel/regset.o
CC drivers/acpi/acpica/tbxfroot.o
CC drivers/md/dm.o
CC drivers/acpi/ac.o
CC fs/seq_file.o
AR drivers/net/ethernet/neterion/built-in.a
CC mm/ptdump.o
CC net/ipv6/ip6_offload.o
CC drivers/acpi/acpica/utaddress.o
CC drivers/cpufreq/intel_pstate.o
CC kernel/ksyms_common.o
CC drivers/md/dm-table.o
CC [M] drivers/gpu/drm/xe/xe_gt_idle.o
CC lib/cpu_rmap.o
CC net/mac80211/eht.o
CC fs/xattr.o
AR drivers/net/ethernet/netronome/built-in.a
CC mm/execmem.o
CC net/core/failover.o
CC net/ipv4/netlink.o
CC drivers/net/ethernet/intel/e1000e/ethtool.o
AR drivers/ufs/built-in.a
CC drivers/gpu/drm/i915/i915_pmu.o
CC arch/x86/kernel/machine_kexec_32.o
CC lib/dynamic_queue_limits.o
CC drivers/acpi/button.o
AR drivers/net/ethernet/ni/built-in.a
CC net/ipv4/nexthop.o
CC drivers/acpi/acpica/utalloc.o
CC drivers/net/ethernet/nvidia/forcedeth.o
CC kernel/groups.o
CC drivers/md/dm-target.o
CC kernel/kcmp.o
CC drivers/net/ethernet/intel/e1000e/netdev.o
CC fs/libfs.o
CC drivers/md/dm-linear.o
AR drivers/firmware/arm_ffa/built-in.a
AR drivers/firmware/arm_scmi/built-in.a
CC [M] drivers/gpu/drm/xe/xe_gt_mcr.o
AR drivers/firmware/broadcom/built-in.a
AR drivers/firmware/cirrus/built-in.a
AR drivers/firmware/meson/built-in.a
AR drivers/firmware/microchip/built-in.a
CC drivers/firmware/efi/efi-bgrt.o
CC fs/fs-writeback.o
CC drivers/gpu/drm/drm_fourcc.o
CC drivers/acpi/acpica/utascii.o
CC drivers/firmware/efi/libstub/efi-stub-helper.o
AS arch/x86/kernel/relocate_kernel_32.o
CC drivers/md/dm-stripe.o
CC arch/x86/kernel/crash_dump_32.o
AR drivers/firmware/imx/built-in.a
CC net/mac80211/led.o
CC drivers/firmware/efi/libstub/gop.o
CC drivers/firmware/efi/efi.o
AR mm/built-in.a
CC drivers/md/dm-ioctl.o
CC drivers/acpi/fan_core.o
CC arch/x86/kernel/crash.o
AR drivers/firmware/psci/built-in.a
CC drivers/acpi/acpica/utbuffer.o
CC [M] drivers/gpu/drm/xe/xe_gt_pagefault.o
AR drivers/net/ethernet/oki-semi/built-in.a
CC drivers/md/dm-io.o
CC net/mac80211/pm.o
AR drivers/firmware/qcom/built-in.a
CC lib/glob.o
CC kernel/freezer.o
CC drivers/firmware/efi/libstub/secureboot.o
CC net/ipv6/tcpv6_offload.o
CC drivers/net/ethernet/intel/e1000e/ptp.o
CC net/mac80211/rc80211_minstrel_ht.o
AR drivers/net/ethernet/packetengines/built-in.a
CC arch/x86/kernel/module.o
CC drivers/acpi/fan_attr.o
CC drivers/md/dm-kcopyd.o
CC net/ipv6/exthdrs_offload.o
CC net/ipv6/inet6_hashtables.o
AR net/core/built-in.a
CC drivers/gpu/drm/drm_framebuffer.o
AR drivers/firmware/smccc/built-in.a
CC [M] drivers/gpu/drm/xe/xe_gt_sysfs.o
CC [M] drivers/gpu/drm/xe/xe_gt_throttle.o
CC drivers/acpi/acpica/utcksum.o
AR drivers/usb/host/built-in.a
AR drivers/usb/built-in.a
CC drivers/gpu/drm/drm_gem.o
CC drivers/acpi/fan_hwmon.o
CC drivers/acpi/acpica/utcopy.o
CC drivers/gpu/drm/i915/gt/gen2_engine_cs.o
CC lib/strncpy_from_user.o
CC kernel/profile.o
CC net/ipv4/udp_tunnel_stub.o
AR drivers/net/ethernet/qlogic/built-in.a
CC [M] drivers/gpu/drm/xe/xe_gt_tlb_invalidation.o
CC net/mac80211/wbrf.o
CC drivers/firmware/efi/vars.o
AR drivers/crypto/stm32/built-in.a
AR drivers/crypto/xilinx/built-in.a
AR drivers/crypto/hisilicon/built-in.a
CC drivers/firmware/efi/reboot.o
AR drivers/crypto/starfive/built-in.a
AR drivers/crypto/intel/keembay/built-in.a
CC drivers/firmware/efi/memattr.o
AR drivers/crypto/intel/ixp4xx/built-in.a
AR drivers/crypto/intel/built-in.a
CC drivers/firmware/efi/tpm.o
AR drivers/crypto/built-in.a
CC net/ipv4/ip_tunnel.o
CC lib/strnlen_user.o
AR drivers/net/ethernet/qualcomm/emac/built-in.a
AR drivers/net/ethernet/qualcomm/built-in.a
AR fs/nfs/built-in.a
CC [M] drivers/gpu/drm/xe/xe_gt_topology.o
CC drivers/firmware/efi/libstub/tpm.o
CC kernel/stacktrace.o
CC net/ipv6/mcast_snoop.o
CC drivers/acpi/acpica/utexcep.o
CC arch/x86/kernel/doublefault_32.o
CC drivers/net/ethernet/realtek/8139too.o
AR drivers/net/ethernet/renesas/built-in.a
CC lib/net_utils.o
AR drivers/net/ethernet/rdc/built-in.a
CC lib/sg_pool.o
CC drivers/firmware/efi/memmap.o
CC drivers/firmware/efi/capsule.o
CC drivers/acpi/acpi_video.o
AR drivers/firmware/tegra/built-in.a
CC drivers/firmware/efi/libstub/file.o
CC fs/pnode.o
CC drivers/gpu/drm/drm_ioctl.o
AR drivers/cpufreq/built-in.a
CC drivers/firmware/efi/esrt.o
AR drivers/firmware/xilinx/built-in.a
CC drivers/firmware/efi/libstub/mem.o
CC kernel/dma.o
CC drivers/firmware/dmi_scan.o
CC arch/x86/kernel/early_printk.o
CC drivers/clocksource/acpi_pm.o
CC lib/stackdepot.o
CC drivers/acpi/acpica/utdebug.o
CC drivers/clocksource/i8253.o
CC arch/x86/kernel/hpet.o
CC drivers/firmware/dmi-id.o
CC net/ipv4/sysctl_net_ipv4.o
CC drivers/acpi/video_detect.o
CC fs/splice.o
CC fs/sync.o
CC drivers/net/ethernet/realtek/r8169_main.o
CC [M] drivers/gpu/drm/xe/xe_guc.o
CC drivers/gpu/drm/i915/gt/gen6_engine_cs.o
CC arch/x86/kernel/amd_nb.o
CC fs/utimes.o
CC drivers/md/dm-sysfs.o
CC drivers/hid/usbhid/hid-core.o
CC drivers/acpi/acpica/utdecode.o
CC drivers/hid/hid-core.o
CC drivers/acpi/acpica/utdelete.o
AR drivers/net/ethernet/rocker/built-in.a
CC drivers/mailbox/mailbox.o
CC drivers/net/ethernet/realtek/r8169_firmware.o
AR drivers/platform/x86/amd/built-in.a
CC kernel/smp.o
AR drivers/platform/x86/intel/built-in.a
CC drivers/platform/x86/wmi.o
CC drivers/gpu/drm/drm_lease.o
AR drivers/platform/surface/built-in.a
CC drivers/hid/hid-input.o
CC drivers/hid/hid-quirks.o
CC drivers/firmware/memmap.o
CC fs/d_path.o
CC drivers/platform/x86/wmi-bmof.o
CC drivers/platform/x86/eeepc-laptop.o
CC drivers/firmware/efi/libstub/random.o
CC drivers/acpi/acpica/uterror.o
AR drivers/net/ethernet/samsung/built-in.a
CC drivers/net/ethernet/realtek/r8169_phy_config.o
CC lib/asn1_decoder.o
CC drivers/md/dm-stats.o
CC drivers/hid/usbhid/hiddev.o
AR drivers/clocksource/built-in.a
AR net/ipv6/built-in.a
AR drivers/perf/built-in.a
CC fs/stack.o
CC drivers/gpu/drm/drm_managed.o
CC drivers/gpu/drm/i915/gt/gen6_ppgtt.o
CC drivers/hid/hid-debug.o
GEN lib/oid_registry_data.c
CC drivers/platform/x86/p2sb.o
CC arch/x86/kernel/kvm.o
CC drivers/hid/usbhid/hid-pidff.o
CC fs/fs_struct.o
CC drivers/acpi/acpica/uteval.o
CC [M] drivers/gpu/drm/xe/xe_guc_ads.o
CC drivers/mailbox/pcc.o
CC drivers/firmware/efi/runtime-wrappers.o
CC drivers/md/dm-rq.o
CC drivers/gpu/drm/i915/gt/gen7_renderclear.o
AR drivers/hwtracing/intel_th/built-in.a
CC drivers/hid/hidraw.o
CC drivers/acpi/acpica/utglobal.o
CC lib/ucs2_string.o
CC lib/sbitmap.o
CC fs/statfs.o
CC fs/fs_pin.o
CC drivers/firmware/efi/libstub/randomalloc.o
CC drivers/acpi/processor_driver.o
CC lib/group_cpus.o
AR drivers/net/ethernet/nvidia/built-in.a
CC lib/fw_table.o
CC drivers/gpu/drm/i915/gt/gen8_engine_cs.o
CC drivers/gpu/drm/i915/gt/gen8_ppgtt.o
CC drivers/firmware/efi/capsule-loader.o
CC [M] drivers/gpu/drm/xe/xe_guc_buf.o
CC net/ipv4/proc.o
AR drivers/android/built-in.a
CC arch/x86/kernel/kvmclock.o
CC drivers/md/dm-io-rewind.o
CC drivers/firmware/efi/libstub/pci.o
CC fs/nsfs.o
CC drivers/acpi/acpica/uthex.o
AR drivers/nvmem/layouts/built-in.a
CC drivers/nvmem/core.o
CC [M] drivers/gpu/drm/xe/xe_guc_capture.o
CC drivers/firmware/efi/earlycon.o
AR lib/lib.a
CC [M] drivers/gpu/drm/xe/xe_guc_ct.o
AR drivers/net/ethernet/seeq/built-in.a
CC [M] drivers/gpu/drm/xe/xe_guc_db_mgr.o
AR drivers/net/ethernet/silan/built-in.a
CC drivers/hid/hid-generic.o
CC drivers/md/dm-builtin.o
AR drivers/mailbox/built-in.a
AR drivers/net/ethernet/sis/built-in.a
CC arch/x86/kernel/paravirt.o
CC drivers/gpu/drm/drm_mm.o
CC kernel/uid16.o
AR drivers/platform/x86/built-in.a
AR drivers/platform/built-in.a
GEN lib/crc32table.h
CC fs/fs_types.o
CC lib/oid_registry.o
CC drivers/md/dm-raid1.o
CC drivers/gpu/drm/i915/gt/intel_breadcrumbs.o
CC lib/crc32.o
CC drivers/hid/hid-a4tech.o
CC net/ipv4/fib_rules.o
CC arch/x86/kernel/pvclock.o
CC drivers/acpi/acpica/utids.o
CC kernel/kallsyms.o
CC drivers/firmware/efi/libstub/skip_spaces.o
CC arch/x86/kernel/pcspeaker.o
CC drivers/gpu/drm/i915/gt/intel_context.o
CC drivers/firmware/efi/libstub/lib-cmdline.o
CC drivers/md/dm-log.o
CC drivers/gpu/drm/drm_mode_config.o
CC drivers/gpu/drm/i915/gt/intel_context_sseu.o
CC drivers/acpi/processor_thermal.o
CC net/ipv4/ipmr.o
CC kernel/acct.o
CC fs/fs_context.o
CC drivers/acpi/processor_idle.o
CC drivers/hid/hid-apple.o
CC arch/x86/kernel/check.o
CC drivers/acpi/acpica/utinit.o
CC drivers/acpi/processor_throttling.o
CC drivers/firmware/efi/libstub/lib-ctype.o
AR drivers/hid/usbhid/built-in.a
CC drivers/hid/hid-belkin.o
CC kernel/vmcore_info.o
CC drivers/firmware/efi/libstub/alignedmem.o
CC kernel/elfcorehdr.o
AR drivers/net/ethernet/sfc/built-in.a
CC drivers/md/dm-region-hash.o
AR drivers/net/ethernet/smsc/built-in.a
CC [M] drivers/gpu/drm/xe/xe_guc_hwconfig.o
CC fs/fs_parser.o
CC net/ipv4/ipmr_base.o
CC drivers/gpu/drm/i915/gt/intel_engine_cs.o
CC drivers/acpi/processor_perflib.o
AR lib/built-in.a
CC drivers/hid/hid-cherry.o
CC kernel/crash_reserve.o
CC fs/fsopen.o
CC drivers/md/dm-zero.o
AR drivers/firmware/efi/built-in.a
CC net/ipv4/syncookies.o
CC drivers/firmware/efi/libstub/relocate.o
CC drivers/acpi/acpica/utlock.o
CC arch/x86/kernel/uprobes.o
CC drivers/gpu/drm/i915/gt/intel_engine_heartbeat.o
CC drivers/firmware/efi/libstub/printk.o
CC drivers/acpi/acpica/utmath.o
AR drivers/net/ethernet/socionext/built-in.a
CC drivers/gpu/drm/drm_mode_object.o
CC fs/init.o
CC drivers/acpi/container.o
CC net/ipv4/tunnel4.o
CC [M] drivers/gpu/drm/xe/xe_guc_id_mgr.o
AR drivers/nvmem/built-in.a
CC drivers/firmware/efi/libstub/vsprintf.o
CC drivers/gpu/drm/i915/gt/intel_engine_pm.o
AR net/mac80211/built-in.a
CC kernel/kexec_core.o
CC drivers/hid/hid-chicony.o
CC drivers/firmware/efi/libstub/x86-stub.o
AR drivers/net/ethernet/intel/e1000e/built-in.a
AR drivers/net/ethernet/stmicro/built-in.a
AR drivers/net/ethernet/intel/built-in.a
AR drivers/net/ethernet/sun/built-in.a
CC drivers/gpu/drm/i915/gt/intel_engine_user.o
CC arch/x86/kernel/perf_regs.o
CC drivers/acpi/thermal_lib.o
AR drivers/net/ethernet/tehuti/built-in.a
CC [M] drivers/gpu/drm/xe/xe_guc_klv_helpers.o
AR drivers/net/ethernet/ti/built-in.a
CC fs/kernel_read_file.o
CC drivers/acpi/acpica/utmisc.o
CC kernel/crash_core.o
CC drivers/acpi/acpica/utmutex.o
CC drivers/gpu/drm/i915/gt/intel_execlists_submission.o
CC fs/mnt_idmapping.o
CC arch/x86/kernel/tracepoint.o
CC kernel/kexec.o
CC net/ipv4/ipconfig.o
CC drivers/acpi/acpica/utnonansi.o
CC net/ipv4/netfilter.o
AR drivers/net/ethernet/vertexcom/built-in.a
CC kernel/utsname.o
CC drivers/gpu/drm/i915/gt/intel_ggtt.o
CC drivers/firmware/efi/libstub/smbios.o
CC kernel/pid_namespace.o
CC arch/x86/kernel/itmt.o
CC kernel/stop_machine.o
CC drivers/acpi/acpica/utobject.o
CC arch/x86/kernel/umip.o
CC drivers/gpu/drm/drm_modes.o
AR drivers/net/ethernet/via/built-in.a
CC [M] drivers/gpu/drm/xe/xe_guc_log.o
AR drivers/md/built-in.a
CC net/ipv4/tcp_cubic.o
CC fs/remap_range.o
CC [M] drivers/gpu/drm/xe/xe_guc_pc.o
CC arch/x86/kernel/unwind_frame.o
CC drivers/acpi/acpica/utosi.o
CC drivers/hid/hid-cypress.o
CC net/ipv4/tcp_sigpool.o
CC drivers/acpi/thermal.o
AR drivers/net/ethernet/wangxun/built-in.a
AR drivers/net/ethernet/wiznet/built-in.a
CC drivers/gpu/drm/drm_modeset_lock.o
CC drivers/hid/hid-ezkey.o
CC fs/pidfs.o
AR drivers/net/ethernet/realtek/built-in.a
AR drivers/net/ethernet/xilinx/built-in.a
STUBCPY drivers/firmware/efi/libstub/alignedmem.stub.o
STUBCPY drivers/firmware/efi/libstub/efi-stub-helper.stub.o
AR drivers/net/ethernet/xircom/built-in.a
CC [M] drivers/gpu/drm/xe/xe_guc_submit.o
CC drivers/gpu/drm/i915/gt/intel_ggtt_fencing.o
CC drivers/acpi/nhlt.o
AR drivers/net/ethernet/synopsys/built-in.a
CC fs/buffer.o
CC drivers/gpu/drm/drm_plane.o
AR drivers/net/ethernet/pensando/built-in.a
AR drivers/net/ethernet/built-in.a
CC drivers/gpu/drm/drm_prime.o
STUBCPY drivers/firmware/efi/libstub/file.stub.o
AR drivers/net/built-in.a
CC drivers/gpu/drm/i915/gt/intel_gt.o
CC drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.o
CC drivers/acpi/acpi_memhotplug.o
CC drivers/hid/hid-gyration.o
CC drivers/acpi/acpica/utownerid.o
CC drivers/acpi/acpica/utpredef.o
CC drivers/gpu/drm/drm_print.o
STUBCPY drivers/firmware/efi/libstub/gop.stub.o
STUBCPY drivers/firmware/efi/libstub/lib-cmdline.stub.o
CC net/ipv4/cipso_ipv4.o
CC drivers/acpi/ioapic.o
CC drivers/gpu/drm/drm_property.o
CC drivers/acpi/acpica/utresdecode.o
STUBCPY drivers/firmware/efi/libstub/lib-ctype.stub.o
STUBCPY drivers/firmware/efi/libstub/mem.stub.o
CC kernel/audit.o
STUBCPY drivers/firmware/efi/libstub/pci.stub.o
STUBCPY drivers/firmware/efi/libstub/printk.stub.o
STUBCPY drivers/firmware/efi/libstub/random.stub.o
STUBCPY drivers/firmware/efi/libstub/randomalloc.stub.o
STUBCPY drivers/firmware/efi/libstub/relocate.stub.o
STUBCPY drivers/firmware/efi/libstub/secureboot.stub.o
STUBCPY drivers/firmware/efi/libstub/skip_spaces.stub.o
CC drivers/acpi/acpica/utresrc.o
STUBCPY drivers/firmware/efi/libstub/smbios.stub.o
STUBCPY drivers/firmware/efi/libstub/tpm.stub.o
CC drivers/hid/hid-ite.o
CC fs/mpage.o
CC drivers/acpi/acpica/utstate.o
STUBCPY drivers/firmware/efi/libstub/vsprintf.stub.o
CC drivers/acpi/battery.o
STUBCPY drivers/firmware/efi/libstub/x86-stub.stub.o
AR drivers/firmware/efi/libstub/lib.a
CC drivers/hid/hid-kensington.o
CC fs/proc_namespace.o
AR drivers/firmware/built-in.a
CC drivers/gpu/drm/drm_rect.o
CC net/ipv4/xfrm4_policy.o
CC [M] drivers/gpu/drm/xe/xe_heci_gsc.o
CC kernel/auditfilter.o
AR arch/x86/kernel/built-in.a
CC drivers/acpi/acpica/utstring.o
CC drivers/acpi/bgrt.o
CC drivers/gpu/drm/i915/gt/intel_gt_ccs_mode.o
AR arch/x86/built-in.a
CC kernel/auditsc.o
CC net/ipv4/xfrm4_state.o
CC drivers/hid/hid-lg.o
CC [M] drivers/gpu/drm/xe/xe_hw_engine.o
CC drivers/acpi/acpica/utstrsuppt.o
CC net/ipv4/xfrm4_input.o
CC drivers/acpi/spcr.o
CC kernel/audit_watch.o
CC fs/direct-io.o
CC drivers/gpu/drm/i915/gt/intel_gt_clock_utils.o
CC drivers/acpi/acpica/utstrtoul64.o
CC [M] drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.o
CC net/ipv4/xfrm4_output.o
CC drivers/gpu/drm/drm_syncobj.o
CC drivers/hid/hid-lgff.o
CC drivers/acpi/acpica/utxface.o
CC [M] drivers/gpu/drm/xe/xe_hw_engine_group.o
CC drivers/gpu/drm/i915/gt/intel_gt_debugfs.o
CC net/ipv4/xfrm4_protocol.o
CC drivers/hid/hid-lg4ff.o
CC drivers/acpi/acpica/utxfinit.o
CC kernel/audit_fsnotify.o
CC fs/eventpoll.o
CC drivers/gpu/drm/drm_sysfs.o
CC drivers/hid/hid-lg-g15.o
CC drivers/hid/hid-microsoft.o
CC drivers/acpi/acpica/utxferror.o
CC drivers/acpi/acpica/utxfmutex.o
CC [M] drivers/gpu/drm/xe/xe_hw_fence.o
CC drivers/hid/hid-monterey.o
CC fs/anon_inodes.o
CC drivers/gpu/drm/drm_trace_points.o
CC [M] drivers/gpu/drm/xe/xe_huc.o
CC drivers/hid/hid-ntrig.o
CC drivers/gpu/drm/i915/gt/intel_gt_engines_debugfs.o
CC drivers/hid/hid-pl.o
CC fs/signalfd.o
CC kernel/audit_tree.o
CC [M] drivers/gpu/drm/xe/xe_irq.o
CC drivers/gpu/drm/drm_vblank.o
CC drivers/hid/hid-petalynx.o
CC drivers/gpu/drm/i915/gt/intel_gt_irq.o
CC fs/timerfd.o
CC kernel/kprobes.o
CC drivers/gpu/drm/i915/gt/intel_gt_mcr.o
CC [M] drivers/gpu/drm/xe/xe_lrc.o
AR drivers/acpi/acpica/built-in.a
AR drivers/acpi/built-in.a
CC drivers/hid/hid-redragon.o
CC drivers/gpu/drm/drm_vblank_work.o
CC drivers/hid/hid-samsung.o
CC fs/eventfd.o
CC kernel/seccomp.o
CC [M] drivers/gpu/drm/xe/xe_migrate.o
CC kernel/relay.o
CC drivers/gpu/drm/i915/gt/intel_gt_pm.o
CC drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.o
CC [M] drivers/gpu/drm/xe/xe_mmio.o
CC drivers/gpu/drm/drm_vma_manager.o
CC drivers/hid/hid-sony.o
CC fs/aio.o
CC kernel/utsname_sysctl.o
CC drivers/gpu/drm/i915/gt/intel_gt_pm_irq.o
CC drivers/hid/hid-sunplus.o
CC fs/locks.o
CC drivers/gpu/drm/drm_writeback.o
CC [M] drivers/gpu/drm/xe/xe_mocs.o
CC kernel/delayacct.o
CC fs/binfmt_misc.o
CC drivers/gpu/drm/i915/gt/intel_gt_requests.o
CC [M] drivers/gpu/drm/xe/xe_module.o
CC kernel/taskstats.o
CC drivers/gpu/drm/drm_panel.o
CC kernel/tsacct.o
CC drivers/gpu/drm/i915/gt/intel_gt_sysfs.o
CC drivers/gpu/drm/drm_pci.o
CC fs/binfmt_script.o
AR net/ipv4/built-in.a
CC kernel/tracepoint.o
CC drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.o
AR net/built-in.a
CC drivers/gpu/drm/drm_debugfs.o
CC drivers/hid/hid-topseed.o
CC kernel/irq_work.o
CC drivers/gpu/drm/drm_debugfs_crc.o
CC drivers/gpu/drm/i915/gt/intel_gtt.o
CC kernel/static_call.o
CC [M] drivers/gpu/drm/xe/xe_oa.o
CC fs/binfmt_elf.o
CC drivers/gpu/drm/drm_panel_orientation_quirks.o
CC drivers/gpu/drm/i915/gt/intel_llc.o
CC kernel/padata.o
CC drivers/gpu/drm/drm_buddy.o
CC drivers/gpu/drm/drm_gem_shmem_helper.o
CC kernel/jump_label.o
CC drivers/gpu/drm/i915/gt/intel_lrc.o
CC fs/mbcache.o
CC [M] drivers/gpu/drm/xe/xe_observation.o
CC kernel/context_tracking.o
CC drivers/gpu/drm/drm_atomic_helper.o
CC kernel/iomem.o
CC fs/posix_acl.o
CC drivers/gpu/drm/i915/gt/intel_migrate.o
CC [M] drivers/gpu/drm/xe/xe_pat.o
CC drivers/gpu/drm/i915/gt/intel_mocs.o
CC fs/coredump.o
CC drivers/gpu/drm/drm_atomic_state_helper.o
CC kernel/rseq.o
CC fs/drop_caches.o
CC drivers/gpu/drm/i915/gt/intel_ppgtt.o
CC [M] drivers/gpu/drm/xe/xe_pci.o
CC fs/sysctls.o
CC drivers/gpu/drm/drm_crtc_helper.o
CC drivers/gpu/drm/i915/gt/intel_rc6.o
CC drivers/gpu/drm/drm_damage_helper.o
CC drivers/gpu/drm/i915/gt/intel_region_lmem.o
CC [M] drivers/gpu/drm/xe/xe_pcode.o
CC drivers/gpu/drm/drm_flip_work.o
CC fs/fhandle.o
CC [M] drivers/gpu/drm/xe/xe_pm.o
CC drivers/gpu/drm/i915/gt/intel_renderstate.o
CC drivers/gpu/drm/drm_format_helper.o
CC drivers/gpu/drm/i915/gt/intel_reset.o
CC [M] drivers/gpu/drm/xe/xe_preempt_fence.o
CC drivers/gpu/drm/i915/gt/intel_ring.o
CC drivers/gpu/drm/drm_gem_atomic_helper.o
CC [M] drivers/gpu/drm/xe/xe_pt.o
AR drivers/hid/built-in.a
CC drivers/gpu/drm/i915/gt/intel_ring_submission.o
CC drivers/gpu/drm/drm_gem_framebuffer_helper.o
CC [M] drivers/gpu/drm/xe/xe_pt_walk.o
CC [M] drivers/gpu/drm/xe/xe_query.o
CC drivers/gpu/drm/drm_kms_helper_common.o
CC drivers/gpu/drm/i915/gt/intel_rps.o
CC drivers/gpu/drm/i915/gt/intel_sa_media.o
CC drivers/gpu/drm/drm_modeset_helper.o
CC drivers/gpu/drm/i915/gt/intel_sseu.o
CC [M] drivers/gpu/drm/xe/xe_range_fence.o
CC drivers/gpu/drm/drm_plane_helper.o
CC drivers/gpu/drm/i915/gt/intel_sseu_debugfs.o
CC drivers/gpu/drm/drm_probe_helper.o
CC drivers/gpu/drm/i915/gt/intel_timeline.o
CC [M] drivers/gpu/drm/xe/xe_reg_sr.o
CC drivers/gpu/drm/drm_self_refresh_helper.o
CC drivers/gpu/drm/i915/gt/intel_tlb.o
CC drivers/gpu/drm/drm_simple_kms_helper.o
CC [M] drivers/gpu/drm/xe/xe_reg_whitelist.o
CC drivers/gpu/drm/i915/gt/intel_wopcm.o
CC [M] drivers/gpu/drm/xe/xe_rtp.o
CC drivers/gpu/drm/bridge/panel.o
CC drivers/gpu/drm/i915/gt/intel_workarounds.o
CC [M] drivers/gpu/drm/xe/xe_ring_ops.o
CC [M] drivers/gpu/drm/xe/xe_sa.o
AR kernel/built-in.a
CC drivers/gpu/drm/drm_mipi_dsi.o
CC drivers/gpu/drm/i915/gt/shmem_utils.o
CC [M] drivers/gpu/drm/xe/xe_sched_job.o
CC [M] drivers/gpu/drm/xe/xe_step.o
CC [M] drivers/gpu/drm/drm_exec.o
CC [M] drivers/gpu/drm/xe/xe_sync.o
CC [M] drivers/gpu/drm/xe/xe_tile.o
CC [M] drivers/gpu/drm/xe/xe_tile_sysfs.o
CC [M] drivers/gpu/drm/drm_gpuvm.o
CC drivers/gpu/drm/i915/gt/sysfs_engines.o
CC [M] drivers/gpu/drm/xe/xe_trace.o
CC [M] drivers/gpu/drm/xe/xe_trace_bo.o
CC drivers/gpu/drm/i915/gt/intel_ggtt_gmch.o
CC [M] drivers/gpu/drm/xe/xe_trace_guc.o
CC [M] drivers/gpu/drm/xe/xe_trace_lrc.o
CC [M] drivers/gpu/drm/drm_suballoc.o
CC [M] drivers/gpu/drm/xe/xe_ttm_sys_mgr.o
CC drivers/gpu/drm/i915/gt/gen6_renderstate.o
CC [M] drivers/gpu/drm/xe/xe_ttm_stolen_mgr.o
CC [M] drivers/gpu/drm/xe/xe_ttm_vram_mgr.o
CC drivers/gpu/drm/i915/gt/gen7_renderstate.o
CC drivers/gpu/drm/i915/gt/gen8_renderstate.o
CC [M] drivers/gpu/drm/xe/xe_tuning.o
CC drivers/gpu/drm/i915/gt/gen9_renderstate.o
CC [M] drivers/gpu/drm/drm_gem_ttm_helper.o
CC drivers/gpu/drm/i915/gem/i915_gem_busy.o
AR fs/built-in.a
CC [M] drivers/gpu/drm/xe/xe_uc.o
CC drivers/gpu/drm/i915/gem/i915_gem_clflush.o
CC [M] drivers/gpu/drm/xe/xe_uc_fw.o
CC [M] drivers/gpu/drm/xe/xe_vm.o
CC [M] drivers/gpu/drm/xe/xe_vram.o
CC drivers/gpu/drm/i915/gem/i915_gem_context.o
CC drivers/gpu/drm/i915/gem/i915_gem_create.o
CC [M] drivers/gpu/drm/xe/xe_vram_freq.o
CC drivers/gpu/drm/i915/gem/i915_gem_dmabuf.o
CC [M] drivers/gpu/drm/xe/xe_vsec.o
CC [M] drivers/gpu/drm/xe/xe_wait_user_fence.o
CC drivers/gpu/drm/i915/gem/i915_gem_domain.o
CC [M] drivers/gpu/drm/xe/xe_wa.o
CC drivers/gpu/drm/i915/gem/i915_gem_execbuffer.o
CC [M] drivers/gpu/drm/xe/xe_wopcm.o
CC [M] drivers/gpu/drm/xe/xe_hmm.o
CC drivers/gpu/drm/i915/gem/i915_gem_internal.o
CC drivers/gpu/drm/i915/gem/i915_gem_lmem.o
CC [M] drivers/gpu/drm/xe/xe_hwmon.o
CC [M] drivers/gpu/drm/xe/xe_gt_sriov_vf.o
CC drivers/gpu/drm/i915/gem/i915_gem_mman.o
CC drivers/gpu/drm/i915/gem/i915_gem_object.o
CC [M] drivers/gpu/drm/xe/xe_guc_relay.o
LD [M] drivers/gpu/drm/drm_suballoc_helper.o
CC drivers/gpu/drm/i915/gem/i915_gem_pages.o
CC [M] drivers/gpu/drm/xe/xe_memirq.o
CC drivers/gpu/drm/i915/gem/i915_gem_phys.o
CC [M] drivers/gpu/drm/xe/xe_sriov.o
CC drivers/gpu/drm/i915/gem/i915_gem_pm.o
CC [M] drivers/gpu/drm/xe/xe_sriov_vf.o
CC drivers/gpu/drm/i915/gem/i915_gem_region.o
CC drivers/gpu/drm/i915/gem/i915_gem_shmem.o
CC [M] drivers/gpu/drm/xe/display/ext/i915_irq.o
CC [M] drivers/gpu/drm/xe/display/ext/i915_utils.o
CC drivers/gpu/drm/i915/gem/i915_gem_shrinker.o
CC [M] drivers/gpu/drm/xe/display/intel_bo.o
CC drivers/gpu/drm/i915/gem/i915_gem_stolen.o
CC drivers/gpu/drm/i915/gem/i915_gem_throttle.o
CC [M] drivers/gpu/drm/xe/display/intel_fb_bo.o
LD [M] drivers/gpu/drm/drm_ttm_helper.o
CC [M] drivers/gpu/drm/xe/display/intel_fbdev_fb.o
CC drivers/gpu/drm/i915/gem/i915_gem_tiling.o
CC [M] drivers/gpu/drm/xe/display/xe_display.o
CC [M] drivers/gpu/drm/xe/display/xe_display_misc.o
CC drivers/gpu/drm/i915/gem/i915_gem_ttm.o
CC [M] drivers/gpu/drm/xe/display/xe_display_rps.o
CC [M] drivers/gpu/drm/xe/display/xe_display_wa.o
CC drivers/gpu/drm/i915/gem/i915_gem_ttm_move.o
CC [M] drivers/gpu/drm/xe/display/xe_dsb_buffer.o
CC [M] drivers/gpu/drm/xe/display/xe_fb_pin.o
CC drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.o
CC drivers/gpu/drm/i915/gem/i915_gem_userptr.o
CC [M] drivers/gpu/drm/xe/display/xe_hdcp_gsc.o
CC [M] drivers/gpu/drm/xe/display/xe_plane_initial.o
CC drivers/gpu/drm/i915/gem/i915_gem_wait.o
CC [M] drivers/gpu/drm/xe/display/xe_tdf.o
CC [M] drivers/gpu/drm/xe/i915-soc/intel_dram.o
CC drivers/gpu/drm/i915/gem/i915_gemfs.o
CC [M] drivers/gpu/drm/xe/i915-soc/intel_pch.o
CC drivers/gpu/drm/i915/i915_active.o
CC drivers/gpu/drm/i915/i915_cmd_parser.o
CC drivers/gpu/drm/i915/i915_deps.o
CC drivers/gpu/drm/i915/i915_gem.o
CC [M] drivers/gpu/drm/xe/i915-soc/intel_rom.o
CC [M] drivers/gpu/drm/xe/i915-display/icl_dsi.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_alpm.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_atomic.o
CC drivers/gpu/drm/i915/i915_gem_evict.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_atomic_plane.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_audio.o
CC drivers/gpu/drm/i915/i915_gem_gtt.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_backlight.o
CC drivers/gpu/drm/i915/i915_gem_ww.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_bios.o
CC drivers/gpu/drm/i915/i915_query.o
CC drivers/gpu/drm/i915/i915_request.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_bw.o
CC drivers/gpu/drm/i915/i915_scheduler.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_cdclk.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_color.o
CC drivers/gpu/drm/i915/i915_trace_points.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_combo_phy.o
CC drivers/gpu/drm/i915/i915_ttm_buddy_manager.o
CC drivers/gpu/drm/i915/i915_vma.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_connector.o
CC drivers/gpu/drm/i915/i915_vma_resource.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_fw.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_crtc.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_crtc_state_dump.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_cursor.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_proxy.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_cx0_phy.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_debugfs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_ddi.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_heci_cmd_submit.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_ddi_buf_trans.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_ads.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_capture.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_conversion.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_ct.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_fw.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_device.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_hwconfig.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_driver.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_irq.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_params.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_power.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_log.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_log_debugfs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_power_map.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_rc.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_submission.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_power_well.o
CC drivers/gpu/drm/i915/gt/uc/intel_huc.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_trace.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_wa.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dkl_phy.o
CC drivers/gpu/drm/i915/gt/uc/intel_huc_debugfs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dmc.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp_aux.o
CC drivers/gpu/drm/i915/gt/uc/intel_huc_fw.o
CC drivers/gpu/drm/i915/gt/uc/intel_uc.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp_aux_backlight.o
CC drivers/gpu/drm/i915/gt/uc/intel_uc_debugfs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp_hdcp.o
CC drivers/gpu/drm/i915/gt/uc/intel_uc_fw.o
CC drivers/gpu/drm/i915/gt/intel_gsc.o
CC drivers/gpu/drm/i915/i915_hwmon.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp_link_training.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp_mst.o
CC drivers/gpu/drm/i915/display/hsw_ips.o
CC drivers/gpu/drm/i915/display/i9xx_plane.o
CC drivers/gpu/drm/i915/display/i9xx_display_sr.o
CC drivers/gpu/drm/i915/display/i9xx_wm.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp_test.o
CC drivers/gpu/drm/i915/display/intel_alpm.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dpll.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dpll_mgr.o
CC drivers/gpu/drm/i915/display/intel_atomic.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dpt_common.o
CC drivers/gpu/drm/i915/display/intel_atomic_plane.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_drrs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dsb.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dsi.o
CC drivers/gpu/drm/i915/display/intel_audio.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dsi_dcs_backlight.o
CC drivers/gpu/drm/i915/display/intel_bios.o
CC drivers/gpu/drm/i915/display/intel_bo.o
CC drivers/gpu/drm/i915/display/intel_bw.o
CC drivers/gpu/drm/i915/display/intel_cdclk.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dsi_vbt.o
CC drivers/gpu/drm/i915/display/intel_color.o
CC drivers/gpu/drm/i915/display/intel_combo_phy.o
CC drivers/gpu/drm/i915/display/intel_connector.o
CC drivers/gpu/drm/i915/display/intel_crtc.o
CC drivers/gpu/drm/i915/display/intel_crtc_state_dump.o
CC drivers/gpu/drm/i915/display/intel_cursor.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_encoder.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_fb.o
CC drivers/gpu/drm/i915/display/intel_display.o
CC drivers/gpu/drm/i915/display/intel_display_conversion.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_fbc.o
CC drivers/gpu/drm/i915/display/intel_display_driver.o
CC drivers/gpu/drm/i915/display/intel_display_irq.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_fdi.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_fifo_underrun.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_frontbuffer.o
CC drivers/gpu/drm/i915/display/intel_display_params.o
CC drivers/gpu/drm/i915/display/intel_display_power.o
CC drivers/gpu/drm/i915/display/intel_display_power_map.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_global_state.o
CC drivers/gpu/drm/i915/display/intel_display_power_well.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_gmbus.o
CC drivers/gpu/drm/i915/display/intel_display_reset.o
CC drivers/gpu/drm/i915/display/intel_display_rps.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_hdcp.o
CC drivers/gpu/drm/i915/display/intel_display_snapshot.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_hdcp_gsc_message.o
CC drivers/gpu/drm/i915/display/intel_display_wa.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_hdmi.o
CC drivers/gpu/drm/i915/display/intel_dmc.o
CC drivers/gpu/drm/i915/display/intel_dmc_wl.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_hotplug.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_hotplug_irq.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_hti.o
CC drivers/gpu/drm/i915/display/intel_dpio_phy.o
CC drivers/gpu/drm/i915/display/intel_dpll.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_link_bw.o
CC drivers/gpu/drm/i915/display/intel_dpll_mgr.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_lspcon.o
CC drivers/gpu/drm/i915/display/intel_dpt.o
CC drivers/gpu/drm/i915/display/intel_dpt_common.o
CC drivers/gpu/drm/i915/display/intel_drrs.o
CC drivers/gpu/drm/i915/display/intel_dsb.o
CC drivers/gpu/drm/i915/display/intel_dsb_buffer.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_modeset_lock.o
CC drivers/gpu/drm/i915/display/intel_fb.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_modeset_setup.o
CC drivers/gpu/drm/i915/display/intel_fb_bo.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_modeset_verify.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_panel.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_pfit.o
CC drivers/gpu/drm/i915/display/intel_fb_pin.o
CC drivers/gpu/drm/i915/display/intel_fbc.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_pmdemand.o
CC drivers/gpu/drm/i915/display/intel_fdi.o
CC drivers/gpu/drm/i915/display/intel_fifo_underrun.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_pps.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_psr.o
CC drivers/gpu/drm/i915/display/intel_frontbuffer.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_qp_tables.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_quirks.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_snps_hdmi_pll.o
CC drivers/gpu/drm/i915/display/intel_global_state.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_snps_phy.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_tc.o
CC drivers/gpu/drm/i915/display/intel_hdcp.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_vblank.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_vdsc.o
CC drivers/gpu/drm/i915/display/intel_hdcp_gsc.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_vga.o
CC drivers/gpu/drm/i915/display/intel_hdcp_gsc_message.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_vrr.o
CC drivers/gpu/drm/i915/display/intel_hotplug.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dmc_wl.o
CC drivers/gpu/drm/i915/display/intel_hotplug_irq.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_wm.o
CC [M] drivers/gpu/drm/xe/i915-display/skl_scaler.o
CC drivers/gpu/drm/i915/display/intel_hti.o
CC drivers/gpu/drm/i915/display/intel_link_bw.o
CC drivers/gpu/drm/i915/display/intel_load_detect.o
CC [M] drivers/gpu/drm/xe/i915-display/skl_universal_plane.o
CC drivers/gpu/drm/i915/display/intel_lpe_audio.o
CC drivers/gpu/drm/i915/display/intel_modeset_lock.o
CC [M] drivers/gpu/drm/xe/i915-display/skl_watermark.o
CC drivers/gpu/drm/i915/display/intel_modeset_setup.o
CC drivers/gpu/drm/i915/display/intel_modeset_verify.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_acpi.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_opregion.o
CC [M] drivers/gpu/drm/xe/xe_debugfs.o
CC [M] drivers/gpu/drm/xe/xe_gt_debugfs.o
CC drivers/gpu/drm/i915/display/intel_overlay.o
CC [M] drivers/gpu/drm/xe/xe_gt_sriov_vf_debugfs.o
CC [M] drivers/gpu/drm/xe/xe_gt_stats.o
CC drivers/gpu/drm/i915/display/intel_pch_display.o
CC [M] drivers/gpu/drm/xe/xe_guc_debugfs.o
CC [M] drivers/gpu/drm/xe/xe_huc_debugfs.o
CC [M] drivers/gpu/drm/xe/xe_uc_debugfs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_debugfs.o
CC drivers/gpu/drm/i915/display/intel_pch_refclk.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_debugfs_params.o
CC drivers/gpu/drm/i915/display/intel_plane_initial.o
CC drivers/gpu/drm/i915/display/intel_pmdemand.o
CC drivers/gpu/drm/i915/display/intel_psr.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_pipe_crc.o
CC drivers/gpu/drm/i915/display/intel_quirks.o
CC drivers/gpu/drm/i915/display/intel_sprite.o
CC drivers/gpu/drm/i915/display/intel_sprite_uapi.o
CC drivers/gpu/drm/i915/display/intel_tc.o
CC drivers/gpu/drm/i915/display/intel_vblank.o
CC drivers/gpu/drm/i915/display/intel_vga.o
CC drivers/gpu/drm/i915/display/intel_wm.o
CC drivers/gpu/drm/i915/display/skl_scaler.o
CC drivers/gpu/drm/i915/display/skl_universal_plane.o
CC drivers/gpu/drm/i915/display/skl_watermark.o
CC drivers/gpu/drm/i915/display/intel_acpi.o
CC drivers/gpu/drm/i915/display/intel_opregion.o
CC drivers/gpu/drm/i915/display/intel_display_debugfs.o
CC drivers/gpu/drm/i915/display/intel_display_debugfs_params.o
CC drivers/gpu/drm/i915/display/intel_pipe_crc.o
CC drivers/gpu/drm/i915/display/dvo_ch7017.o
CC drivers/gpu/drm/i915/display/dvo_ch7xxx.o
CC drivers/gpu/drm/i915/display/dvo_ivch.o
CC drivers/gpu/drm/i915/display/dvo_ns2501.o
CC drivers/gpu/drm/i915/display/dvo_sil164.o
CC drivers/gpu/drm/i915/display/dvo_tfp410.o
CC drivers/gpu/drm/i915/display/g4x_dp.o
CC drivers/gpu/drm/i915/display/g4x_hdmi.o
CC drivers/gpu/drm/i915/display/icl_dsi.o
CC drivers/gpu/drm/i915/display/intel_backlight.o
CC drivers/gpu/drm/i915/display/intel_crt.o
CC drivers/gpu/drm/i915/display/intel_cx0_phy.o
CC drivers/gpu/drm/i915/display/intel_ddi.o
CC drivers/gpu/drm/i915/display/intel_ddi_buf_trans.o
CC drivers/gpu/drm/i915/display/intel_display_device.o
CC drivers/gpu/drm/i915/display/intel_display_trace.o
CC drivers/gpu/drm/i915/display/intel_dkl_phy.o
CC drivers/gpu/drm/i915/display/intel_dp.o
CC drivers/gpu/drm/i915/display/intel_dp_aux.o
CC drivers/gpu/drm/i915/display/intel_dp_aux_backlight.o
CC drivers/gpu/drm/i915/display/intel_dp_hdcp.o
CC drivers/gpu/drm/i915/display/intel_dp_link_training.o
CC drivers/gpu/drm/i915/display/intel_dp_mst.o
CC drivers/gpu/drm/i915/display/intel_dp_test.o
CC drivers/gpu/drm/i915/display/intel_dsi.o
CC drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.o
CC drivers/gpu/drm/i915/display/intel_dsi_vbt.o
CC drivers/gpu/drm/i915/display/intel_dvo.o
CC drivers/gpu/drm/i915/display/intel_encoder.o
CC drivers/gpu/drm/i915/display/intel_gmbus.o
CC drivers/gpu/drm/i915/display/intel_hdmi.o
CC drivers/gpu/drm/i915/display/intel_lspcon.o
CC drivers/gpu/drm/i915/display/intel_lvds.o
CC drivers/gpu/drm/i915/display/intel_panel.o
CC drivers/gpu/drm/i915/display/intel_pfit.o
CC drivers/gpu/drm/i915/display/intel_pps.o
CC drivers/gpu/drm/i915/display/intel_qp_tables.o
CC drivers/gpu/drm/i915/display/intel_sdvo.o
CC drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.o
CC drivers/gpu/drm/i915/display/intel_snps_phy.o
CC drivers/gpu/drm/i915/display/intel_tv.o
CC drivers/gpu/drm/i915/display/intel_vdsc.o
CC drivers/gpu/drm/i915/display/intel_vrr.o
CC drivers/gpu/drm/i915/display/vlv_dsi.o
CC drivers/gpu/drm/i915/display/vlv_dsi_pll.o
CC drivers/gpu/drm/i915/i915_perf.o
CC drivers/gpu/drm/i915/pxp/intel_pxp.o
CC drivers/gpu/drm/i915/pxp/intel_pxp_huc.o
CC drivers/gpu/drm/i915/pxp/intel_pxp_tee.o
CC drivers/gpu/drm/i915/i915_gpu_error.o
CC drivers/gpu/drm/i915/i915_vgpu.o
LD [M] drivers/gpu/drm/xe/xe.o
AR drivers/gpu/drm/i915/built-in.a
AR drivers/gpu/drm/built-in.a
AR drivers/gpu/built-in.a
AR drivers/built-in.a
AR built-in.a
AR vmlinux.a
LD vmlinux.o
OBJCOPY modules.builtin.modinfo
GEN modules.builtin
MODPOST Module.symvers
CC .vmlinux.export.o
CC [M] fs/efivarfs/efivarfs.mod.o
CC [M] .module-common.o
CC [M] drivers/gpu/drm/drm_exec.mod.o
CC [M] drivers/gpu/drm/drm_gpuvm.mod.o
CC [M] drivers/gpu/drm/drm_suballoc_helper.mod.o
CC [M] drivers/gpu/drm/drm_ttm_helper.mod.o
CC [M] drivers/gpu/drm/scheduler/gpu-sched.mod.o
CC [M] drivers/gpu/drm/xe/xe.mod.o
CC [M] drivers/thermal/intel/x86_pkg_temp_thermal.mod.o
CC [M] net/netfilter/nf_log_syslog.mod.o
CC [M] net/netfilter/xt_nat.mod.o
CC [M] net/netfilter/xt_mark.mod.o
CC [M] net/netfilter/xt_LOG.mod.o
CC [M] net/netfilter/xt_MASQUERADE.mod.o
CC [M] net/netfilter/xt_addrtype.mod.o
CC [M] net/ipv4/netfilter/iptable_nat.mod.o
LD [M] fs/efivarfs/efivarfs.ko
LD [M] drivers/gpu/drm/drm_exec.ko
LD [M] drivers/gpu/drm/drm_gpuvm.ko
LD [M] drivers/thermal/intel/x86_pkg_temp_thermal.ko
LD [M] net/netfilter/xt_MASQUERADE.ko
LD [M] net/ipv4/netfilter/iptable_nat.ko
LD [M] drivers/gpu/drm/drm_suballoc_helper.ko
LD [M] net/netfilter/nf_log_syslog.ko
LD [M] net/netfilter/xt_addrtype.ko
LD [M] drivers/gpu/drm/xe/xe.ko
LD [M] net/netfilter/xt_LOG.ko
LD [M] net/netfilter/xt_nat.ko
LD [M] net/netfilter/xt_mark.ko
LD [M] drivers/gpu/drm/scheduler/gpu-sched.ko
LD [M] drivers/gpu/drm/drm_ttm_helper.ko
UPD include/generated/utsversion.h
CC init/version-timestamp.o
KSYMS .tmp_vmlinux0.kallsyms.S
AS .tmp_vmlinux0.kallsyms.o
LD .tmp_vmlinux1
NM .tmp_vmlinux1.syms
KSYMS .tmp_vmlinux1.kallsyms.S
AS .tmp_vmlinux1.kallsyms.o
LD .tmp_vmlinux2
NM .tmp_vmlinux2.syms
KSYMS .tmp_vmlinux2.kallsyms.S
AS .tmp_vmlinux2.kallsyms.o
LD vmlinux
NM System.map
SORTTAB vmlinux
RELOCS arch/x86/boot/compressed/vmlinux.relocs
RSTRIP vmlinux
CC arch/x86/boot/a20.o
AS arch/x86/boot/bioscall.o
CC arch/x86/boot/cmdline.o
AS arch/x86/boot/copy.o
HOSTCC arch/x86/boot/mkcpustr
CC arch/x86/boot/cpuflags.o
CC arch/x86/boot/cpucheck.o
CC arch/x86/boot/early_serial_console.o
CC arch/x86/boot/edd.o
CC arch/x86/boot/main.o
CC arch/x86/boot/memory.o
CC arch/x86/boot/pm.o
AS arch/x86/boot/pmjump.o
CC arch/x86/boot/printf.o
CC arch/x86/boot/regs.o
CC arch/x86/boot/string.o
CC arch/x86/boot/tty.o
CC arch/x86/boot/video.o
CC arch/x86/boot/video-mode.o
CC arch/x86/boot/version.o
CC arch/x86/boot/video-vga.o
CC arch/x86/boot/video-vesa.o
CC arch/x86/boot/video-bios.o
HOSTCC arch/x86/boot/tools/build
CPUSTR arch/x86/boot/cpustr.h
LDS arch/x86/boot/compressed/vmlinux.lds
AS arch/x86/boot/compressed/kernel_info.o
CC arch/x86/boot/cpu.o
AS arch/x86/boot/compressed/head_32.o
VOFFSET arch/x86/boot/compressed/../voffset.h
CC arch/x86/boot/compressed/string.o
CC arch/x86/boot/compressed/cmdline.o
CC arch/x86/boot/compressed/error.o
OBJCOPY arch/x86/boot/compressed/vmlinux.bin
HOSTCC arch/x86/boot/compressed/mkpiggy
CC arch/x86/boot/compressed/cpuflags.o
CC arch/x86/boot/compressed/early_serial_console.o
CC arch/x86/boot/compressed/kaslr.o
CC arch/x86/boot/compressed/acpi.o
CC arch/x86/boot/compressed/efi.o
GZIP arch/x86/boot/compressed/vmlinux.bin.gz
CC arch/x86/boot/compressed/misc.o
MKPIGGY arch/x86/boot/compressed/piggy.S
AS arch/x86/boot/compressed/piggy.o
LD arch/x86/boot/compressed/vmlinux
ZOFFSET arch/x86/boot/zoffset.h
OBJCOPY arch/x86/boot/vmlinux.bin
AS arch/x86/boot/header.o
LD arch/x86/boot/setup.elf
OBJCOPY arch/x86/boot/setup.bin
BUILD arch/x86/boot/bzImage
Kernel: arch/x86/boot/bzImage is ready (#1)
run-parts: executing /workspace/ci/hooks/20-kernel-doc
+ SRC_DIR=/workspace/kernel
+ cd /workspace/kernel
+ find drivers/gpu/drm/xe/ -name '*.[ch]' -not -path 'drivers/gpu/drm/xe/display/*'
+ xargs ./scripts/kernel-doc -Werror -none include/uapi/drm/xe_drm.h
All hooks done
^ permalink raw reply [flat|nested] 35+ messages in thread
* ✓ CI.checksparse: success for drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
` (9 preceding siblings ...)
2025-01-23 11:15 ` ✓ CI.Hooks: " Patchwork
@ 2025-01-23 11:17 ` Patchwork
2025-01-23 11:45 ` ✓ Xe.CI.BAT: " Patchwork
2025-01-24 0:02 ` ✗ Xe.CI.Full: failure " Patchwork
12 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2025-01-23 11:17 UTC (permalink / raw)
To: Philipp Stanner; +Cc: intel-xe
== Series Details ==
Series: drm/sched: Use struct for drm_sched_init() params
URL : https://patchwork.freedesktop.org/series/143883/
State : success
== Summary ==
+ trap cleanup EXIT
+ KERNEL=/kernel
+ MT=/root/linux/maintainer-tools
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
Cloning into '/root/linux/maintainer-tools'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ make -C /root/linux/maintainer-tools
make: Entering directory '/root/linux/maintainer-tools'
cc -O2 -g -Wextra -o remap-log remap-log.c
make: Leaving directory '/root/linux/maintainer-tools'
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ /root/linux/maintainer-tools/dim sparse --fast 79f0f76f7f03b0667e10fe2fa3a75b2f8727b8de
Sparse version: 0.6.4 (Ubuntu: 0.6.4-4ubuntu3)
Fast mode used, each commit won't be checked separately.
Okay!
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* ✓ Xe.CI.BAT: success for drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
` (10 preceding siblings ...)
2025-01-23 11:17 ` ✓ CI.checksparse: " Patchwork
@ 2025-01-23 11:45 ` Patchwork
2025-01-24 0:02 ` ✗ Xe.CI.Full: failure " Patchwork
12 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2025-01-23 11:45 UTC (permalink / raw)
To: Philipp Stanner; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 3560 bytes --]
== Series Details ==
Series: drm/sched: Use struct for drm_sched_init() params
URL : https://patchwork.freedesktop.org/series/143883/
State : success
== Summary ==
CI Bug Log - changes from xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4_BAT -> xe-pw-143883v1_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (9 -> 8)
------------------------------
Missing (1): bat-adlp-vm
Known issues
------------
Here are the changes found in xe-pw-143883v1_BAT that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit:
- bat-adlp-vf: NOTRUN -> [SKIP][1] ([Intel XE#2229] / [Intel XE#455]) +1 other test skip
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/bat-adlp-vf/igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit.html
* igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit:
- bat-adlp-vf: NOTRUN -> [SKIP][2] ([Intel XE#2229])
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/bat-adlp-vf/igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit.html
#### Possible fixes ####
* igt@xe_live_ktest@xe_migrate:
- bat-adlp-vf: [SKIP][3] ([Intel XE#1192]) -> [PASS][4] +1 other test pass
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/bat-adlp-vf/igt@xe_live_ktest@xe_migrate.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/bat-adlp-vf/igt@xe_live_ktest@xe_migrate.html
* igt@xe_pat@pat-index-xelp@render:
- bat-adlp-vf: [DMESG-WARN][5] ([Intel XE#3970] / [Intel XE#4078]) -> [PASS][6] +1 other test pass
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/bat-adlp-vf/igt@xe_pat@pat-index-xelp@render.html
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/bat-adlp-vf/igt@xe_pat@pat-index-xelp@render.html
* igt@xe_vm@shared-pte-page:
- bat-adlp-vf: [DMESG-WARN][7] ([Intel XE#4078]) -> [PASS][8]
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/bat-adlp-vf/igt@xe_vm@shared-pte-page.html
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/bat-adlp-vf/igt@xe_vm@shared-pte-page.html
#### Warnings ####
* igt@xe_live_ktest@xe_bo:
- bat-adlp-vf: [SKIP][9] ([Intel XE#1192]) -> [SKIP][10] ([Intel XE#2229] / [Intel XE#455])
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/bat-adlp-vf/igt@xe_live_ktest@xe_bo.html
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/bat-adlp-vf/igt@xe_live_ktest@xe_bo.html
[Intel XE#1192]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1192
[Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
[Intel XE#3970]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3970
[Intel XE#4078]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4078
[Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
Build changes
-------------
* Linux: xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4 -> xe-pw-143883v1
IGT_8207: 9f36f9f9e8825a67b762630c2b31628ddcda5c10 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4: 33906a7032dfca356fd7309e6abbdf5a7ea97db4
xe-pw-143883v1: 143883v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/index.html
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-23 11:10 ` Maíra Canal
@ 2025-01-23 12:13 ` Philipp Stanner
2025-01-23 12:29 ` Maíra Canal
0 siblings, 1 reply; 35+ messages in thread
From: Philipp Stanner @ 2025-01-23 12:13 UTC (permalink / raw)
To: Maíra Canal, Philipp Stanner, Alex Deucher,
Christian König, Xinhui Pan, David Airlie, Simona Vetter,
Lucas Stach, Russell King, Christian Gmeiner, Frank Binns,
Matt Coster, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Qiang Yu, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
Dmitry Baryshkov, Marijn Suijten, Karol Herbst, Lyude Paul,
Danilo Krummrich, Boris Brezillon, Rob Herring, Steven Price,
Liviu Dudau, Luben Tuikov, Matthew Brost, Melissa Wen,
Lucas De Marchi, Thomas Hellström, Rodrigo Vivi,
Sunil Khatri, Lijo Lazar, Mario Limonciello, Ma Jun, Yunxiang Li
Cc: amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe
On Thu, 2025-01-23 at 08:10 -0300, Maíra Canal wrote:
> Hi Philipp,
>
> On 23/01/25 05:10, Philipp Stanner wrote:
> > On Wed, 2025-01-22 at 19:07 -0300, Maíra Canal wrote:
> > > Hi Philipp,
> > >
> > > On 22/01/25 11:08, Philipp Stanner wrote:
> > > > drm_sched_init() has a great many parameters and upcoming new
> > > > functionality for the scheduler might add even more. Generally,
> > > > the great number of parameters reduces readability and has
> > > > already caused one misnaming in:
> > > >
> > > > commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in
> > > > nouveau_sched_init()").
> > > >
> > > > Introduce a new struct for the scheduler init parameters and
> > > > port all users.
> > > >
> > > > Signed-off-by: Philipp Stanner <phasta@kernel.org>
>
> [...]
>
> > >
> > > > diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
> > > > index 99ac4995b5a1..716e6d074d87 100644
> > > > --- a/drivers/gpu/drm/v3d/v3d_sched.c
> > > > +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> > > > @@ -814,67 +814,124 @@ static const struct drm_sched_backend_ops v3d_cpu_sched_ops = {
> > > >  	.free_job = v3d_cpu_job_free
> > > >  };
> > > >
> > > > +/*
> > > > + * v3d's scheduler instances are all identical, except for ops
> > > > and
> > > > name.
> > > > + */
> > > > +static void
> > > > +v3d_common_sched_init(struct drm_sched_init_params *params,
> > > > struct
> > > > device *dev)
> > > > +{
> > > > + memset(params, 0, sizeof(struct
> > > > drm_sched_init_params));
> > > > +
> > > > + params->submit_wq = NULL; /* Use the system_wq. */
> > > > + params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
> > > > + params->credit_limit = 1;
> > > > + params->hang_limit = 0;
> > > > + params->timeout = msecs_to_jiffies(500);
> > > > + params->timeout_wq = NULL; /* Use the system_wq. */
> > > > + params->score = NULL;
> > > > + params->dev = dev;
> > > > +}
> > >
> > > Could we use only one function that takes struct v3d_dev *v3d,
> > > enum
> > > v3d_queue, and sched_ops as arguments (instead of one function
> > > per
> > > queue)? You can get the name of the scheduler by concatenating
> > > "v3d_"
> > > to
> > > the return of v3d_queue_to_string().
> > >
> > > I believe it would make the code much simpler.
> >
> > Hello,
> >
> > so just to get that right:
> > You'd like to have one universal function that switch-cases over an
> > enum, sets the ops and creates the name with string concatenation?
> >
> > I'm not convinced that this is simpler than a few small functions,
> > but
> > it's not my component, so…
> >
> > Whatever we'll do will be simpler than the existing code, though.
> > Right
> > now no reader can see at first glance whether all those schedulers
> > are
> > identically parametrized or not.
> >
>
> This is my proposal (just a quick draft, please check if it
> compiles):
>
> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c
> b/drivers/gpu/drm/v3d/v3d_sched.c
> index 961465128d80..7cc45a0c6ca0 100644
> --- a/drivers/gpu/drm/v3d/v3d_sched.c
> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
> @@ -820,67 +820,62 @@ static const struct drm_sched_backend_ops
> v3d_cpu_sched_ops = {
> .free_job = v3d_cpu_job_free
> };
>
> +static int
> +v3d_sched_queue_init(struct v3d_dev *v3d, enum v3d_queue queue,
> + const struct drm_sched_backend_ops *ops, const
Is it a queue, though?
How about _v3d_sched_init()?
P.
> char
> *name)
> +{
> + struct drm_sched_init_params params = {
> + .submit_wq = NULL,
> + .num_rqs = DRM_SCHED_PRIORITY_COUNT,
> + .credit_limit = 1,
> + .hang_limit = 0,
> + .timeout = msecs_to_jiffies(500),
> + .timeout_wq = NULL,
> + .score = NULL,
> + .dev = v3d->drm.dev,
> + };
> +
> + params.ops = ops;
> + params.name = name;
> +
> > + return drm_sched_init(&v3d->queue[queue].sched, &params);
> +}
> +
> int
> v3d_sched_init(struct v3d_dev *v3d)
> {
> - int hw_jobs_limit = 1;
> - int job_hang_limit = 0;
> - int hang_limit_ms = 500;
> int ret;
>
> - ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
> - &v3d_bin_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - hw_jobs_limit, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_bin", v3d->drm.dev);
> + ret = v3d_sched_queue_init(v3d, V3D_BIN, &v3d_bin_sched_ops,
> + "v3d_bin");
> if (ret)
> return ret;
>
> - ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
> - &v3d_render_sched_ops, NULL,
> - DRM_SCHED_PRIORITY_COUNT,
> - hw_jobs_limit, job_hang_limit,
> - msecs_to_jiffies(hang_limit_ms), NULL,
> - NULL, "v3d_render", v3d->drm.dev);
> + ret = v3d_sched_queue_init(v3d, V3D_RENDER,
> &v3d_render_sched_ops,
> + "v3d_render");
> if (ret)
> goto fail;
>
> [...]
>
> At least for me, this looks much simpler than one function for each
> V3D queue.
>
> Best Regards,
> - Maíra
>
> > P.
> >
> >
> > >
> > > Best Regards,
> > > - Maíra
> > >
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
2025-01-23 12:13 ` Philipp Stanner
@ 2025-01-23 12:29 ` Maíra Canal
0 siblings, 0 replies; 35+ messages in thread
From: Maíra Canal @ 2025-01-23 12:29 UTC (permalink / raw)
To: phasta, Alex Deucher, Christian König, Xinhui Pan,
David Airlie, Simona Vetter, Lucas Stach, Russell King,
Christian Gmeiner, Frank Binns, Matt Coster, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Qiang Yu, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
Karol Herbst, Lyude Paul, Danilo Krummrich, Boris Brezillon,
Rob Herring, Steven Price, Liviu Dudau, Luben Tuikov,
Matthew Brost, Melissa Wen, Lucas De Marchi,
Thomas Hellström, Rodrigo Vivi, Sunil Khatri, Lijo Lazar,
Mario Limonciello, Ma Jun, Yunxiang Li
Cc: amd-gfx, dri-devel, linux-kernel, etnaviv, lima, linux-arm-msm,
freedreno, nouveau, intel-xe, mcanal
Hi Philipp,
On 23/01/25 09:13, Philipp Stanner wrote:
> On Thu, 2025-01-23 at 08:10 -0300, Maíra Canal wrote:
>> Hi Philipp,
>>
>> On 23/01/25 05:10, Philipp Stanner wrote:
>>> On Wed, 2025-01-22 at 19:07 -0300, Maíra Canal wrote:
>>>> Hi Philipp,
>>>>
>>>> On 22/01/25 11:08, Philipp Stanner wrote:
>>>>> drm_sched_init() has a great many parameters and upcoming new
>>>>> functionality for the scheduler might add even more. Generally,
>>>>> the
>>>>> great number of parameters reduces readability and has already
>>>>> caused
>>>>> one misnaming in:
>>>>>
>>>>> commit 6f1cacf4eba7 ("drm/nouveau: Improve variable name in
>>>>> nouveau_sched_init()").
>>>>>
>>>>> Introduce a new struct for the scheduler init parameters and
>>>>> port
>>>>> all
>>>>> users.
>>>>>
>>>>> Signed-off-by: Philipp Stanner <phasta@kernel.org>
>>
>> [...]
>>
>>>>
>>>>> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c
>>>>> b/drivers/gpu/drm/v3d/v3d_sched.c
>>>>> index 99ac4995b5a1..716e6d074d87 100644
>>>>> --- a/drivers/gpu/drm/v3d/v3d_sched.c
>>>>> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
>>>>> @@ -814,67 +814,124 @@ static const struct
>>>>> drm_sched_backend_ops
>>>>> v3d_cpu_sched_ops = {
>>>>> .free_job = v3d_cpu_job_free
>>>>> };
>>>>>
>>>>> +/*
>>>>> + * v3d's scheduler instances are all identical, except for ops
>>>>> and
>>>>> name.
>>>>> + */
>>>>> +static void
>>>>> +v3d_common_sched_init(struct drm_sched_init_params *params,
>>>>> struct
>>>>> device *dev)
>>>>> +{
>>>>> + memset(params, 0, sizeof(struct
>>>>> drm_sched_init_params));
>>>>> +
>>>>> + params->submit_wq = NULL; /* Use the system_wq. */
>>>>> + params->num_rqs = DRM_SCHED_PRIORITY_COUNT;
>>>>> + params->credit_limit = 1;
>>>>> + params->hang_limit = 0;
>>>>> + params->timeout = msecs_to_jiffies(500);
>>>>> + params->timeout_wq = NULL; /* Use the system_wq. */
>>>>> + params->score = NULL;
>>>>> + params->dev = dev;
>>>>> +}
>>>>
>>>> Could we use only one function that takes struct v3d_dev *v3d,
>>>> enum
>>>> v3d_queue, and sched_ops as arguments (instead of one function
>>>> per
>>>> queue)? You can get the name of the scheduler by concatenating
>>>> "v3d_"
>>>> to
>>>> the return of v3d_queue_to_string().
>>>>
>>>> I believe it would make the code much simpler.
>>>
>>> Hello,
>>>
>>> so just to get that right:
>>> You'd like to have one universal function that switch-cases over an
>>> enum, sets the ops and creates the name with string concatenation?
>>>
>>> I'm not convinced that this is simpler than a few small functions,
>>> but
>>> it's not my component, so…
>>>
>>> Whatever we'll do will be simpler than the existing code, though.
>>> Right
>>> now no reader can see at first glance whether all those schedulers
>>> are
>>> identically parametrized or not.
>>>
>>
>> This is my proposal (just a quick draft, please check if it
>> compiles):
>>
>> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c
>> b/drivers/gpu/drm/v3d/v3d_sched.c
>> index 961465128d80..7cc45a0c6ca0 100644
>> --- a/drivers/gpu/drm/v3d/v3d_sched.c
>> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
>> @@ -820,67 +820,62 @@ static const struct drm_sched_backend_ops
>> v3d_cpu_sched_ops = {
>> .free_job = v3d_cpu_job_free
>> };
>>
>> +static int
>> +v3d_sched_queue_init(struct v3d_dev *v3d, enum v3d_queue queue,
>> + const struct drm_sched_backend_ops *ops, const
>
> Is it a queue, though?
In V3D, we use the abstraction of a queue for everything related to job
submission. For each queue, we have a scheduler instance, a different
IOCTL and such. The queues work independently and the synchronization
between them can be done through syncobjs.
>
> How about _v3d_sched_init()?
>
I'd prefer if you use a function name related to "queue", as it would
make more sense semantically.
Best Regards,
- Maíra
> P.
>
>> char
>> *name)
>> +{
>> + struct drm_sched_init_params params = {
>> + .submit_wq = NULL,
>> + .num_rqs = DRM_SCHED_PRIORITY_COUNT,
>> + .credit_limit = 1,
>> + .hang_limit = 0,
>> + .timeout = msecs_to_jiffies(500),
>> + .timeout_wq = NULL,
>> + .score = NULL,
>> + .dev = v3d->drm.dev,
>> + };
>> +
>> + params.ops = ops;
>> + params.name = name;
>> +
>> + return drm_sched_init(&v3d->queue[queue].sched, &params);
>> +}
>> +
>> int
>> v3d_sched_init(struct v3d_dev *v3d)
>> {
>> - int hw_jobs_limit = 1;
>> - int job_hang_limit = 0;
>> - int hang_limit_ms = 500;
>> int ret;
>>
>> - ret = drm_sched_init(&v3d->queue[V3D_BIN].sched,
>> - &v3d_bin_sched_ops, NULL,
>> - DRM_SCHED_PRIORITY_COUNT,
>> - hw_jobs_limit, job_hang_limit,
>> - msecs_to_jiffies(hang_limit_ms), NULL,
>> - NULL, "v3d_bin", v3d->drm.dev);
>> + ret = v3d_sched_queue_init(v3d, V3D_BIN, &v3d_bin_sched_ops,
>> + "v3d_bin");
>> if (ret)
>> return ret;
>>
>> - ret = drm_sched_init(&v3d->queue[V3D_RENDER].sched,
>> - &v3d_render_sched_ops, NULL,
>> - DRM_SCHED_PRIORITY_COUNT,
>> - hw_jobs_limit, job_hang_limit,
>> - msecs_to_jiffies(hang_limit_ms), NULL,
>> - NULL, "v3d_render", v3d->drm.dev);
>> + ret = v3d_sched_queue_init(v3d, V3D_RENDER,
>> &v3d_render_sched_ops,
>> + "v3d_render");
>> if (ret)
>> goto fail;
>>
>> [...]
>>
>> At least for me, this looks much simpler than one function for each
>> V3D queue.
>>
>> Best Regards,
>> - Maíra
>>
>>> P.
>>>
>>>
>>>>
>>>> Best Regards,
>>>> - Maíra
>>>>
>>
>
^ permalink raw reply [flat|nested] 35+ messages in thread
* ✗ Xe.CI.Full: failure for drm/sched: Use struct for drm_sched_init() params
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
` (11 preceding siblings ...)
2025-01-23 11:45 ` ✓ Xe.CI.BAT: " Patchwork
@ 2025-01-24 0:02 ` Patchwork
12 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2025-01-24 0:02 UTC (permalink / raw)
To: Philipp Stanner; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 77009 bytes --]
== Series Details ==
Series: drm/sched: Use struct for drm_sched_init() params
URL : https://patchwork.freedesktop.org/series/143883/
State : failure
== Summary ==
CI Bug Log - changes from xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4_full -> xe-pw-143883v1_full
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-143883v1_full absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-143883v1_full, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (4 -> 4)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-143883v1_full:
### IGT changes ###
#### Possible regressions ####
* igt@kms_content_protection@atomic-dpms@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [INCOMPLETE][1]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-3/igt@kms_content_protection@atomic-dpms@pipe-a-dp-2.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size:
- shard-dg2-set2: NOTRUN -> [DMESG-WARN][2] +1 other test dmesg-warn
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size.html
* igt@kms_setmode@invalid-clone-single-crtc-stealing@pipe-a-hdmi-a-6-dp-4:
- shard-dg2-set2: [PASS][3] -> [DMESG-WARN][4] +2 other tests dmesg-warn
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-433/igt@kms_setmode@invalid-clone-single-crtc-stealing@pipe-a-hdmi-a-6-dp-4.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-463/igt@kms_setmode@invalid-clone-single-crtc-stealing@pipe-a-hdmi-a-6-dp-4.html
* igt@xe_ccs@block-copy-uncompressed-inc-dimension:
- shard-bmg: [PASS][5] -> [INCOMPLETE][6]
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@xe_ccs@block-copy-uncompressed-inc-dimension.html
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-7/igt@xe_ccs@block-copy-uncompressed-inc-dimension.html
#### Warnings ####
* igt@kms_content_protection@atomic-dpms:
- shard-bmg: [SKIP][7] ([Intel XE#2341]) -> [INCOMPLETE][8]
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_content_protection@atomic-dpms.html
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-3/igt@kms_content_protection@atomic-dpms.html
* igt@kms_cursor_crc@cursor-sliding-128x42:
- shard-dg2-set2: [DMESG-WARN][9] ([Intel XE#1033]) -> [INCOMPLETE][10] +1 other test incomplete
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-434/igt@kms_cursor_crc@cursor-sliding-128x42.html
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-436/igt@kms_cursor_crc@cursor-sliding-128x42.html
Known issues
------------
Here are the changes found in xe-pw-143883v1_full that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
- shard-dg2-set2: NOTRUN -> [SKIP][11] ([Intel XE#623])
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
* igt@kms_async_flips@invalid-async-flip-atomic@pipe-b-hdmi-a-1:
- shard-adlp: [PASS][12] -> [DMESG-WARN][13] ([Intel XE#1033]) +3 other tests dmesg-warn
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-adlp-6/igt@kms_async_flips@invalid-async-flip-atomic@pipe-b-hdmi-a-1.html
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-4/igt@kms_async_flips@invalid-async-flip-atomic@pipe-b-hdmi-a-1.html
* igt@kms_big_fb@4-tiled-8bpp-rotate-270:
- shard-dg2-set2: NOTRUN -> [SKIP][14] ([Intel XE#316])
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_big_fb@4-tiled-8bpp-rotate-270.html
* igt@kms_big_fb@4-tiled-8bpp-rotate-90:
- shard-lnl: NOTRUN -> [SKIP][15] ([Intel XE#1407])
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_big_fb@4-tiled-8bpp-rotate-90.html
* igt@kms_big_fb@linear-8bpp-rotate-270:
- shard-adlp: NOTRUN -> [SKIP][16] ([Intel XE#316])
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_big_fb@linear-8bpp-rotate-270.html
* igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-hflip:
- shard-adlp: NOTRUN -> [FAIL][17] ([Intel XE#1874])
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-hflip.html
* igt@kms_big_fb@y-tiled-addfb-size-offset-overflow:
- shard-dg2-set2: NOTRUN -> [SKIP][18] ([Intel XE#607])
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_big_fb@y-tiled-addfb-size-offset-overflow.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180:
- shard-bmg: NOTRUN -> [SKIP][19] ([Intel XE#1124]) +1 other test skip
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
- shard-adlp: NOTRUN -> [DMESG-FAIL][20] ([Intel XE#1033]) +3 other tests dmesg-fail
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-hflip:
- shard-dg2-set2: NOTRUN -> [SKIP][21] ([Intel XE#1124]) +2 other tests skip
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-hflip.html
* igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p:
- shard-bmg: [PASS][22] -> [SKIP][23] ([Intel XE#2314] / [Intel XE#2894])
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-1/igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p.html
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p.html
* igt@kms_bw@connected-linear-tiling-4-displays-1920x1080p:
- shard-dg2-set2: NOTRUN -> [SKIP][24] ([Intel XE#2191]) +1 other test skip
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@kms_bw@connected-linear-tiling-4-displays-1920x1080p.html
* igt@kms_bw@linear-tiling-2-displays-3840x2160p:
- shard-dg2-set2: NOTRUN -> [SKIP][25] ([Intel XE#367]) +2 other tests skip
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_bw@linear-tiling-2-displays-3840x2160p.html
* igt@kms_ccs@bad-pixel-format-4-tiled-dg2-mc-ccs:
- shard-adlp: NOTRUN -> [SKIP][26] ([Intel XE#455] / [Intel XE#787]) +5 other tests skip
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_ccs@bad-pixel-format-4-tiled-dg2-mc-ccs.html
* igt@kms_ccs@bad-pixel-format-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][27] ([Intel XE#787]) +8 other tests skip
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_ccs@bad-pixel-format-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-1.html
* igt@kms_ccs@ccs-on-another-bo-y-tiled-gen12-mc-ccs:
- shard-lnl: NOTRUN -> [SKIP][28] ([Intel XE#2887]) +3 other tests skip
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_ccs@ccs-on-another-bo-y-tiled-gen12-mc-ccs.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4:
- shard-dg2-set2: [PASS][29] -> [DMESG-WARN][30] ([Intel XE#1033]) +4 other tests dmesg-warn
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-435/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4.html
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs@pipe-c-dp-4:
- shard-dg2-set2: [PASS][31] -> [INCOMPLETE][32] ([Intel XE#3862])
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-463/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs@pipe-c-dp-4.html
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs@pipe-c-dp-4.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs:
- shard-dg2-set2: NOTRUN -> [SKIP][33] ([Intel XE#3442])
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-mc-ccs:
- shard-bmg: NOTRUN -> [SKIP][34] ([Intel XE#2887]) +1 other test skip
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-mc-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-mc-ccs@pipe-b-hdmi-a-6:
- shard-dg2-set2: NOTRUN -> [SKIP][35] ([Intel XE#787]) +195 other tests skip
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-mc-ccs@pipe-b-hdmi-a-6.html
* igt@kms_ccs@random-ccs-data-4-tiled-mtl-mc-ccs@pipe-d-dp-2:
- shard-dg2-set2: NOTRUN -> [SKIP][36] ([Intel XE#455] / [Intel XE#787]) +37 other tests skip
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-432/igt@kms_ccs@random-ccs-data-4-tiled-mtl-mc-ccs@pipe-d-dp-2.html
* igt@kms_cdclk@mode-transition@pipe-d-dp-4:
- shard-dg2-set2: NOTRUN -> [SKIP][37] ([Intel XE#314]) +3 other tests skip
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-463/igt@kms_cdclk@mode-transition@pipe-d-dp-4.html
* igt@kms_chamelium_color@ctm-negative:
- shard-lnl: NOTRUN -> [SKIP][38] ([Intel XE#306])
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_chamelium_color@ctm-negative.html
* igt@kms_chamelium_color@ctm-red-to-blue:
- shard-dg2-set2: NOTRUN -> [SKIP][39] ([Intel XE#306])
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@kms_chamelium_color@ctm-red-to-blue.html
* igt@kms_chamelium_edid@hdmi-edid-change-during-hibernate:
- shard-bmg: NOTRUN -> [SKIP][40] ([Intel XE#2252]) +2 other tests skip
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_chamelium_edid@hdmi-edid-change-during-hibernate.html
* igt@kms_chamelium_frames@hdmi-cmp-planes-random:
- shard-adlp: NOTRUN -> [SKIP][41] ([Intel XE#373])
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_chamelium_frames@hdmi-cmp-planes-random.html
* igt@kms_chamelium_frames@hdmi-crc-planes-random:
- shard-lnl: NOTRUN -> [SKIP][42] ([Intel XE#373]) +1 other test skip
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-7/igt@kms_chamelium_frames@hdmi-crc-planes-random.html
* igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode:
- shard-dg2-set2: NOTRUN -> [SKIP][43] ([Intel XE#373]) +6 other tests skip
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode.html
* igt@kms_content_protection@atomic@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [FAIL][44] ([Intel XE#1178])
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-7/igt@kms_content_protection@atomic@pipe-a-dp-2.html
* igt@kms_content_protection@dp-mst-lic-type-0:
- shard-dg2-set2: NOTRUN -> [SKIP][45] ([Intel XE#307])
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_content_protection@dp-mst-lic-type-0.html
* igt@kms_content_protection@lic-type-0@pipe-a-dp-2:
- shard-dg2-set2: NOTRUN -> [FAIL][46] ([Intel XE#1178])
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-432/igt@kms_content_protection@lic-type-0@pipe-a-dp-2.html
* igt@kms_content_protection@type1:
- shard-bmg: NOTRUN -> [SKIP][47] ([Intel XE#2341])
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_content_protection@type1.html
* igt@kms_cursor_crc@cursor-offscreen-512x170:
- shard-lnl: NOTRUN -> [SKIP][48] ([Intel XE#2321])
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_cursor_crc@cursor-offscreen-512x170.html
* igt@kms_cursor_crc@cursor-offscreen-max-size:
- shard-adlp: NOTRUN -> [SKIP][49] ([Intel XE#455]) +3 other tests skip
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-8/igt@kms_cursor_crc@cursor-offscreen-max-size.html
* igt@kms_cursor_crc@cursor-onscreen-512x512:
- shard-adlp: NOTRUN -> [SKIP][50] ([Intel XE#308])
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_cursor_crc@cursor-onscreen-512x512.html
* igt@kms_cursor_crc@cursor-random-max-size:
- shard-bmg: NOTRUN -> [SKIP][51] ([Intel XE#2320])
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_cursor_crc@cursor-random-max-size.html
* igt@kms_cursor_crc@cursor-rapid-movement-512x512:
- shard-dg2-set2: NOTRUN -> [SKIP][52] ([Intel XE#308])
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_cursor_crc@cursor-rapid-movement-512x512.html
* igt@kms_cursor_crc@cursor-sliding-32x32:
- shard-lnl: NOTRUN -> [SKIP][53] ([Intel XE#1424]) +1 other test skip
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_cursor_crc@cursor-sliding-32x32.html
* igt@kms_cursor_edge_walk@256x256-right-edge@pipe-d-hdmi-a-3:
- shard-bmg: NOTRUN -> [DMESG-WARN][54] ([Intel XE#1033]) +3 other tests dmesg-warn
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_cursor_edge_walk@256x256-right-edge@pipe-d-hdmi-a-3.html
* igt@kms_cursor_legacy@2x-flip-vs-cursor-atomic:
- shard-bmg: [PASS][55] -> [SKIP][56] ([Intel XE#2291]) +1 other test skip
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-1/igt@kms_cursor_legacy@2x-flip-vs-cursor-atomic.html
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_cursor_legacy@2x-flip-vs-cursor-atomic.html
* igt@kms_cursor_legacy@2x-long-flip-vs-cursor-atomic:
- shard-dg2-set2: NOTRUN -> [INCOMPLETE][57] ([Intel XE#3226])
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_cursor_legacy@2x-long-flip-vs-cursor-atomic.html
* igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
- shard-dg2-set2: NOTRUN -> [SKIP][58] ([Intel XE#323])
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
* igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size:
- shard-lnl: NOTRUN -> [SKIP][59] ([Intel XE#309]) +1 other test skip
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size:
- shard-bmg: NOTRUN -> [SKIP][60] ([Intel XE#2291]) +1 other test skip
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-toggle:
- shard-bmg: [PASS][61] -> [DMESG-WARN][62] ([Intel XE#877])
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-3/igt@kms_cursor_legacy@cursorb-vs-flipa-toggle.html
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-5/igt@kms_cursor_legacy@cursorb-vs-flipa-toggle.html
* igt@kms_display_modes@extended-mode-basic:
- shard-bmg: NOTRUN -> [SKIP][63] ([Intel XE#2425])
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_display_modes@extended-mode-basic.html
* igt@kms_dp_aux_dev:
- shard-bmg: [PASS][64] -> [SKIP][65] ([Intel XE#3009])
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-5/igt@kms_dp_aux_dev.html
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_dp_aux_dev.html
* igt@kms_dsc@dsc-basic:
- shard-bmg: NOTRUN -> [SKIP][66] ([Intel XE#2244])
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_dsc@dsc-basic.html
* igt@kms_feature_discovery@display-3x:
- shard-dg2-set2: NOTRUN -> [SKIP][67] ([Intel XE#703])
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_feature_discovery@display-3x.html
* igt@kms_feature_discovery@display-4x:
- shard-dg2-set2: NOTRUN -> [SKIP][68] ([Intel XE#1138])
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_feature_discovery@display-4x.html
* igt@kms_flip@2x-absolute-wf_vblank-interruptible:
- shard-lnl: NOTRUN -> [SKIP][69] ([Intel XE#1421]) +1 other test skip
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_flip@2x-absolute-wf_vblank-interruptible.html
* igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ad-dp2-hdmi-a3:
- shard-bmg: [PASS][70] -> [FAIL][71] ([Intel XE#3321])
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-8/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ad-dp2-hdmi-a3.html
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-5/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ad-dp2-hdmi-a3.html
* igt@kms_flip@2x-nonexisting-fb:
- shard-bmg: [PASS][72] -> [SKIP][73] ([Intel XE#2316]) +6 other tests skip
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-5/igt@kms_flip@2x-nonexisting-fb.html
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_flip@2x-nonexisting-fb.html
* igt@kms_flip@2x-plain-flip-interruptible:
- shard-adlp: NOTRUN -> [SKIP][74] ([Intel XE#310])
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_flip@2x-plain-flip-interruptible.html
* igt@kms_flip@absolute-wf_vblank:
- shard-dg2-set2: NOTRUN -> [DMESG-WARN][75] ([Intel XE#1033]) +53 other tests dmesg-warn
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_flip@absolute-wf_vblank.html
* igt@kms_flip@flip-vs-expired-vblank-interruptible@b-dp2:
- shard-bmg: NOTRUN -> [FAIL][76] ([Intel XE#3321]) +2 other tests fail
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-7/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-dp2.html
* igt@kms_flip@flip-vs-expired-vblank@a-dp4:
- shard-dg2-set2: [PASS][77] -> [FAIL][78] ([Intel XE#301] / [Intel XE#3321]) +2 other tests fail
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-463/igt@kms_flip@flip-vs-expired-vblank@a-dp4.html
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@kms_flip@flip-vs-expired-vblank@a-dp4.html
* igt@kms_flip@wf_vblank-ts-check-interruptible:
- shard-lnl: [PASS][79] -> [FAIL][80] ([Intel XE#3149] / [Intel XE#886])
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-lnl-8/igt@kms_flip@wf_vblank-ts-check-interruptible.html
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-2/igt@kms_flip@wf_vblank-ts-check-interruptible.html
* igt@kms_flip@wf_vblank-ts-check-interruptible@c-edp1:
- shard-lnl: [PASS][81] -> [FAIL][82] ([Intel XE#886]) +2 other tests fail
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-lnl-8/igt@kms_flip@wf_vblank-ts-check-interruptible@c-edp1.html
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-2/igt@kms_flip@wf_vblank-ts-check-interruptible@c-edp1.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling:
- shard-dg2-set2: NOTRUN -> [SKIP][83] ([Intel XE#455]) +16 other tests skip
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-shrfb-pgflip-blt:
- shard-lnl: NOTRUN -> [SKIP][84] ([Intel XE#651]) +1 other test skip
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-draw-blt:
- shard-adlp: NOTRUN -> [SKIP][85] ([Intel XE#656]) +5 other tests skip
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][86] ([Intel XE#2312]) +7 other tests skip
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@drrs-indfb-scaledprimary:
- shard-dg2-set2: NOTRUN -> [SKIP][87] ([Intel XE#651]) +18 other tests skip
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@kms_frontbuffer_tracking@drrs-indfb-scaledprimary.html
* igt@kms_frontbuffer_tracking@fbc-1p-primscrn-indfb-pgflip-blt:
- shard-bmg: NOTRUN -> [SKIP][88] ([Intel XE#4141])
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-indfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-spr-indfb-move:
- shard-lnl: NOTRUN -> [SKIP][89] ([Intel XE#656]) +6 other tests skip
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-spr-indfb-move.html
* igt@kms_frontbuffer_tracking@fbcdrrs-modesetfrombusy:
- shard-bmg: NOTRUN -> [SKIP][90] ([Intel XE#2311]) +3 other tests skip
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcdrrs-modesetfrombusy.html
* igt@kms_frontbuffer_tracking@fbcdrrs-rgb101010-draw-render:
- shard-adlp: NOTRUN -> [SKIP][91] ([Intel XE#651]) +1 other test skip
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-8/igt@kms_frontbuffer_tracking@fbcdrrs-rgb101010-draw-render.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-cur-indfb-move:
- shard-bmg: NOTRUN -> [SKIP][92] ([Intel XE#2313]) +3 other tests skip
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-cur-indfb-move.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-shrfb-draw-mmap-wc:
- shard-dg2-set2: NOTRUN -> [SKIP][93] ([Intel XE#653]) +16 other tests skip
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-shrfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@plane-fbc-rte:
- shard-bmg: NOTRUN -> [SKIP][94] ([Intel XE#2350])
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_frontbuffer_tracking@plane-fbc-rte.html
* igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-shrfb-draw-blt:
- shard-adlp: NOTRUN -> [SKIP][95] ([Intel XE#653])
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-shrfb-draw-blt.html
* igt@kms_getfb@getfb2-accept-ccs:
- shard-adlp: NOTRUN -> [SKIP][96] ([Intel XE#1339])
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_getfb@getfb2-accept-ccs.html
* igt@kms_hdmi_inject@inject-4k:
- shard-lnl: NOTRUN -> [SKIP][97] ([Intel XE#1470])
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_hdmi_inject@inject-4k.html
* igt@kms_hdr@static-toggle-dpms:
- shard-lnl: NOTRUN -> [SKIP][98] ([Intel XE#1503]) +1 other test skip
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_hdr@static-toggle-dpms.html
* igt@kms_joiner@basic-ultra-joiner:
- shard-dg2-set2: NOTRUN -> [SKIP][99] ([Intel XE#2927])
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@kms_joiner@basic-ultra-joiner.html
* igt@kms_plane_cursor@primary@pipe-a-hdmi-a-2-size-256:
- shard-dg2-set2: NOTRUN -> [FAIL][100] ([Intel XE#616]) +2 other tests fail
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-432/igt@kms_plane_cursor@primary@pipe-a-hdmi-a-2-size-256.html
* igt@kms_plane_cursor@viewport:
- shard-dg2-set2: [PASS][101] -> [FAIL][102] ([Intel XE#616]) +1 other test fail
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-435/igt@kms_plane_cursor@viewport.html
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@kms_plane_cursor@viewport.html
* igt@kms_pm_backlight@fade:
- shard-dg2-set2: NOTRUN -> [SKIP][103] ([Intel XE#870]) +2 other tests skip
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_pm_backlight@fade.html
* igt@kms_pm_lpsp@kms-lpsp:
- shard-bmg: NOTRUN -> [SKIP][104] ([Intel XE#2499])
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_pm_lpsp@kms-lpsp.html
* igt@kms_pm_rpm@modeset-non-lpsp:
- shard-lnl: NOTRUN -> [SKIP][105] ([Intel XE#1439] / [Intel XE#3141])
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_pm_rpm@modeset-non-lpsp.html
- shard-adlp: NOTRUN -> [SKIP][106] ([Intel XE#836])
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-8/igt@kms_pm_rpm@modeset-non-lpsp.html
* igt@kms_psr2_sf@fbc-pr-cursor-plane-update-sf:
- shard-lnl: NOTRUN -> [SKIP][107] ([Intel XE#2893]) +1 other test skip
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_psr2_sf@fbc-pr-cursor-plane-update-sf.html
* igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-sf:
- shard-bmg: NOTRUN -> [SKIP][108] ([Intel XE#1489])
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-sf.html
* igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf:
- shard-dg2-set2: NOTRUN -> [SKIP][109] ([Intel XE#1489]) +5 other tests skip
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf.html
* igt@kms_psr2_sf@psr2-primary-plane-update-sf-dmg-area:
- shard-adlp: NOTRUN -> [SKIP][110] ([Intel XE#1489]) +1 other test skip
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_psr2_sf@psr2-primary-plane-update-sf-dmg-area.html
* igt@kms_psr@fbc-pr-cursor-render:
- shard-bmg: NOTRUN -> [SKIP][111] ([Intel XE#2234] / [Intel XE#2850]) +2 other tests skip
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_psr@fbc-pr-cursor-render.html
* igt@kms_psr@fbc-pr-sprite-render:
- shard-adlp: NOTRUN -> [SKIP][112] ([Intel XE#2850] / [Intel XE#929]) +1 other test skip
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_psr@fbc-pr-sprite-render.html
* igt@kms_psr@fbc-psr-no-drrs:
- shard-dg2-set2: NOTRUN -> [SKIP][113] ([Intel XE#2850] / [Intel XE#929]) +9 other tests skip
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_psr@fbc-psr-no-drrs.html
* igt@kms_rotation_crc@primary-4-tiled-reflect-x-180:
- shard-lnl: NOTRUN -> [SKIP][114] ([Intel XE#3414] / [Intel XE#3904])
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_rotation_crc@primary-4-tiled-reflect-x-180.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-270:
- shard-adlp: NOTRUN -> [SKIP][115] ([Intel XE#3414])
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_rotation_crc@primary-y-tiled-reflect-x-270.html
* igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90:
- shard-dg2-set2: NOTRUN -> [SKIP][116] ([Intel XE#3414])
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90.html
* igt@kms_setmode@basic-clone-single-crtc:
- shard-lnl: NOTRUN -> [SKIP][117] ([Intel XE#1435])
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_setmode@basic-clone-single-crtc.html
* igt@kms_setmode@invalid-clone-single-crtc:
- shard-bmg: [PASS][118] -> [SKIP][119] ([Intel XE#1435])
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-5/igt@kms_setmode@invalid-clone-single-crtc.html
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_setmode@invalid-clone-single-crtc.html
* igt@kms_vrr@flip-suspend:
- shard-bmg: NOTRUN -> [SKIP][120] ([Intel XE#1499])
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_vrr@flip-suspend.html
* igt@kms_vrr@lobf:
- shard-adlp: NOTRUN -> [SKIP][121] ([Intel XE#2168])
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_vrr@lobf.html
* igt@kms_writeback@writeback-fb-id-xrgb2101010:
- shard-lnl: NOTRUN -> [SKIP][122] ([Intel XE#756])
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_writeback@writeback-fb-id-xrgb2101010.html
* igt@xe_ccs@ctrl-surf-copy-new-ctx:
- shard-adlp: NOTRUN -> [SKIP][123] ([Intel XE#455] / [Intel XE#488])
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@xe_ccs@ctrl-surf-copy-new-ctx.html
* igt@xe_compute_preempt@compute-preempt:
- shard-dg2-set2: NOTRUN -> [SKIP][124] ([Intel XE#1280] / [Intel XE#455]) +1 other test skip
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@xe_compute_preempt@compute-preempt.html
* igt@xe_eudebug@basic-vm-access-parameters:
- shard-dg2-set2: NOTRUN -> [SKIP][125] ([Intel XE#2905]) +9 other tests skip
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@xe_eudebug@basic-vm-access-parameters.html
* igt@xe_eudebug_online@basic-breakpoint:
- shard-bmg: NOTRUN -> [SKIP][126] ([Intel XE#2905]) +2 other tests skip
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@xe_eudebug_online@basic-breakpoint.html
* igt@xe_eudebug_online@single-step-one:
- shard-adlp: NOTRUN -> [SKIP][127] ([Intel XE#2905]) +2 other tests skip
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-8/igt@xe_eudebug_online@single-step-one.html
- shard-lnl: NOTRUN -> [SKIP][128] ([Intel XE#2905]) +2 other tests skip
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@xe_eudebug_online@single-step-one.html
* igt@xe_evict@evict-beng-large-multi-vm:
- shard-lnl: NOTRUN -> [SKIP][129] ([Intel XE#688])
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@xe_evict@evict-beng-large-multi-vm.html
* igt@xe_evict@evict-large-multi-vm:
- shard-dg2-set2: NOTRUN -> [DMESG-WARN][130] ([Intel XE#1033] / [Intel XE#1473])
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@xe_evict@evict-large-multi-vm.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-basic:
- shard-lnl: NOTRUN -> [SKIP][131] ([Intel XE#1392]) +1 other test skip
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-basic.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-null-rebind:
- shard-bmg: NOTRUN -> [SKIP][132] ([Intel XE#2322]) +1 other test skip
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-null-rebind.html
* igt@xe_exec_basic@multigpu-no-exec-null:
- shard-adlp: NOTRUN -> [SKIP][133] ([Intel XE#1392]) +1 other test skip
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-8/igt@xe_exec_basic@multigpu-no-exec-null.html
* igt@xe_exec_basic@multigpu-once-null-rebind:
- shard-dg2-set2: [PASS][134] -> [SKIP][135] ([Intel XE#1392]) +7 other tests skip
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-433/igt@xe_exec_basic@multigpu-once-null-rebind.html
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-432/igt@xe_exec_basic@multigpu-once-null-rebind.html
* igt@xe_exec_fault_mode@once-bindexecqueue-rebind:
- shard-dg2-set2: NOTRUN -> [SKIP][136] ([Intel XE#288]) +17 other tests skip
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-463/igt@xe_exec_fault_mode@once-bindexecqueue-rebind.html
* igt@xe_exec_fault_mode@once-bindexecqueue-rebind-imm:
- shard-adlp: NOTRUN -> [SKIP][137] ([Intel XE#288]) +4 other tests skip
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@xe_exec_fault_mode@once-bindexecqueue-rebind-imm.html
* igt@xe_exec_threads@threads-bal-rebind:
- shard-lnl: [PASS][138] -> [INCOMPLETE][139] ([Intel XE#1169])
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-lnl-4/igt@xe_exec_threads@threads-bal-rebind.html
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-3/igt@xe_exec_threads@threads-bal-rebind.html
* igt@xe_live_ktest@xe_bo:
- shard-bmg: [PASS][140] -> [SKIP][141] ([Intel XE#1192])
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-8/igt@xe_live_ktest@xe_bo.html
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@xe_live_ktest@xe_bo.html
* igt@xe_live_ktest@xe_dma_buf:
- shard-lnl: NOTRUN -> [SKIP][142] ([Intel XE#1192])
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@xe_live_ktest@xe_dma_buf.html
* igt@xe_module_load@force-load:
- shard-dg2-set2: NOTRUN -> [SKIP][143] ([Intel XE#378])
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@xe_module_load@force-load.html
* igt@xe_module_load@load:
- shard-dg2-set2: ([PASS][144], [PASS][145], [PASS][146], [PASS][147], [PASS][148], [PASS][149], [PASS][150], [PASS][151], [PASS][152], [PASS][153], [PASS][154], [PASS][155], [PASS][156], [PASS][157], [PASS][158], [PASS][159], [PASS][160], [PASS][161], [PASS][162], [PASS][163], [PASS][164], [PASS][165], [PASS][166], [PASS][167], [PASS][168]) -> ([PASS][169], [PASS][170], [PASS][171], [PASS][172], [PASS][173], [PASS][174], [PASS][175], [SKIP][176], [PASS][177], [PASS][178], [PASS][179], [PASS][180], [PASS][181], [PASS][182], [PASS][183], [PASS][184], [PASS][185], [PASS][186], [PASS][187], [PASS][188], [PASS][189], [PASS][190], [PASS][191], [PASS][192]) ([Intel XE#378])
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-432/igt@xe_module_load@load.html
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-432/igt@xe_module_load@load.html
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-463/igt@xe_module_load@load.html
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-436/igt@xe_module_load@load.html
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-435/igt@xe_module_load@load.html
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-435/igt@xe_module_load@load.html
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-434/igt@xe_module_load@load.html
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-435/igt@xe_module_load@load.html
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-463/igt@xe_module_load@load.html
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-433/igt@xe_module_load@load.html
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-432/igt@xe_module_load@load.html
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-436/igt@xe_module_load@load.html
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-434/igt@xe_module_load@load.html
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-434/igt@xe_module_load@load.html
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-434/igt@xe_module_load@load.html
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-434/igt@xe_module_load@load.html
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-436/igt@xe_module_load@load.html
[161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-436/igt@xe_module_load@load.html
[162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-463/igt@xe_module_load@load.html
[163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-463/igt@xe_module_load@load.html
[164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-435/igt@xe_module_load@load.html
[165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-432/igt@xe_module_load@load.html
[166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-432/igt@xe_module_load@load.html
[167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-433/igt@xe_module_load@load.html
[168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-433/igt@xe_module_load@load.html
[169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-436/igt@xe_module_load@load.html
[170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-436/igt@xe_module_load@load.html
[171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-436/igt@xe_module_load@load.html
[172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@xe_module_load@load.html
[173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@xe_module_load@load.html
[174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@xe_module_load@load.html
[175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-436/igt@xe_module_load@load.html
[176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@xe_module_load@load.html
[177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@xe_module_load@load.html
[178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@xe_module_load@load.html
[179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@xe_module_load@load.html
[180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@xe_module_load@load.html
[181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@xe_module_load@load.html
[182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-463/igt@xe_module_load@load.html
[183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-463/igt@xe_module_load@load.html
[184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-463/igt@xe_module_load@load.html
[185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-463/igt@xe_module_load@load.html
[186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@xe_module_load@load.html
[187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@xe_module_load@load.html
[188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@xe_module_load@load.html
[189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-432/igt@xe_module_load@load.html
[190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-432/igt@xe_module_load@load.html
[191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-432/igt@xe_module_load@load.html
[192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-432/igt@xe_module_load@load.html
* igt@xe_oa@non-system-wide-paranoid:
- shard-dg2-set2: NOTRUN -> [SKIP][193] ([Intel XE#2541] / [Intel XE#3573]) +1 other test skip
[193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@xe_oa@non-system-wide-paranoid.html
* igt@xe_oa@oa-exponents:
- shard-adlp: NOTRUN -> [SKIP][194] ([Intel XE#2541] / [Intel XE#3573]) +2 other tests skip
[194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@xe_oa@oa-exponents.html
* igt@xe_pat@pat-index-xelpg:
- shard-dg2-set2: NOTRUN -> [SKIP][195] ([Intel XE#979])
[195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@xe_pat@pat-index-xelpg.html
* igt@xe_peer2peer@write:
- shard-bmg: NOTRUN -> [SKIP][196] ([Intel XE#2427])
[196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@xe_peer2peer@write.html
* igt@xe_pm@d3cold-multiple-execs:
- shard-dg2-set2: NOTRUN -> [SKIP][197] ([Intel XE#2284] / [Intel XE#366])
[197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@xe_pm@d3cold-multiple-execs.html
* igt@xe_pm@s2idle-basic:
- shard-dg2-set2: [PASS][198] -> [ABORT][199] ([Intel XE#1358] / [Intel XE#1794]) +1 other test abort
[198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-463/igt@xe_pm@s2idle-basic.html
[199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-432/igt@xe_pm@s2idle-basic.html
* igt@xe_pm@s2idle-vm-bind-prefetch:
- shard-adlp: [PASS][200] -> [DMESG-WARN][201] ([Intel XE#2953])
[200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-adlp-6/igt@xe_pm@s2idle-vm-bind-prefetch.html
[201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-6/igt@xe_pm@s2idle-vm-bind-prefetch.html
* igt@xe_pm@s4-mocs:
- shard-adlp: [PASS][202] -> [ABORT][203] ([Intel XE#1358] / [Intel XE#1794])
[202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-adlp-3/igt@xe_pm@s4-mocs.html
[203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-9/igt@xe_pm@s4-mocs.html
* igt@xe_query@multigpu-query-invalid-cs-cycles:
- shard-dg2-set2: NOTRUN -> [SKIP][204] ([Intel XE#944]) +1 other test skip
[204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@xe_query@multigpu-query-invalid-cs-cycles.html
* igt@xe_query@multigpu-query-mem-usage:
- shard-bmg: NOTRUN -> [SKIP][205] ([Intel XE#944])
[205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@xe_query@multigpu-query-mem-usage.html
* igt@xe_vm@bind-array-many:
- shard-bmg: [PASS][206] -> [DMESG-WARN][207] ([Intel XE#1033]) +2 other tests dmesg-warn
[206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@xe_vm@bind-array-many.html
[207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-7/igt@xe_vm@bind-array-many.html
#### Possible fixes ####
* igt@kms_atomic_transition@plane-all-modeset-transition@pipe-a-hdmi-a-1:
- shard-adlp: [FAIL][208] ([Intel XE#3908]) -> [PASS][209] +1 other test pass
[208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-adlp-8/igt@kms_atomic_transition@plane-all-modeset-transition@pipe-a-hdmi-a-1.html
[209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@kms_atomic_transition@plane-all-modeset-transition@pipe-a-hdmi-a-1.html
* igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
- shard-adlp: [DMESG-FAIL][210] ([Intel XE#1033]) -> [PASS][211] +1 other test pass
[210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-adlp-6/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html
[211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-6/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html
* igt@kms_bw@connected-linear-tiling-2-displays-1920x1080p:
- shard-bmg: [SKIP][212] ([Intel XE#2314] / [Intel XE#2894]) -> [PASS][213] +1 other test pass
[212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-1920x1080p.html
[213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-7/igt@kms_bw@connected-linear-tiling-2-displays-1920x1080p.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs:
- shard-dg2-set2: [INCOMPLETE][214] ([Intel XE#1727] / [Intel XE#3124] / [Intel XE#4010]) -> [PASS][215]
[214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-435/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
[215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-dp-4:
- shard-dg2-set2: [DMESG-WARN][216] ([Intel XE#1727] / [Intel XE#3113]) -> [PASS][217]
[216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-435/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-dp-4.html
[217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-dp-4.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-b-hdmi-a-6:
- shard-dg2-set2: [INCOMPLETE][218] ([Intel XE#3124] / [Intel XE#4010]) -> [PASS][219]
[218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-435/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-b-hdmi-a-6.html
[219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-b-hdmi-a-6.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-varying-size:
- shard-bmg: [SKIP][220] ([Intel XE#2291]) -> [PASS][221] +4 other tests pass
[220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipb-varying-size.html
[221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-3/igt@kms_cursor_legacy@cursorb-vs-flipb-varying-size.html
* igt@kms_dither@fb-8bpc-vs-panel-6bpc:
- shard-bmg: [SKIP][222] ([Intel XE#1340]) -> [PASS][223]
[222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_dither@fb-8bpc-vs-panel-6bpc.html
[223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-3/igt@kms_dither@fb-8bpc-vs-panel-6bpc.html
* igt@kms_feature_discovery@display-2x:
- shard-bmg: [SKIP][224] ([Intel XE#2373]) -> [PASS][225]
[224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_feature_discovery@display-2x.html
[225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-3/igt@kms_feature_discovery@display-2x.html
* igt@kms_flip@2x-plain-flip-ts-check-interruptible:
- shard-bmg: [SKIP][226] ([Intel XE#2316]) -> [PASS][227] +4 other tests pass
[226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_flip@2x-plain-flip-ts-check-interruptible.html
[227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-3/igt@kms_flip@2x-plain-flip-ts-check-interruptible.html
* igt@kms_flip@flip-vs-expired-vblank-interruptible@a-hdmi-a3:
- shard-bmg: [FAIL][228] ([Intel XE#3321]) -> [PASS][229] +1 other test pass
[228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-hdmi-a3.html
[229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-7/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-hdmi-a3.html
* igt@kms_flip@flip-vs-expired-vblank@c-hdmi-a6:
- shard-dg2-set2: [FAIL][230] ([Intel XE#301]) -> [PASS][231]
[230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-463/igt@kms_flip@flip-vs-expired-vblank@c-hdmi-a6.html
[231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@kms_flip@flip-vs-expired-vblank@c-hdmi-a6.html
* igt@kms_flip@flip-vs-suspend-interruptible:
- shard-dg2-set2: [INCOMPLETE][232] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][233] +1 other test pass
[232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-435/igt@kms_flip@flip-vs-suspend-interruptible.html
[233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@kms_flip@flip-vs-suspend-interruptible.html
* igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a1:
- shard-adlp: [DMESG-WARN][234] ([Intel XE#2953]) -> [PASS][235] +1 other test pass
[234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-adlp-3/igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a1.html
[235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-9/igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a1.html
* igt@kms_hdr@invalid-hdr:
- shard-bmg: [SKIP][236] ([Intel XE#1503]) -> [PASS][237]
[236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-1/igt@kms_hdr@invalid-hdr.html
[237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-8/igt@kms_hdr@invalid-hdr.html
* igt@kms_plane_cursor@overlay@pipe-a-hdmi-a-6-size-64:
- shard-dg2-set2: [FAIL][238] ([Intel XE#616]) -> [PASS][239] +1 other test pass
[238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-463/igt@kms_plane_cursor@overlay@pipe-a-hdmi-a-6-size-64.html
[239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-433/igt@kms_plane_cursor@overlay@pipe-a-hdmi-a-6-size-64.html
* igt@kms_plane_scaling@2x-scaler-multi-pipe:
- shard-bmg: [SKIP][240] ([Intel XE#2571]) -> [PASS][241]
[240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_plane_scaling@2x-scaler-multi-pipe.html
[241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-3/igt@kms_plane_scaling@2x-scaler-multi-pipe.html
* igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-sf@pipe-b-edp-1:
- shard-lnl: [INCOMPLETE][242] -> [PASS][243] +1 other test pass
[242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-lnl-4/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-sf@pipe-b-edp-1.html
[243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-7/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-sf@pipe-b-edp-1.html
* igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1:
- shard-lnl: [FAIL][244] ([Intel XE#899]) -> [PASS][245] +1 other test pass
[244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-lnl-2/igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1.html
[245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1.html
* igt@xe_ccs@block-copy-uncompressed-inc-dimension@tile64-uncompressed-compfmt0-vram01-vram01-346x346:
- shard-dg2-set2: [DMESG-WARN][246] ([Intel XE#1033]) -> [PASS][247] +3 other tests pass
[246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-434/igt@xe_ccs@block-copy-uncompressed-inc-dimension@tile64-uncompressed-compfmt0-vram01-vram01-346x346.html
[247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-436/igt@xe_ccs@block-copy-uncompressed-inc-dimension@tile64-uncompressed-compfmt0-vram01-vram01-346x346.html
* igt@xe_exec_basic@multigpu-no-exec-null-defer-mmap:
- shard-dg2-set2: [SKIP][248] ([Intel XE#1392]) -> [PASS][249] +5 other tests pass
[248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-432/igt@xe_exec_basic@multigpu-no-exec-null-defer-mmap.html
[249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@xe_exec_basic@multigpu-no-exec-null-defer-mmap.html
* igt@xe_exec_reset@cm-gt-reset:
- shard-bmg: [INCOMPLETE][250] ([Intel XE#3592]) -> [PASS][251]
[250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-8/igt@xe_exec_reset@cm-gt-reset.html
[251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@xe_exec_reset@cm-gt-reset.html
* igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_execute:
- shard-adlp: [DMESG-WARN][252] -> [PASS][253]
[252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-adlp-8/igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_execute.html
[253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-6/igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_execute.html
* igt@xe_mmap@bad-flags:
- shard-bmg: [DMESG-WARN][254] ([Intel XE#1033]) -> [PASS][255] +10 other tests pass
[254]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-5/igt@xe_mmap@bad-flags.html
[255]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-5/igt@xe_mmap@bad-flags.html
* igt@xe_pm@s2idle-basic-exec:
- shard-dg2-set2: [ABORT][256] ([Intel XE#1358]) -> [PASS][257] +1 other test pass
[256]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-432/igt@xe_pm@s2idle-basic-exec.html
[257]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-463/igt@xe_pm@s2idle-basic-exec.html
* igt@xe_pm@s4-exec-after:
- shard-adlp: [ABORT][258] ([Intel XE#1358] / [Intel XE#1607]) -> [PASS][259]
[258]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-adlp-9/igt@xe_pm@s4-exec-after.html
[259]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-8/igt@xe_pm@s4-exec-after.html
- shard-lnl: [ABORT][260] ([Intel XE#1358] / [Intel XE#1607]) -> [PASS][261]
[260]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-lnl-2/igt@xe_pm@s4-exec-after.html
[261]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@xe_pm@s4-exec-after.html
* igt@xe_pm@s4-mocs:
- shard-lnl: [ABORT][262] ([Intel XE#1358] / [Intel XE#1794]) -> [PASS][263]
[262]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-lnl-2/igt@xe_pm@s4-mocs.html
[263]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-lnl-4/igt@xe_pm@s4-mocs.html
* igt@xe_pm@s4-vm-bind-userptr:
- shard-adlp: [ABORT][264] ([Intel XE#1358] / [Intel XE#1794]) -> [PASS][265]
[264]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-adlp-9/igt@xe_pm@s4-vm-bind-userptr.html
[265]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-adlp-2/igt@xe_pm@s4-vm-bind-userptr.html
* igt@xe_pm_residency@gt-c6-freeze:
- shard-dg2-set2: [ABORT][266] -> [PASS][267] +1 other test pass
[266]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-432/igt@xe_pm_residency@gt-c6-freeze.html
[267]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@xe_pm_residency@gt-c6-freeze.html
#### Warnings ####
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs:
- shard-dg2-set2: [DMESG-WARN][268] -> [INCOMPLETE][269] ([Intel XE#3862])
[268]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-463/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs.html
[269]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs.html
* igt@kms_content_protection@atomic:
- shard-bmg: [SKIP][270] ([Intel XE#2341]) -> [FAIL][271] ([Intel XE#1178])
[270]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_content_protection@atomic.html
[271]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-7/igt@kms_content_protection@atomic.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-varying-size:
- shard-dg2-set2: [INCOMPLETE][272] ([Intel XE#3226]) -> [DMESG-WARN][273] ([Intel XE#1033])
[272]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-435/igt@kms_cursor_legacy@cursorb-vs-flipa-varying-size.html
[273]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@kms_cursor_legacy@cursorb-vs-flipa-varying-size.html
- shard-bmg: [DMESG-WARN][274] ([Intel XE#877]) -> [SKIP][275] ([Intel XE#2291])
[274]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-5/igt@kms_cursor_legacy@cursorb-vs-flipa-varying-size.html
[275]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipa-varying-size.html
* igt@kms_flip@2x-dpms-vs-vblank-race-interruptible:
- shard-bmg: [SKIP][276] ([Intel XE#2316]) -> [DMESG-WARN][277] ([Intel XE#1033]) +1 other test dmesg-warn
[276]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_flip@2x-dpms-vs-vblank-race-interruptible.html
[277]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-7/igt@kms_flip@2x-dpms-vs-vblank-race-interruptible.html
* igt@kms_flip@2x-flip-vs-expired-vblank:
- shard-bmg: [SKIP][278] ([Intel XE#2316]) -> [FAIL][279] ([Intel XE#3321])
[278]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_flip@2x-flip-vs-expired-vblank.html
[279]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-3/igt@kms_flip@2x-flip-vs-expired-vblank.html
* igt@kms_flip@2x-modeset-vs-vblank-race-interruptible:
- shard-bmg: [DMESG-WARN][280] ([Intel XE#1033]) -> [SKIP][281] ([Intel XE#2316])
[280]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-1/igt@kms_flip@2x-modeset-vs-vblank-race-interruptible.html
[281]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_flip@2x-modeset-vs-vblank-race-interruptible.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt:
- shard-bmg: [SKIP][282] ([Intel XE#2311]) -> [SKIP][283] ([Intel XE#2312]) +15 other tests skip
[282]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-5/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt.html
[283]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-shrfb-msflip-blt:
- shard-bmg: [SKIP][284] ([Intel XE#4141]) -> [SKIP][285] ([Intel XE#2312]) +6 other tests skip
[284]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-5/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-shrfb-msflip-blt.html
[285]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-shrfb-msflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][286] ([Intel XE#2312]) -> [SKIP][287] ([Intel XE#4141]) +7 other tests skip
[286]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc.html
[287]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][288] ([Intel XE#2312]) -> [SKIP][289] ([Intel XE#2311]) +17 other tests skip
[288]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
[289]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-pri-indfb-multidraw:
- shard-bmg: [SKIP][290] ([Intel XE#2312]) -> [SKIP][291] ([Intel XE#2313]) +16 other tests skip
[290]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-pri-indfb-multidraw.html
[291]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcpsr-2p-pri-indfb-multidraw.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-blt:
- shard-bmg: [SKIP][292] ([Intel XE#2313]) -> [SKIP][293] ([Intel XE#2312]) +16 other tests skip
[292]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-blt.html
[293]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-blt.html
* igt@kms_hdr@brightness-with-hdr:
- shard-bmg: [SKIP][294] ([Intel XE#3544]) -> [SKIP][295] ([Intel XE#3374] / [Intel XE#3544])
[294]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-bmg-6/igt@kms_hdr@brightness-with-hdr.html
[295]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-bmg-7/igt@kms_hdr@brightness-with-hdr.html
* igt@xe_peer2peer@read:
- shard-dg2-set2: [FAIL][296] ([Intel XE#1173]) -> [SKIP][297] ([Intel XE#1061])
[296]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-433/igt@xe_peer2peer@read.html
[297]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-432/igt@xe_peer2peer@read.html
* igt@xe_pm@s3-basic:
- shard-dg2-set2: [ABORT][298] ([Intel XE#1033] / [Intel XE#1358] / [Intel XE#1794]) -> [DMESG-WARN][299] ([Intel XE#1033] / [Intel XE#569])
[298]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-432/igt@xe_pm@s3-basic.html
[299]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-434/igt@xe_pm@s3-basic.html
* igt@xe_pm@s3-exec-after:
- shard-dg2-set2: [ABORT][300] ([Intel XE#1358]) -> [DMESG-WARN][301] ([Intel XE#1033] / [Intel XE#569])
[300]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-432/igt@xe_pm@s3-exec-after.html
[301]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-435/igt@xe_pm@s3-exec-after.html
* igt@xe_pm@s3-vm-bind-prefetch:
- shard-dg2-set2: [DMESG-WARN][302] ([Intel XE#1033] / [Intel XE#569]) -> [ABORT][303] ([Intel XE#1033] / [Intel XE#1358] / [Intel XE#1794])
[302]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4/shard-dg2-433/igt@xe_pm@s3-vm-bind-prefetch.html
[303]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/shard-dg2-432/igt@xe_pm@s3-vm-bind-prefetch.html
[Intel XE#1033]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1033
[Intel XE#1061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1061
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1138]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1138
[Intel XE#1169]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1169
[Intel XE#1173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1173
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1192]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1192
[Intel XE#1280]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1280
[Intel XE#1339]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1339
[Intel XE#1340]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1340
[Intel XE#1358]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1358
[Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
[Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407
[Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
[Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
[Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
[Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439
[Intel XE#1470]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1470
[Intel XE#1473]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1473
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1499
[Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
[Intel XE#1607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1607
[Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
[Intel XE#1794]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1794
[Intel XE#1874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1874
[Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
[Intel XE#2168]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2168
[Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
[Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
[Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
[Intel XE#2350]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2350
[Intel XE#2373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2373
[Intel XE#2425]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2425
[Intel XE#2427]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2427
[Intel XE#2499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2499
[Intel XE#2541]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2541
[Intel XE#2571]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2571
[Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#2893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2893
[Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
[Intel XE#2905]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2905
[Intel XE#2927]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2927
[Intel XE#2953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2953
[Intel XE#3009]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3009
[Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
[Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
[Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307
[Intel XE#308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/308
[Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309
[Intel XE#310]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/310
[Intel XE#3113]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3113
[Intel XE#3124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3124
[Intel XE#314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/314
[Intel XE#3141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3141
[Intel XE#3149]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3149
[Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
[Intel XE#3226]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3226
[Intel XE#323]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/323
[Intel XE#3321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3321
[Intel XE#3374]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3374
[Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
[Intel XE#3442]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3442
[Intel XE#3544]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3544
[Intel XE#3573]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3573
[Intel XE#3592]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3592
[Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
[Intel XE#378]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/378
[Intel XE#3862]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3862
[Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904
[Intel XE#3908]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3908
[Intel XE#4010]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4010
[Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141
[Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
[Intel XE#488]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/488
[Intel XE#569]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/569
[Intel XE#607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/607
[Intel XE#616]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/616
[Intel XE#623]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/623
[Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
[Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
[Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
[Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
[Intel XE#703]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/703
[Intel XE#756]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/756
[Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
[Intel XE#836]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/836
[Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
[Intel XE#877]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/877
[Intel XE#886]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/886
[Intel XE#899]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/899
[Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
[Intel XE#979]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/979
Build changes
-------------
* Linux: xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4 -> xe-pw-143883v1
IGT_8207: 9f36f9f9e8825a67b762630c2b31628ddcda5c10 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-2536-33906a7032dfca356fd7309e6abbdf5a7ea97db4: 33906a7032dfca356fd7309e6abbdf5a7ea97db4
xe-pw-143883v1: 143883v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-143883v1/index.html
Thread overview: 35+ messages
2025-01-22 14:08 [PATCH] drm/sched: Use struct for drm_sched_init() params Philipp Stanner
2025-01-22 14:30 ` Danilo Krummrich
2025-01-22 14:34 ` Christian König
2025-01-22 14:48 ` Philipp Stanner
2025-01-22 15:02 ` Matthew Brost
2025-01-22 15:06 ` Christian König
2025-01-22 15:23 ` Philipp Stanner
2025-01-22 15:37 ` Christian König
2025-01-22 15:29 ` Matthew Brost
2025-01-22 15:51 ` Boris Brezillon
2025-01-22 16:14 ` Tvrtko Ursulin
2025-01-22 17:04 ` Boris Brezillon
2025-01-23 4:37 ` Matthew Brost
2025-01-23 7:34 ` Philipp Stanner
2025-01-22 17:16 ` Boris Brezillon
2025-01-23 7:33 ` Philipp Stanner
2025-01-23 8:23 ` Boris Brezillon
2025-01-23 9:29 ` Danilo Krummrich
2025-01-23 9:35 ` Philipp Stanner
2025-01-23 9:55 ` Danilo Krummrich
2025-01-23 10:57 ` Tvrtko Ursulin
2025-01-22 22:07 ` Maíra Canal
2025-01-23 8:10 ` Philipp Stanner
2025-01-23 8:39 ` Philipp Stanner
2025-01-23 11:10 ` Maíra Canal
2025-01-23 12:13 ` Philipp Stanner
2025-01-23 12:29 ` Maíra Canal
2025-01-23 10:55 ` ✓ CI.Patch_applied: success for " Patchwork
2025-01-23 10:55 ` ✗ CI.checkpatch: warning " Patchwork
2025-01-23 10:56 ` ✓ CI.KUnit: success " Patchwork
2025-01-23 11:13 ` ✓ CI.Build: " Patchwork
2025-01-23 11:15 ` ✓ CI.Hooks: " Patchwork
2025-01-23 11:17 ` ✓ CI.checksparse: " Patchwork
2025-01-23 11:45 ` ✓ Xe.CI.BAT: " Patchwork
2025-01-24 0:02 ` ✗ Xe.CI.Full: failure " Patchwork