* [PATCH 0/2] Make scheduling of the same index, the same @ 2023-11-24 5:27 Luben Tuikov 2023-11-24 5:27 ` [PATCH 1/2] drm/sched: Rename priority MIN to LOW Luben Tuikov 2023-11-24 5:27 ` [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration Luben Tuikov 0 siblings, 2 replies; 12+ messages in thread From: Luben Tuikov @ 2023-11-24 5:27 UTC (permalink / raw) To: Direct Rendering Infrastructure - Development Cc: Luben Tuikov, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Danilo Krummrich, Alex Deucher, Christian König, linux-arm-msm, freedreno The first patch renames priority MIN to LOW. The second patch makes the "priority" of the same run-queue index in any two schedulers, the same. This series sits on top of this fix https://patchwork.freedesktop.org/patch/568723/ which I sent yesterday. Luben Tuikov (2): drm/sched: Rename priority MIN to LOW drm/sched: Reverse run-queue priority enumeration drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 4 ++-- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- drivers/gpu/drm/msm/msm_gpu.h | 2 +- drivers/gpu/drm/scheduler/sched_entity.c | 7 ++++--- drivers/gpu/drm/scheduler/sched_main.c | 15 +++++++-------- include/drm/gpu_scheduler.h | 6 +++--- 6 files changed, 18 insertions(+), 18 deletions(-) Cc: Rob Clark <robdclark@gmail.com> Cc: Abhinav Kumar <quic_abhinavk@quicinc.com> Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Cc: Danilo Krummrich <dakr@redhat.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: Christian König <christian.koenig@amd.com> Cc: linux-arm-msm@vger.kernel.org Cc: freedreno@lists.freedesktop.org Cc: dri-devel@lists.freedesktop.org base-commit: e4d983acffff270ccee417445a69b9ed198658b1 prerequisite-patch-id: d0fec7c91768937b5e22ce9508017e5b9d462000 -- 2.43.0 ^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 1/2] drm/sched: Rename priority MIN to LOW 2023-11-24 5:27 [PATCH 0/2] Make scheduling of the same index, the same Luben Tuikov @ 2023-11-24 5:27 ` Luben Tuikov 2023-11-24 7:57 ` Christian König 2023-11-24 5:27 ` [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration Luben Tuikov 1 sibling, 1 reply; 12+ messages in thread From: Luben Tuikov @ 2023-11-24 5:27 UTC (permalink / raw) To: Direct Rendering Infrastructure - Development Cc: Luben Tuikov, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Danilo Krummrich, Alex Deucher, Christian König, linux-arm-msm, freedreno Rename DRM_SCHED_PRIORITY_MIN to DRM_SCHED_PRIORITY_LOW. This mirrors DRM_SCHED_PRIORITY_HIGH, for a list of DRM scheduler priorities in ascending order, DRM_SCHED_PRIORITY_LOW, DRM_SCHED_PRIORITY_NORMAL, DRM_SCHED_PRIORITY_HIGH, DRM_SCHED_PRIORITY_KERNEL. Cc: Rob Clark <robdclark@gmail.com> Cc: Abhinav Kumar <quic_abhinavk@quicinc.com> Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Cc: Danilo Krummrich <dakr@redhat.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: Christian König <christian.koenig@amd.com> Cc: linux-arm-msm@vger.kernel.org Cc: freedreno@lists.freedesktop.org Cc: dri-devel@lists.freedesktop.org Signed-off-by: Luben Tuikov <ltuikov89@gmail.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 4 ++-- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- drivers/gpu/drm/msm/msm_gpu.h | 2 +- drivers/gpu/drm/scheduler/sched_entity.c | 2 +- drivers/gpu/drm/scheduler/sched_main.c | 10 +++++----- include/drm/gpu_scheduler.h | 2 +- 6 files changed, 11 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c index e2ae9ba147ba97..5cb33ac99f7089 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c @@ -73,10 +73,10 @@ amdgpu_ctx_to_drm_sched_prio(int32_t ctx_prio) return DRM_SCHED_PRIORITY_NORMAL; case AMDGPU_CTX_PRIORITY_VERY_LOW: - return DRM_SCHED_PRIORITY_MIN; + return 
DRM_SCHED_PRIORITY_LOW; case AMDGPU_CTX_PRIORITY_LOW: - return DRM_SCHED_PRIORITY_MIN; + return DRM_SCHED_PRIORITY_LOW; case AMDGPU_CTX_PRIORITY_NORMAL: return DRM_SCHED_PRIORITY_NORMAL; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index 62bb7fc7448ad9..1a25931607c514 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -325,7 +325,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched) int i; /* Signal all jobs not yet scheduled */ - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { + for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { struct drm_sched_rq *rq = sched->sched_rq[i]; spin_lock(&rq->lock); list_for_each_entry(s_entity, &rq->entities, list) { diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 4252e3839fbc83..eb0c97433e5f8a 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -347,7 +347,7 @@ struct msm_gpu_perfcntr { * DRM_SCHED_PRIORITY_KERNEL priority level is treated specially in some * cases, so we don't use it (no need for kernel generated jobs). 
*/ -#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_MIN) +#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_LOW) /** * struct msm_file_private - per-drm_file context diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index 20c9c561843ce1..cb7445be3cbb4e 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -88,7 +88,7 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, drm_err(sched_list[0], "entity with out-of-bounds priority:%u num_rqs:%u\n", entity->priority, sched_list[0]->num_rqs); entity->priority = max_t(s32, (s32) sched_list[0]->num_rqs - 1, - (s32) DRM_SCHED_PRIORITY_MIN); + (s32) DRM_SCHED_PRIORITY_LOW); } entity->rq = sched_list[0]->sched_rq[entity->priority]; } diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 044a8c4875ba64..b6d7bc49ff6ef4 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -1052,7 +1052,7 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched) int i; /* Kernel run queue has higher priority than normal run queue*/ - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { + for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { entity = drm_sched_policy == DRM_SCHED_POLICY_FIFO ? 
drm_sched_rq_select_entity_fifo(sched, sched->sched_rq[i]) : drm_sched_rq_select_entity_rr(sched, sched->sched_rq[i]); @@ -1291,7 +1291,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, if (!sched->sched_rq) goto Out_free; sched->num_rqs = num_rqs; - for (i = DRM_SCHED_PRIORITY_MIN; i < sched->num_rqs; i++) { + for (i = DRM_SCHED_PRIORITY_LOW; i < sched->num_rqs; i++) { sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL); if (!sched->sched_rq[i]) goto Out_unroll; @@ -1312,7 +1312,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, sched->ready = true; return 0; Out_unroll: - for (--i ; i >= DRM_SCHED_PRIORITY_MIN; i--) + for (--i ; i >= DRM_SCHED_PRIORITY_LOW; i--) kfree(sched->sched_rq[i]); Out_free: kfree(sched->sched_rq); @@ -1338,7 +1338,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched) drm_sched_wqueue_stop(sched); - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { + for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { struct drm_sched_rq *rq = sched->sched_rq[i]; spin_lock(&rq->lock); @@ -1390,7 +1390,7 @@ void drm_sched_increase_karma(struct drm_sched_job *bad) if (bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) { atomic_inc(&bad->karma); - for (i = DRM_SCHED_PRIORITY_MIN; + for (i = DRM_SCHED_PRIORITY_LOW; i < min_t(typeof(sched->num_rqs), sched->num_rqs, DRM_SCHED_PRIORITY_KERNEL); i++) { struct drm_sched_rq *rq = sched->sched_rq[i]; diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 9a50348bd5c04e..d8e2d84d9223e3 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -63,7 +63,7 @@ struct drm_file; * to an array, and as such should start at 0. */ enum drm_sched_priority { - DRM_SCHED_PRIORITY_MIN, + DRM_SCHED_PRIORITY_LOW, DRM_SCHED_PRIORITY_NORMAL, DRM_SCHED_PRIORITY_HIGH, DRM_SCHED_PRIORITY_KERNEL, -- 2.43.0 ^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH 1/2] drm/sched: Rename priority MIN to LOW 2023-11-24 5:27 ` [PATCH 1/2] drm/sched: Rename priority MIN to LOW Luben Tuikov @ 2023-11-24 7:57 ` Christian König 2023-11-27 13:55 ` Christian König 0 siblings, 1 reply; 12+ messages in thread From: Christian König @ 2023-11-24 7:57 UTC (permalink / raw) To: Luben Tuikov, Direct Rendering Infrastructure - Development Cc: Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Danilo Krummrich, Alex Deucher, linux-arm-msm, freedreno Am 24.11.23 um 06:27 schrieb Luben Tuikov: > Rename DRM_SCHED_PRIORITY_MIN to DRM_SCHED_PRIORITY_LOW. > > This mirrors DRM_SCHED_PRIORITY_HIGH, for a list of DRM scheduler priorities > in ascending order, > DRM_SCHED_PRIORITY_LOW, > DRM_SCHED_PRIORITY_NORMAL, > DRM_SCHED_PRIORITY_HIGH, > DRM_SCHED_PRIORITY_KERNEL. > > Cc: Rob Clark <robdclark@gmail.com> > Cc: Abhinav Kumar <quic_abhinavk@quicinc.com> > Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> > Cc: Danilo Krummrich <dakr@redhat.com> > Cc: Alex Deucher <alexander.deucher@amd.com> > Cc: Christian König <christian.koenig@amd.com> > Cc: linux-arm-msm@vger.kernel.org > Cc: freedreno@lists.freedesktop.org > Cc: dri-devel@lists.freedesktop.org > Signed-off-by: Luben Tuikov <ltuikov89@gmail.com> Reviewed-by: Christian König <christian.koenig@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 4 ++-- > drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- > drivers/gpu/drm/msm/msm_gpu.h | 2 +- > drivers/gpu/drm/scheduler/sched_entity.c | 2 +- > drivers/gpu/drm/scheduler/sched_main.c | 10 +++++----- > include/drm/gpu_scheduler.h | 2 +- > 6 files changed, 11 insertions(+), 11 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c > index e2ae9ba147ba97..5cb33ac99f7089 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c > @@ -73,10 +73,10 @@ amdgpu_ctx_to_drm_sched_prio(int32_t ctx_prio) > return DRM_SCHED_PRIORITY_NORMAL; > > case 
AMDGPU_CTX_PRIORITY_VERY_LOW: > - return DRM_SCHED_PRIORITY_MIN; > + return DRM_SCHED_PRIORITY_LOW; > > case AMDGPU_CTX_PRIORITY_LOW: > - return DRM_SCHED_PRIORITY_MIN; > + return DRM_SCHED_PRIORITY_LOW; > > case AMDGPU_CTX_PRIORITY_NORMAL: > return DRM_SCHED_PRIORITY_NORMAL; > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c > index 62bb7fc7448ad9..1a25931607c514 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c > @@ -325,7 +325,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched) > int i; > > /* Signal all jobs not yet scheduled */ > - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { > + for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { > struct drm_sched_rq *rq = sched->sched_rq[i]; > spin_lock(&rq->lock); > list_for_each_entry(s_entity, &rq->entities, list) { > diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h > index 4252e3839fbc83..eb0c97433e5f8a 100644 > --- a/drivers/gpu/drm/msm/msm_gpu.h > +++ b/drivers/gpu/drm/msm/msm_gpu.h > @@ -347,7 +347,7 @@ struct msm_gpu_perfcntr { > * DRM_SCHED_PRIORITY_KERNEL priority level is treated specially in some > * cases, so we don't use it (no need for kernel generated jobs). 
> */ > -#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_MIN) > +#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_LOW) > > /** > * struct msm_file_private - per-drm_file context > diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c > index 20c9c561843ce1..cb7445be3cbb4e 100644 > --- a/drivers/gpu/drm/scheduler/sched_entity.c > +++ b/drivers/gpu/drm/scheduler/sched_entity.c > @@ -88,7 +88,7 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, > drm_err(sched_list[0], "entity with out-of-bounds priority:%u num_rqs:%u\n", > entity->priority, sched_list[0]->num_rqs); > entity->priority = max_t(s32, (s32) sched_list[0]->num_rqs - 1, > - (s32) DRM_SCHED_PRIORITY_MIN); > + (s32) DRM_SCHED_PRIORITY_LOW); > } > entity->rq = sched_list[0]->sched_rq[entity->priority]; > } > diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c > index 044a8c4875ba64..b6d7bc49ff6ef4 100644 > --- a/drivers/gpu/drm/scheduler/sched_main.c > +++ b/drivers/gpu/drm/scheduler/sched_main.c > @@ -1052,7 +1052,7 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched) > int i; > > /* Kernel run queue has higher priority than normal run queue*/ > - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { > + for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { > entity = drm_sched_policy == DRM_SCHED_POLICY_FIFO ? 
> drm_sched_rq_select_entity_fifo(sched, sched->sched_rq[i]) : > drm_sched_rq_select_entity_rr(sched, sched->sched_rq[i]); > @@ -1291,7 +1291,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, > if (!sched->sched_rq) > goto Out_free; > sched->num_rqs = num_rqs; > - for (i = DRM_SCHED_PRIORITY_MIN; i < sched->num_rqs; i++) { > + for (i = DRM_SCHED_PRIORITY_LOW; i < sched->num_rqs; i++) { > sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL); > if (!sched->sched_rq[i]) > goto Out_unroll; > @@ -1312,7 +1312,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, > sched->ready = true; > return 0; > Out_unroll: > - for (--i ; i >= DRM_SCHED_PRIORITY_MIN; i--) > + for (--i ; i >= DRM_SCHED_PRIORITY_LOW; i--) > kfree(sched->sched_rq[i]); > Out_free: > kfree(sched->sched_rq); > @@ -1338,7 +1338,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched) > > drm_sched_wqueue_stop(sched); > > - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { > + for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { > struct drm_sched_rq *rq = sched->sched_rq[i]; > > spin_lock(&rq->lock); > @@ -1390,7 +1390,7 @@ void drm_sched_increase_karma(struct drm_sched_job *bad) > if (bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) { > atomic_inc(&bad->karma); > > - for (i = DRM_SCHED_PRIORITY_MIN; > + for (i = DRM_SCHED_PRIORITY_LOW; > i < min_t(typeof(sched->num_rqs), sched->num_rqs, DRM_SCHED_PRIORITY_KERNEL); > i++) { > struct drm_sched_rq *rq = sched->sched_rq[i]; > diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h > index 9a50348bd5c04e..d8e2d84d9223e3 100644 > --- a/include/drm/gpu_scheduler.h > +++ b/include/drm/gpu_scheduler.h > @@ -63,7 +63,7 @@ struct drm_file; > * to an array, and as such should start at 0. 
> */ > enum drm_sched_priority { > - DRM_SCHED_PRIORITY_MIN, > + DRM_SCHED_PRIORITY_LOW, > DRM_SCHED_PRIORITY_NORMAL, > DRM_SCHED_PRIORITY_HIGH, > DRM_SCHED_PRIORITY_KERNEL, ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 1/2] drm/sched: Rename priority MIN to LOW 2023-11-24 7:57 ` Christian König @ 2023-11-27 13:55 ` Christian König 2023-11-27 14:13 ` Luben Tuikov 0 siblings, 1 reply; 12+ messages in thread From: Christian König @ 2023-11-27 13:55 UTC (permalink / raw) To: Luben Tuikov, Direct Rendering Infrastructure - Development Cc: Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Danilo Krummrich, Alex Deucher, linux-arm-msm, freedreno Hi Luben, Am 24.11.23 um 08:57 schrieb Christian König: > Am 24.11.23 um 06:27 schrieb Luben Tuikov: >> Rename DRM_SCHED_PRIORITY_MIN to DRM_SCHED_PRIORITY_LOW. >> >> This mirrors DRM_SCHED_PRIORITY_HIGH, for a list of DRM scheduler >> priorities >> in ascending order, >> DRM_SCHED_PRIORITY_LOW, >> DRM_SCHED_PRIORITY_NORMAL, >> DRM_SCHED_PRIORITY_HIGH, >> DRM_SCHED_PRIORITY_KERNEL. >> >> Cc: Rob Clark <robdclark@gmail.com> >> Cc: Abhinav Kumar <quic_abhinavk@quicinc.com> >> Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> >> Cc: Danilo Krummrich <dakr@redhat.com> >> Cc: Alex Deucher <alexander.deucher@amd.com> >> Cc: Christian König <christian.koenig@amd.com> >> Cc: linux-arm-msm@vger.kernel.org >> Cc: freedreno@lists.freedesktop.org >> Cc: dri-devel@lists.freedesktop.org >> Signed-off-by: Luben Tuikov <ltuikov89@gmail.com> > > Reviewed-by: Christian König <christian.koenig@amd.com> Looks like you missed one usage in Nouveau: drivers/gpu/drm/nouveau/nouveau_sched.c:21:41: error: ‘DRM_SCHED_PRIORITY_MIN’ undeclared here (not in a function); did you mean ‘DRM_SCHED_PRIORITY_LOW’? 21 | NOUVEAU_SCHED_PRIORITY_SINGLE = DRM_SCHED_PRIORITY_MIN, | ^~~~~~~~~~~~~~~~~~~~~~ | DRM_SCHED_PRIORITY_LOW This now results in a build error on drm-misc-next. Christian. 
> >> --- >> drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 4 ++-- >> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- >> drivers/gpu/drm/msm/msm_gpu.h | 2 +- >> drivers/gpu/drm/scheduler/sched_entity.c | 2 +- >> drivers/gpu/drm/scheduler/sched_main.c | 10 +++++----- >> include/drm/gpu_scheduler.h | 2 +- >> 6 files changed, 11 insertions(+), 11 deletions(-) >> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c >> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c >> index e2ae9ba147ba97..5cb33ac99f7089 100644 >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c >> @@ -73,10 +73,10 @@ amdgpu_ctx_to_drm_sched_prio(int32_t ctx_prio) >> return DRM_SCHED_PRIORITY_NORMAL; >> case AMDGPU_CTX_PRIORITY_VERY_LOW: >> - return DRM_SCHED_PRIORITY_MIN; >> + return DRM_SCHED_PRIORITY_LOW; >> case AMDGPU_CTX_PRIORITY_LOW: >> - return DRM_SCHED_PRIORITY_MIN; >> + return DRM_SCHED_PRIORITY_LOW; >> case AMDGPU_CTX_PRIORITY_NORMAL: >> return DRM_SCHED_PRIORITY_NORMAL; >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >> index 62bb7fc7448ad9..1a25931607c514 100644 >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >> @@ -325,7 +325,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct >> drm_gpu_scheduler *sched) >> int i; >> /* Signal all jobs not yet scheduled */ >> - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { >> + for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { >> struct drm_sched_rq *rq = sched->sched_rq[i]; >> spin_lock(&rq->lock); >> list_for_each_entry(s_entity, &rq->entities, list) { >> diff --git a/drivers/gpu/drm/msm/msm_gpu.h >> b/drivers/gpu/drm/msm/msm_gpu.h >> index 4252e3839fbc83..eb0c97433e5f8a 100644 >> --- a/drivers/gpu/drm/msm/msm_gpu.h >> +++ b/drivers/gpu/drm/msm/msm_gpu.h >> @@ -347,7 +347,7 @@ struct msm_gpu_perfcntr { >> * DRM_SCHED_PRIORITY_KERNEL priority level is treated specially in >> some >> 
* cases, so we don't use it (no need for kernel generated jobs). >> */ >> -#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - >> DRM_SCHED_PRIORITY_MIN) >> +#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - >> DRM_SCHED_PRIORITY_LOW) >> /** >> * struct msm_file_private - per-drm_file context >> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c >> b/drivers/gpu/drm/scheduler/sched_entity.c >> index 20c9c561843ce1..cb7445be3cbb4e 100644 >> --- a/drivers/gpu/drm/scheduler/sched_entity.c >> +++ b/drivers/gpu/drm/scheduler/sched_entity.c >> @@ -88,7 +88,7 @@ int drm_sched_entity_init(struct drm_sched_entity >> *entity, >> drm_err(sched_list[0], "entity with out-of-bounds >> priority:%u num_rqs:%u\n", >> entity->priority, sched_list[0]->num_rqs); >> entity->priority = max_t(s32, (s32) >> sched_list[0]->num_rqs - 1, >> - (s32) DRM_SCHED_PRIORITY_MIN); >> + (s32) DRM_SCHED_PRIORITY_LOW); >> } >> entity->rq = sched_list[0]->sched_rq[entity->priority]; >> } >> diff --git a/drivers/gpu/drm/scheduler/sched_main.c >> b/drivers/gpu/drm/scheduler/sched_main.c >> index 044a8c4875ba64..b6d7bc49ff6ef4 100644 >> --- a/drivers/gpu/drm/scheduler/sched_main.c >> +++ b/drivers/gpu/drm/scheduler/sched_main.c >> @@ -1052,7 +1052,7 @@ drm_sched_select_entity(struct >> drm_gpu_scheduler *sched) >> int i; >> /* Kernel run queue has higher priority than normal run queue*/ >> - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { >> + for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { >> entity = drm_sched_policy == DRM_SCHED_POLICY_FIFO ? 
>> drm_sched_rq_select_entity_fifo(sched, >> sched->sched_rq[i]) : >> drm_sched_rq_select_entity_rr(sched, sched->sched_rq[i]); >> @@ -1291,7 +1291,7 @@ int drm_sched_init(struct drm_gpu_scheduler >> *sched, >> if (!sched->sched_rq) >> goto Out_free; >> sched->num_rqs = num_rqs; >> - for (i = DRM_SCHED_PRIORITY_MIN; i < sched->num_rqs; i++) { >> + for (i = DRM_SCHED_PRIORITY_LOW; i < sched->num_rqs; i++) { >> sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), >> GFP_KERNEL); >> if (!sched->sched_rq[i]) >> goto Out_unroll; >> @@ -1312,7 +1312,7 @@ int drm_sched_init(struct drm_gpu_scheduler >> *sched, >> sched->ready = true; >> return 0; >> Out_unroll: >> - for (--i ; i >= DRM_SCHED_PRIORITY_MIN; i--) >> + for (--i ; i >= DRM_SCHED_PRIORITY_LOW; i--) >> kfree(sched->sched_rq[i]); >> Out_free: >> kfree(sched->sched_rq); >> @@ -1338,7 +1338,7 @@ void drm_sched_fini(struct drm_gpu_scheduler >> *sched) >> drm_sched_wqueue_stop(sched); >> - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) { >> + for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { >> struct drm_sched_rq *rq = sched->sched_rq[i]; >> spin_lock(&rq->lock); >> @@ -1390,7 +1390,7 @@ void drm_sched_increase_karma(struct >> drm_sched_job *bad) >> if (bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) { >> atomic_inc(&bad->karma); >> - for (i = DRM_SCHED_PRIORITY_MIN; >> + for (i = DRM_SCHED_PRIORITY_LOW; >> i < min_t(typeof(sched->num_rqs), sched->num_rqs, >> DRM_SCHED_PRIORITY_KERNEL); >> i++) { >> struct drm_sched_rq *rq = sched->sched_rq[i]; >> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h >> index 9a50348bd5c04e..d8e2d84d9223e3 100644 >> --- a/include/drm/gpu_scheduler.h >> +++ b/include/drm/gpu_scheduler.h >> @@ -63,7 +63,7 @@ struct drm_file; >> * to an array, and as such should start at 0. 
>> */ >> enum drm_sched_priority { >> - DRM_SCHED_PRIORITY_MIN, >> + DRM_SCHED_PRIORITY_LOW, >> DRM_SCHED_PRIORITY_NORMAL, >> DRM_SCHED_PRIORITY_HIGH, >> DRM_SCHED_PRIORITY_KERNEL, > ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 1/2] drm/sched: Rename priority MIN to LOW 2023-11-27 13:55 ` Christian König @ 2023-11-27 14:13 ` Luben Tuikov 2023-11-27 14:20 ` Christian König 0 siblings, 1 reply; 12+ messages in thread From: Luben Tuikov @ 2023-11-27 14:13 UTC (permalink / raw) To: Christian König, Direct Rendering Infrastructure - Development Cc: Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Danilo Krummrich, Alex Deucher, linux-arm-msm, freedreno [-- Attachment #1.1.1: Type: text/plain, Size: 1635 bytes --] On 2023-11-27 08:55, Christian König wrote: > Hi Luben, > > Am 24.11.23 um 08:57 schrieb Christian König: >> Am 24.11.23 um 06:27 schrieb Luben Tuikov: >>> Rename DRM_SCHED_PRIORITY_MIN to DRM_SCHED_PRIORITY_LOW. >>> >>> This mirrors DRM_SCHED_PRIORITY_HIGH, for a list of DRM scheduler >>> priorities >>> in ascending order, >>> DRM_SCHED_PRIORITY_LOW, >>> DRM_SCHED_PRIORITY_NORMAL, >>> DRM_SCHED_PRIORITY_HIGH, >>> DRM_SCHED_PRIORITY_KERNEL. >>> >>> Cc: Rob Clark <robdclark@gmail.com> >>> Cc: Abhinav Kumar <quic_abhinavk@quicinc.com> >>> Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> >>> Cc: Danilo Krummrich <dakr@redhat.com> >>> Cc: Alex Deucher <alexander.deucher@amd.com> >>> Cc: Christian König <christian.koenig@amd.com> >>> Cc: linux-arm-msm@vger.kernel.org >>> Cc: freedreno@lists.freedesktop.org >>> Cc: dri-devel@lists.freedesktop.org >>> Signed-off-by: Luben Tuikov <ltuikov89@gmail.com> >> >> Reviewed-by: Christian König <christian.koenig@amd.com> > > Looks like you missed one usage in Nouveau: > > drivers/gpu/drm/nouveau/nouveau_sched.c:21:41: error: > ‘DRM_SCHED_PRIORITY_MIN’ undeclared here (not in a function); did you > mean ‘DRM_SCHED_PRIORITY_LOW’? > 21 | NOUVEAU_SCHED_PRIORITY_SINGLE = DRM_SCHED_PRIORITY_MIN, > | ^~~~~~~~~~~~~~~~~~~~~~ > | DRM_SCHED_PRIORITY_LOW > > This now results in a build error on drm-misc-next. 
I'm waiting for someone to R-B the fix I posted two days ago: https://lore.kernel.org/r/20231125192246.87268-2-ltuikov89@gmail.com -- Regards, Luben ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 1/2] drm/sched: Rename priority MIN to LOW 2023-11-27 14:13 ` Luben Tuikov @ 2023-11-27 14:20 ` Christian König 2023-11-27 14:35 ` Luben Tuikov 0 siblings, 1 reply; 12+ messages in thread From: Christian König @ 2023-11-27 14:20 UTC (permalink / raw) To: Luben Tuikov, Direct Rendering Infrastructure - Development Cc: Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Danilo Krummrich, Alex Deucher, linux-arm-msm, freedreno Am 27.11.23 um 15:13 schrieb Luben Tuikov: > On 2023-11-27 08:55, Christian König wrote: >> Hi Luben, >> >> Am 24.11.23 um 08:57 schrieb Christian König: >>> Am 24.11.23 um 06:27 schrieb Luben Tuikov: >>>> Rename DRM_SCHED_PRIORITY_MIN to DRM_SCHED_PRIORITY_LOW. >>>> >>>> This mirrors DRM_SCHED_PRIORITY_HIGH, for a list of DRM scheduler >>>> priorities >>>> in ascending order, >>>> DRM_SCHED_PRIORITY_LOW, >>>> DRM_SCHED_PRIORITY_NORMAL, >>>> DRM_SCHED_PRIORITY_HIGH, >>>> DRM_SCHED_PRIORITY_KERNEL. >>>> >>>> Cc: Rob Clark <robdclark@gmail.com> >>>> Cc: Abhinav Kumar <quic_abhinavk@quicinc.com> >>>> Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> >>>> Cc: Danilo Krummrich <dakr@redhat.com> >>>> Cc: Alex Deucher <alexander.deucher@amd.com> >>>> Cc: Christian König <christian.koenig@amd.com> >>>> Cc: linux-arm-msm@vger.kernel.org >>>> Cc: freedreno@lists.freedesktop.org >>>> Cc: dri-devel@lists.freedesktop.org >>>> Signed-off-by: Luben Tuikov <ltuikov89@gmail.com> >>> Reviewed-by: Christian König <christian.koenig@amd.com> >> Looks like you missed one usage in Nouveau: >> >> drivers/gpu/drm/nouveau/nouveau_sched.c:21:41: error: >> ‘DRM_SCHED_PRIORITY_MIN’ undeclared here (not in a function); did you >> mean ‘DRM_SCHED_PRIORITY_LOW’? >> 21 | NOUVEAU_SCHED_PRIORITY_SINGLE = DRM_SCHED_PRIORITY_MIN, >> | ^~~~~~~~~~~~~~~~~~~~~~ >> | DRM_SCHED_PRIORITY_LOW >> >> This now results in a build error on drm-misc-next. 
> I'm waiting for someone to R-B the fix I posted two days ago: > https://lore.kernel.org/r/20231125192246.87268-2-ltuikov89@gmail.com There must be something wrong with the dri-devel mailing list (or my gmail, but I doubt so). I don't see this mail in my inbox anywhere. Feel free to add my rb and push it. Thanks, Christian. ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 1/2] drm/sched: Rename priority MIN to LOW 2023-11-27 14:20 ` Christian König @ 2023-11-27 14:35 ` Luben Tuikov 0 siblings, 0 replies; 12+ messages in thread From: Luben Tuikov @ 2023-11-27 14:35 UTC (permalink / raw) To: Christian König, Direct Rendering Infrastructure - Development Cc: Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Danilo Krummrich, Alex Deucher, linux-arm-msm, freedreno [-- Attachment #1.1.1: Type: text/plain, Size: 2008 bytes --] On 2023-11-27 09:20, Christian König wrote: > Am 27.11.23 um 15:13 schrieb Luben Tuikov: >> On 2023-11-27 08:55, Christian König wrote: >>> Hi Luben, >>> >>> Am 24.11.23 um 08:57 schrieb Christian König: >>>> Am 24.11.23 um 06:27 schrieb Luben Tuikov: >>>>> Rename DRM_SCHED_PRIORITY_MIN to DRM_SCHED_PRIORITY_LOW. >>>>> >>>>> This mirrors DRM_SCHED_PRIORITY_HIGH, for a list of DRM scheduler >>>>> priorities >>>>> in ascending order, >>>>> DRM_SCHED_PRIORITY_LOW, >>>>> DRM_SCHED_PRIORITY_NORMAL, >>>>> DRM_SCHED_PRIORITY_HIGH, >>>>> DRM_SCHED_PRIORITY_KERNEL. >>>>> >>>>> Cc: Rob Clark <robdclark@gmail.com> >>>>> Cc: Abhinav Kumar <quic_abhinavk@quicinc.com> >>>>> Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> >>>>> Cc: Danilo Krummrich <dakr@redhat.com> >>>>> Cc: Alex Deucher <alexander.deucher@amd.com> >>>>> Cc: Christian König <christian.koenig@amd.com> >>>>> Cc: linux-arm-msm@vger.kernel.org >>>>> Cc: freedreno@lists.freedesktop.org >>>>> Cc: dri-devel@lists.freedesktop.org >>>>> Signed-off-by: Luben Tuikov <ltuikov89@gmail.com> >>>> Reviewed-by: Christian König <christian.koenig@amd.com> >>> Looks like you missed one usage in Nouveau: >>> >>> drivers/gpu/drm/nouveau/nouveau_sched.c:21:41: error: >>> ‘DRM_SCHED_PRIORITY_MIN’ undeclared here (not in a function); did you >>> mean ‘DRM_SCHED_PRIORITY_LOW’? >>> 21 | NOUVEAU_SCHED_PRIORITY_SINGLE = DRM_SCHED_PRIORITY_MIN, >>> | ^~~~~~~~~~~~~~~~~~~~~~ >>> | DRM_SCHED_PRIORITY_LOW >>> >>> This now results in a build error on drm-misc-next. 
>> I'm waiting for someone to R-B the fix I posted two days ago: >> https://lore.kernel.org/r/20231125192246.87268-2-ltuikov89@gmail.com > > There must be something wrong with the dri-devel mailing list (or my > gmail, but I doubt so). I don't see this mail in my inbox anywhere. > > Feel free to add my rb and push it. Done. Thanks. -- Regards, Luben ^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration 2023-11-24 5:27 [PATCH 0/2] Make scheduling of the same index, the same Luben Tuikov 2023-11-24 5:27 ` [PATCH 1/2] drm/sched: Rename priority MIN to LOW Luben Tuikov @ 2023-11-24 5:27 ` Luben Tuikov 2023-11-24 8:04 ` Christian König 1 sibling, 1 reply; 12+ messages in thread From: Luben Tuikov @ 2023-11-24 5:27 UTC (permalink / raw) To: Direct Rendering Infrastructure - Development Cc: Luben Tuikov, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Danilo Krummrich, Alex Deucher, Christian König, linux-arm-msm, freedreno Reverse run-queue priority enumeration such that the highest priority is now 0, and for each consecutive integer the priority diminishes. Run-queues correspond to priorities. To an external observer a scheduler created with a single run-queue, and another created with DRM_SCHED_PRIORITY_COUNT number of run-queues, should always schedule sched->sched_rq[0] with the same "priority", as that index run-queue exists in both schedulers, i.e. a scheduler with one run-queue or many. This patch makes it so. In other words, the "priority" of sched->sched_rq[n], n >= 0, is the same for any scheduler created with any allowable number of run-queues (priorities), 0 to DRM_SCHED_PRIORITY_COUNT.
Cc: Rob Clark <robdclark@gmail.com> Cc: Abhinav Kumar <quic_abhinavk@quicinc.com> Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Cc: Danilo Krummrich <dakr@redhat.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: Christian König <christian.koenig@amd.com> Cc: linux-arm-msm@vger.kernel.org Cc: freedreno@lists.freedesktop.org Cc: dri-devel@lists.freedesktop.org Signed-off-by: Luben Tuikov <ltuikov89@gmail.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- drivers/gpu/drm/msm/msm_gpu.h | 2 +- drivers/gpu/drm/scheduler/sched_entity.c | 7 ++++--- drivers/gpu/drm/scheduler/sched_main.c | 15 +++++++-------- include/drm/gpu_scheduler.h | 6 +++--- 5 files changed, 16 insertions(+), 16 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index 1a25931607c514..71a5cf37b472d4 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -325,7 +325,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched) int i; /* Signal all jobs not yet scheduled */ - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { struct drm_sched_rq *rq = sched->sched_rq[i]; spin_lock(&rq->lock); list_for_each_entry(s_entity, &rq->entities, list) { diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index eb0c97433e5f8a..2bfcb222e35338 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -347,7 +347,7 @@ struct msm_gpu_perfcntr { * DRM_SCHED_PRIORITY_KERNEL priority level is treated specially in some * cases, so we don't use it (no need for kernel generated jobs). 
*/ -#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_LOW) +#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH) /** * struct msm_file_private - per-drm_file context diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c index cb7445be3cbb4e..6e2b02e45e3a32 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c @@ -81,14 +81,15 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, */ pr_warn("%s: called with uninitialized scheduler\n", __func__); } else if (num_sched_list) { - /* The "priority" of an entity cannot exceed the number - * of run-queues of a scheduler. + /* The "priority" of an entity cannot exceed the number of + * run-queues of a scheduler. Choose the lowest priority + * available. */ if (entity->priority >= sched_list[0]->num_rqs) { drm_err(sched_list[0], "entity with out-of-bounds priority:%u num_rqs:%u\n", entity->priority, sched_list[0]->num_rqs); entity->priority = max_t(s32, (s32) sched_list[0]->num_rqs - 1, - (s32) DRM_SCHED_PRIORITY_LOW); + (s32) DRM_SCHED_PRIORITY_KERNEL); } entity->rq = sched_list[0]->sched_rq[entity->priority]; } diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index b6d7bc49ff6ef4..682aebe96db781 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -1051,8 +1051,9 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched) struct drm_sched_entity *entity; int i; - /* Kernel run queue has higher priority than normal run queue*/ - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { + /* Start with the highest priority. + */ + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { entity = drm_sched_policy == DRM_SCHED_POLICY_FIFO ? 
drm_sched_rq_select_entity_fifo(sched, sched->sched_rq[i]) : drm_sched_rq_select_entity_rr(sched, sched->sched_rq[i]); @@ -1291,7 +1292,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, if (!sched->sched_rq) goto Out_free; sched->num_rqs = num_rqs; - for (i = DRM_SCHED_PRIORITY_LOW; i < sched->num_rqs; i++) { + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL); if (!sched->sched_rq[i]) goto Out_unroll; @@ -1312,7 +1313,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, sched->ready = true; return 0; Out_unroll: - for (--i ; i >= DRM_SCHED_PRIORITY_LOW; i--) + for (--i ; i >= DRM_SCHED_PRIORITY_KERNEL; i--) kfree(sched->sched_rq[i]); Out_free: kfree(sched->sched_rq); @@ -1338,7 +1339,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched) drm_sched_wqueue_stop(sched); - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { struct drm_sched_rq *rq = sched->sched_rq[i]; spin_lock(&rq->lock); @@ -1390,9 +1391,7 @@ void drm_sched_increase_karma(struct drm_sched_job *bad) if (bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) { atomic_inc(&bad->karma); - for (i = DRM_SCHED_PRIORITY_LOW; - i < min_t(typeof(sched->num_rqs), sched->num_rqs, DRM_SCHED_PRIORITY_KERNEL); - i++) { + for (i = DRM_SCHED_PRIORITY_HIGH; i < sched->num_rqs; i++) { struct drm_sched_rq *rq = sched->sched_rq[i]; spin_lock(&rq->lock); diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index d8e2d84d9223e3..5acc64954a8830 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -63,10 +63,10 @@ struct drm_file; * to an array, and as such should start at 0. 
*/ enum drm_sched_priority { - DRM_SCHED_PRIORITY_LOW, - DRM_SCHED_PRIORITY_NORMAL, - DRM_SCHED_PRIORITY_HIGH, DRM_SCHED_PRIORITY_KERNEL, + DRM_SCHED_PRIORITY_HIGH, + DRM_SCHED_PRIORITY_NORMAL, + DRM_SCHED_PRIORITY_LOW, DRM_SCHED_PRIORITY_COUNT }; -- 2.43.0 ^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration 2023-11-24 5:27 ` [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration Luben Tuikov @ 2023-11-24 8:04 ` Christian König 2023-11-24 8:22 ` Luben Tuikov 0 siblings, 1 reply; 12+ messages in thread From: Christian König @ 2023-11-24 8:04 UTC (permalink / raw) To: Luben Tuikov, Direct Rendering Infrastructure - Development Cc: Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Danilo Krummrich, Alex Deucher, linux-arm-msm, freedreno Am 24.11.23 um 06:27 schrieb Luben Tuikov: > Reverse run-queue priority enumeration such that the higest priority is now 0, > and for each consecutive integer the prioirty diminishes. > > Run-queues correspond to priorities. To an external observer a scheduler > created with a single run-queue, and another created with > DRM_SCHED_PRIORITY_COUNT number of run-queues, should always schedule > sched->sched_rq[0] with the same "priority", as that index run-queue exists in > both schedulers, i.e. a scheduler with one run-queue or many. This patch makes > it so. > > In other words, the "priority" of sched->sched_rq[n], n >= 0, is the same for > any scheduler created with any allowable number of run-queues (priorities), 0 > to DRM_SCHED_PRIORITY_COUNT. 
> > Cc: Rob Clark <robdclark@gmail.com> > Cc: Abhinav Kumar <quic_abhinavk@quicinc.com> > Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> > Cc: Danilo Krummrich <dakr@redhat.com> > Cc: Alex Deucher <alexander.deucher@amd.com> > Cc: Christian König <christian.koenig@amd.com> > Cc: linux-arm-msm@vger.kernel.org > Cc: freedreno@lists.freedesktop.org > Cc: dri-devel@lists.freedesktop.org > Signed-off-by: Luben Tuikov <ltuikov89@gmail.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- > drivers/gpu/drm/msm/msm_gpu.h | 2 +- > drivers/gpu/drm/scheduler/sched_entity.c | 7 ++++--- > drivers/gpu/drm/scheduler/sched_main.c | 15 +++++++-------- > include/drm/gpu_scheduler.h | 6 +++--- > 5 files changed, 16 insertions(+), 16 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c > index 1a25931607c514..71a5cf37b472d4 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c > @@ -325,7 +325,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched) > int i; > > /* Signal all jobs not yet scheduled */ > - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { > + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { > struct drm_sched_rq *rq = sched->sched_rq[i]; > spin_lock(&rq->lock); > list_for_each_entry(s_entity, &rq->entities, list) { > diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h > index eb0c97433e5f8a..2bfcb222e35338 100644 > --- a/drivers/gpu/drm/msm/msm_gpu.h > +++ b/drivers/gpu/drm/msm/msm_gpu.h > @@ -347,7 +347,7 @@ struct msm_gpu_perfcntr { > * DRM_SCHED_PRIORITY_KERNEL priority level is treated specially in some > * cases, so we don't use it (no need for kernel generated jobs). 
> */ > -#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_LOW) > +#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH) > > /** > * struct msm_file_private - per-drm_file context > diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c > index cb7445be3cbb4e..6e2b02e45e3a32 100644 > --- a/drivers/gpu/drm/scheduler/sched_entity.c > +++ b/drivers/gpu/drm/scheduler/sched_entity.c > @@ -81,14 +81,15 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, > */ > pr_warn("%s: called with uninitialized scheduler\n", __func__); > } else if (num_sched_list) { > - /* The "priority" of an entity cannot exceed the number > - * of run-queues of a scheduler. > + /* The "priority" of an entity cannot exceed the number of > + * run-queues of a scheduler. Choose the lowest priority > + * available. > */ > if (entity->priority >= sched_list[0]->num_rqs) { > drm_err(sched_list[0], "entity with out-of-bounds priority:%u num_rqs:%u\n", > entity->priority, sched_list[0]->num_rqs); > entity->priority = max_t(s32, (s32) sched_list[0]->num_rqs - 1, > - (s32) DRM_SCHED_PRIORITY_LOW); > + (s32) DRM_SCHED_PRIORITY_KERNEL); That seems to be a no-op. You basically say max_T(.., num_rqs - 1, 0), this will always be num_rqs - 1 Apart from that looks good to me. Christian. > } > entity->rq = sched_list[0]->sched_rq[entity->priority]; > } > diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c > index b6d7bc49ff6ef4..682aebe96db781 100644 > --- a/drivers/gpu/drm/scheduler/sched_main.c > +++ b/drivers/gpu/drm/scheduler/sched_main.c > @@ -1051,8 +1051,9 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched) > struct drm_sched_entity *entity; > int i; > > - /* Kernel run queue has higher priority than normal run queue*/ > - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { > + /* Start with the highest priority. 
> + */ > + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { > entity = drm_sched_policy == DRM_SCHED_POLICY_FIFO ? > drm_sched_rq_select_entity_fifo(sched, sched->sched_rq[i]) : > drm_sched_rq_select_entity_rr(sched, sched->sched_rq[i]); > @@ -1291,7 +1292,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, > if (!sched->sched_rq) > goto Out_free; > sched->num_rqs = num_rqs; > - for (i = DRM_SCHED_PRIORITY_LOW; i < sched->num_rqs; i++) { > + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { > sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL); > if (!sched->sched_rq[i]) > goto Out_unroll; > @@ -1312,7 +1313,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, > sched->ready = true; > return 0; > Out_unroll: > - for (--i ; i >= DRM_SCHED_PRIORITY_LOW; i--) > + for (--i ; i >= DRM_SCHED_PRIORITY_KERNEL; i--) > kfree(sched->sched_rq[i]); > Out_free: > kfree(sched->sched_rq); > @@ -1338,7 +1339,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched) > > drm_sched_wqueue_stop(sched); > > - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { > + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { > struct drm_sched_rq *rq = sched->sched_rq[i]; > > spin_lock(&rq->lock); > @@ -1390,9 +1391,7 @@ void drm_sched_increase_karma(struct drm_sched_job *bad) > if (bad->s_priority != DRM_SCHED_PRIORITY_KERNEL) { > atomic_inc(&bad->karma); > > - for (i = DRM_SCHED_PRIORITY_LOW; > - i < min_t(typeof(sched->num_rqs), sched->num_rqs, DRM_SCHED_PRIORITY_KERNEL); > - i++) { > + for (i = DRM_SCHED_PRIORITY_HIGH; i < sched->num_rqs; i++) { > struct drm_sched_rq *rq = sched->sched_rq[i]; > > spin_lock(&rq->lock); > diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h > index d8e2d84d9223e3..5acc64954a8830 100644 > --- a/include/drm/gpu_scheduler.h > +++ b/include/drm/gpu_scheduler.h > @@ -63,10 +63,10 @@ struct drm_file; > * to an array, and as such should start at 0. 
> */ > enum drm_sched_priority { > - DRM_SCHED_PRIORITY_LOW, > - DRM_SCHED_PRIORITY_NORMAL, > - DRM_SCHED_PRIORITY_HIGH, > DRM_SCHED_PRIORITY_KERNEL, > + DRM_SCHED_PRIORITY_HIGH, > + DRM_SCHED_PRIORITY_NORMAL, > + DRM_SCHED_PRIORITY_LOW, > > DRM_SCHED_PRIORITY_COUNT > }; ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration 2023-11-24 8:04 ` Christian König @ 2023-11-24 8:22 ` Luben Tuikov 2023-11-24 9:38 ` Christian König 0 siblings, 1 reply; 12+ messages in thread From: Luben Tuikov @ 2023-11-24 8:22 UTC (permalink / raw) To: Christian König, Direct Rendering Infrastructure - Development Cc: Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Danilo Krummrich, Alex Deucher, linux-arm-msm, freedreno [-- Attachment #1.1.1: Type: text/plain, Size: 4619 bytes --] On 2023-11-24 03:04, Christian König wrote: > Am 24.11.23 um 06:27 schrieb Luben Tuikov: >> Reverse run-queue priority enumeration such that the higest priority is now 0, >> and for each consecutive integer the prioirty diminishes. >> >> Run-queues correspond to priorities. To an external observer a scheduler >> created with a single run-queue, and another created with >> DRM_SCHED_PRIORITY_COUNT number of run-queues, should always schedule >> sched->sched_rq[0] with the same "priority", as that index run-queue exists in >> both schedulers, i.e. a scheduler with one run-queue or many. This patch makes >> it so. >> >> In other words, the "priority" of sched->sched_rq[n], n >= 0, is the same for >> any scheduler created with any allowable number of run-queues (priorities), 0 >> to DRM_SCHED_PRIORITY_COUNT. 
>> >> Cc: Rob Clark <robdclark@gmail.com> >> Cc: Abhinav Kumar <quic_abhinavk@quicinc.com> >> Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> >> Cc: Danilo Krummrich <dakr@redhat.com> >> Cc: Alex Deucher <alexander.deucher@amd.com> >> Cc: Christian König <christian.koenig@amd.com> >> Cc: linux-arm-msm@vger.kernel.org >> Cc: freedreno@lists.freedesktop.org >> Cc: dri-devel@lists.freedesktop.org >> Signed-off-by: Luben Tuikov <ltuikov89@gmail.com> >> --- >> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- >> drivers/gpu/drm/msm/msm_gpu.h | 2 +- >> drivers/gpu/drm/scheduler/sched_entity.c | 7 ++++--- >> drivers/gpu/drm/scheduler/sched_main.c | 15 +++++++-------- >> include/drm/gpu_scheduler.h | 6 +++--- >> 5 files changed, 16 insertions(+), 16 deletions(-) >> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >> index 1a25931607c514..71a5cf37b472d4 100644 >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >> @@ -325,7 +325,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched) >> int i; >> >> /* Signal all jobs not yet scheduled */ >> - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { >> + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { >> struct drm_sched_rq *rq = sched->sched_rq[i]; >> spin_lock(&rq->lock); >> list_for_each_entry(s_entity, &rq->entities, list) { >> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h >> index eb0c97433e5f8a..2bfcb222e35338 100644 >> --- a/drivers/gpu/drm/msm/msm_gpu.h >> +++ b/drivers/gpu/drm/msm/msm_gpu.h >> @@ -347,7 +347,7 @@ struct msm_gpu_perfcntr { >> * DRM_SCHED_PRIORITY_KERNEL priority level is treated specially in some >> * cases, so we don't use it (no need for kernel generated jobs). 
>> */ >> -#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_LOW) >> +#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH) >> >> /** >> * struct msm_file_private - per-drm_file context >> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c >> index cb7445be3cbb4e..6e2b02e45e3a32 100644 >> --- a/drivers/gpu/drm/scheduler/sched_entity.c >> +++ b/drivers/gpu/drm/scheduler/sched_entity.c >> @@ -81,14 +81,15 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, >> */ >> pr_warn("%s: called with uninitialized scheduler\n", __func__); >> } else if (num_sched_list) { >> - /* The "priority" of an entity cannot exceed the number >> - * of run-queues of a scheduler. >> + /* The "priority" of an entity cannot exceed the number of >> + * run-queues of a scheduler. Choose the lowest priority >> + * available. >> */ >> if (entity->priority >= sched_list[0]->num_rqs) { >> drm_err(sched_list[0], "entity with out-of-bounds priority:%u num_rqs:%u\n", >> entity->priority, sched_list[0]->num_rqs); >> entity->priority = max_t(s32, (s32) sched_list[0]->num_rqs - 1, >> - (s32) DRM_SCHED_PRIORITY_LOW); >> + (s32) DRM_SCHED_PRIORITY_KERNEL); > > That seems to be a no-op. You basically say max_T(.., num_rqs - 1, 0), > this will always be num_rqs - 1 This protects against num_rqs being equal to 0, in which case we select KERNEL (0). This comes from "[PATCH] drm/sched: Fix bounds limiting when given a malformed entity" which I sent yesterday (Message-ID: <20231123122422.167832-2-ltuikov89@gmail.com>). Could you R-B that patch too? > > Apart from that looks good to me. Okay, could you R-B this patch then. -- Regards, Luben [-- Attachment #1.1.2: OpenPGP public key --] [-- Type: application/pgp-keys, Size: 677 bytes --] [-- Attachment #2: OpenPGP digital signature --] [-- Type: application/pgp-signature, Size: 236 bytes --] ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration 2023-11-24 8:22 ` Luben Tuikov @ 2023-11-24 9:38 ` Christian König 2023-11-25 4:22 ` Luben Tuikov 0 siblings, 1 reply; 12+ messages in thread From: Christian König @ 2023-11-24 9:38 UTC (permalink / raw) To: Luben Tuikov, Direct Rendering Infrastructure - Development Cc: Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Danilo Krummrich, Alex Deucher, linux-arm-msm, freedreno Am 24.11.23 um 09:22 schrieb Luben Tuikov: > On 2023-11-24 03:04, Christian König wrote: >> Am 24.11.23 um 06:27 schrieb Luben Tuikov: >>> Reverse run-queue priority enumeration such that the higest priority is now 0, >>> and for each consecutive integer the prioirty diminishes. >>> >>> Run-queues correspond to priorities. To an external observer a scheduler >>> created with a single run-queue, and another created with >>> DRM_SCHED_PRIORITY_COUNT number of run-queues, should always schedule >>> sched->sched_rq[0] with the same "priority", as that index run-queue exists in >>> both schedulers, i.e. a scheduler with one run-queue or many. This patch makes >>> it so. >>> >>> In other words, the "priority" of sched->sched_rq[n], n >= 0, is the same for >>> any scheduler created with any allowable number of run-queues (priorities), 0 >>> to DRM_SCHED_PRIORITY_COUNT. 
>>> >>> Cc: Rob Clark <robdclark@gmail.com> >>> Cc: Abhinav Kumar <quic_abhinavk@quicinc.com> >>> Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> >>> Cc: Danilo Krummrich <dakr@redhat.com> >>> Cc: Alex Deucher <alexander.deucher@amd.com> >>> Cc: Christian König <christian.koenig@amd.com> >>> Cc: linux-arm-msm@vger.kernel.org >>> Cc: freedreno@lists.freedesktop.org >>> Cc: dri-devel@lists.freedesktop.org >>> Signed-off-by: Luben Tuikov <ltuikov89@gmail.com> >>> --- >>> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- >>> drivers/gpu/drm/msm/msm_gpu.h | 2 +- >>> drivers/gpu/drm/scheduler/sched_entity.c | 7 ++++--- >>> drivers/gpu/drm/scheduler/sched_main.c | 15 +++++++-------- >>> include/drm/gpu_scheduler.h | 6 +++--- >>> 5 files changed, 16 insertions(+), 16 deletions(-) >>> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >>> index 1a25931607c514..71a5cf37b472d4 100644 >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >>> @@ -325,7 +325,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched) >>> int i; >>> >>> /* Signal all jobs not yet scheduled */ >>> - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { >>> + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { >>> struct drm_sched_rq *rq = sched->sched_rq[i]; >>> spin_lock(&rq->lock); >>> list_for_each_entry(s_entity, &rq->entities, list) { >>> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h >>> index eb0c97433e5f8a..2bfcb222e35338 100644 >>> --- a/drivers/gpu/drm/msm/msm_gpu.h >>> +++ b/drivers/gpu/drm/msm/msm_gpu.h >>> @@ -347,7 +347,7 @@ struct msm_gpu_perfcntr { >>> * DRM_SCHED_PRIORITY_KERNEL priority level is treated specially in some >>> * cases, so we don't use it (no need for kernel generated jobs). 
>>> */ >>> -#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_LOW) >>> +#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH) >>> >>> /** >>> * struct msm_file_private - per-drm_file context >>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c >>> index cb7445be3cbb4e..6e2b02e45e3a32 100644 >>> --- a/drivers/gpu/drm/scheduler/sched_entity.c >>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c >>> @@ -81,14 +81,15 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, >>> */ >>> pr_warn("%s: called with uninitialized scheduler\n", __func__); >>> } else if (num_sched_list) { >>> - /* The "priority" of an entity cannot exceed the number >>> - * of run-queues of a scheduler. >>> + /* The "priority" of an entity cannot exceed the number of >>> + * run-queues of a scheduler. Choose the lowest priority >>> + * available. >>> */ >>> if (entity->priority >= sched_list[0]->num_rqs) { >>> drm_err(sched_list[0], "entity with out-of-bounds priority:%u num_rqs:%u\n", >>> entity->priority, sched_list[0]->num_rqs); >>> entity->priority = max_t(s32, (s32) sched_list[0]->num_rqs - 1, >>> - (s32) DRM_SCHED_PRIORITY_LOW); >>> + (s32) DRM_SCHED_PRIORITY_KERNEL); >> That seems to be a no-op. You basically say max_T(.., num_rqs - 1, 0), >> this will always be num_rqs - 1 > This protects against num_rqs being equal to 0, in which case we select KERNEL (0). Ah! That's also why convert it to signed! I was already wondering why you do this. > > This comes from "[PATCH] drm/sched: Fix bounds limiting when given a malformed entity" > which I sent yesterday (Message-ID: <20231123122422.167832-2-ltuikov89@gmail.com>). I can't find that one in my inbox anywhere, but was able to find it in patchwork. > Could you R-B that patch too? I would add a comment cause the intention of max_t(s32 is really not obvious here. With that done feel free to add my rb to both patches. Regards, Christian. 
> >> Apart from that looks good to me. > Okay, could you R-B this patch then. ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration 2023-11-24 9:38 ` Christian König @ 2023-11-25 4:22 ` Luben Tuikov 0 siblings, 0 replies; 12+ messages in thread From: Luben Tuikov @ 2023-11-25 4:22 UTC (permalink / raw) To: Christian König, Direct Rendering Infrastructure - Development Cc: Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Danilo Krummrich, Alex Deucher, linux-arm-msm, freedreno [-- Attachment #1.1.1: Type: text/plain, Size: 5440 bytes --] On 2023-11-24 04:38, Christian König wrote: > Am 24.11.23 um 09:22 schrieb Luben Tuikov: >> On 2023-11-24 03:04, Christian König wrote: >>> Am 24.11.23 um 06:27 schrieb Luben Tuikov: >>>> Reverse run-queue priority enumeration such that the higest priority is now 0, >>>> and for each consecutive integer the prioirty diminishes. >>>> >>>> Run-queues correspond to priorities. To an external observer a scheduler >>>> created with a single run-queue, and another created with >>>> DRM_SCHED_PRIORITY_COUNT number of run-queues, should always schedule >>>> sched->sched_rq[0] with the same "priority", as that index run-queue exists in >>>> both schedulers, i.e. a scheduler with one run-queue or many. This patch makes >>>> it so. >>>> >>>> In other words, the "priority" of sched->sched_rq[n], n >= 0, is the same for >>>> any scheduler created with any allowable number of run-queues (priorities), 0 >>>> to DRM_SCHED_PRIORITY_COUNT. 
>>>> >>>> Cc: Rob Clark <robdclark@gmail.com> >>>> Cc: Abhinav Kumar <quic_abhinavk@quicinc.com> >>>> Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> >>>> Cc: Danilo Krummrich <dakr@redhat.com> >>>> Cc: Alex Deucher <alexander.deucher@amd.com> >>>> Cc: Christian König <christian.koenig@amd.com> >>>> Cc: linux-arm-msm@vger.kernel.org >>>> Cc: freedreno@lists.freedesktop.org >>>> Cc: dri-devel@lists.freedesktop.org >>>> Signed-off-by: Luben Tuikov <ltuikov89@gmail.com> >>>> --- >>>> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +- >>>> drivers/gpu/drm/msm/msm_gpu.h | 2 +- >>>> drivers/gpu/drm/scheduler/sched_entity.c | 7 ++++--- >>>> drivers/gpu/drm/scheduler/sched_main.c | 15 +++++++-------- >>>> include/drm/gpu_scheduler.h | 6 +++--- >>>> 5 files changed, 16 insertions(+), 16 deletions(-) >>>> >>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >>>> index 1a25931607c514..71a5cf37b472d4 100644 >>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c >>>> @@ -325,7 +325,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched) >>>> int i; >>>> >>>> /* Signal all jobs not yet scheduled */ >>>> - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) { >>>> + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) { >>>> struct drm_sched_rq *rq = sched->sched_rq[i]; >>>> spin_lock(&rq->lock); >>>> list_for_each_entry(s_entity, &rq->entities, list) { >>>> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h >>>> index eb0c97433e5f8a..2bfcb222e35338 100644 >>>> --- a/drivers/gpu/drm/msm/msm_gpu.h >>>> +++ b/drivers/gpu/drm/msm/msm_gpu.h >>>> @@ -347,7 +347,7 @@ struct msm_gpu_perfcntr { >>>> * DRM_SCHED_PRIORITY_KERNEL priority level is treated specially in some >>>> * cases, so we don't use it (no need for kernel generated jobs). 
>>>> */ >>>> -#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_LOW) >>>> +#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH) >>>> >>>> /** >>>> * struct msm_file_private - per-drm_file context >>>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c >>>> index cb7445be3cbb4e..6e2b02e45e3a32 100644 >>>> --- a/drivers/gpu/drm/scheduler/sched_entity.c >>>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c >>>> @@ -81,14 +81,15 @@ int drm_sched_entity_init(struct drm_sched_entity *entity, >>>> */ >>>> pr_warn("%s: called with uninitialized scheduler\n", __func__); >>>> } else if (num_sched_list) { >>>> - /* The "priority" of an entity cannot exceed the number >>>> - * of run-queues of a scheduler. >>>> + /* The "priority" of an entity cannot exceed the number of >>>> + * run-queues of a scheduler. Choose the lowest priority >>>> + * available. >>>> */ >>>> if (entity->priority >= sched_list[0]->num_rqs) { >>>> drm_err(sched_list[0], "entity with out-of-bounds priority:%u num_rqs:%u\n", >>>> entity->priority, sched_list[0]->num_rqs); >>>> entity->priority = max_t(s32, (s32) sched_list[0]->num_rqs - 1, >>>> - (s32) DRM_SCHED_PRIORITY_LOW); >>>> + (s32) DRM_SCHED_PRIORITY_KERNEL); >>> That seems to be a no-op. You basically say max_T(.., num_rqs - 1, 0), >>> this will always be num_rqs - 1 >> This protects against num_rqs being equal to 0, in which case we select KERNEL (0). > > Ah! That's also why convert it to signed! I was already wondering why > you do this. I thought it was clear since we're doing, num_rqs - 1 and there's no guarantee that the result would be non-negative even if the variable num_rqs was non-negative. But I've added an explanation of what's going on here. >> >> This comes from "[PATCH] drm/sched: Fix bounds limiting when given a malformed entity" >> which I sent yesterday (Message-ID: <20231123122422.167832-2-ltuikov89@gmail.com>). 
> > I can't find that one in my inbox anywhere, but was able to find it in > patchwork. Did you check your gmail? >> Could you R-B that patch too? > > I would add a comment cause the intention of max_t(s32 is really not > obvious here. > > With that done feel free to add my rb to both patches. Done. -- Regards, Luben [-- Attachment #1.1.2: OpenPGP public key --] [-- Type: application/pgp-keys, Size: 677 bytes --] [-- Attachment #2: OpenPGP digital signature --] [-- Type: application/pgp-signature, Size: 236 bytes --] ^ permalink raw reply [flat|nested] 12+ messages in thread
Thread overview: 12+ messages (newest: 2023-11-27 14:35 UTC)
2023-11-24  5:27 [PATCH 0/2] Make scheduling of the same index, the same Luben Tuikov
2023-11-24  5:27 ` [PATCH 1/2] drm/sched: Rename priority MIN to LOW Luben Tuikov
2023-11-24  7:57   ` Christian König
2023-11-27 13:55     ` Christian König
2023-11-27 14:13       ` Luben Tuikov
2023-11-27 14:20         ` Christian König
2023-11-27 14:35           ` Luben Tuikov
2023-11-24  5:27 ` [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration Luben Tuikov
2023-11-24  8:04   ` Christian König
2023-11-24  8:22     ` Luben Tuikov
2023-11-24  9:38       ` Christian König
2023-11-25  4:22         ` Luben Tuikov