* [PATCH] drm/i915/gvt: move intel_runtime_pm_get out of spin_lock in stop_schedule
@ 2018-07-31 10:05 hang.yuan
2018-07-31 15:09 ` Greg KH
0 siblings, 1 reply; 4+ messages in thread
From: hang.yuan @ 2018-07-31 10:05 UTC (permalink / raw)
To: intel-gvt-dev; +Cc: stable, Hang Yuan, Xiong Zhang
From: Hang Yuan <hang.yuan@linux.intel.com>
pm_runtime_get_sync() in intel_runtime_pm_get() might sleep if the i915
device is not active. When stopping the vgpu schedule, the device may be
inactive, so intel_runtime_pm_get() must be called outside the
spin_lock/unlock.
Fixes: b24881e0b0b6 ("drm/i915/gvt: Add runtime_pm_get/put into gvt_switch_mmio")
Signed-off-by: Hang Yuan <hang.yuan@linux.intel.com>
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
---
drivers/gpu/drm/i915/gvt/mmio_context.c | 2 --
drivers/gpu/drm/i915/gvt/sched_policy.c | 3 +++
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/gvt/mmio_context.c b/drivers/gpu/drm/i915/gvt/mmio_context.c
index 7e702c6..10e63ee 100644
--- a/drivers/gpu/drm/i915/gvt/mmio_context.c
+++ b/drivers/gpu/drm/i915/gvt/mmio_context.c
@@ -549,11 +549,9 @@ void intel_gvt_switch_mmio(struct intel_vgpu *pre,
* performace for batch mmio read/write, so we need
* handle forcewake mannually.
*/
- intel_runtime_pm_get(dev_priv);
intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
switch_mmio(pre, next, ring_id);
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
- intel_runtime_pm_put(dev_priv);
}
/**
diff --git a/drivers/gpu/drm/i915/gvt/sched_policy.c b/drivers/gpu/drm/i915/gvt/sched_policy.c
index 09d7bb7..985fe81 100644
--- a/drivers/gpu/drm/i915/gvt/sched_policy.c
+++ b/drivers/gpu/drm/i915/gvt/sched_policy.c
@@ -426,6 +426,7 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
&vgpu->gvt->scheduler;
int ring_id;
struct vgpu_sched_data *vgpu_data = vgpu->sched_data;
+ struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
if (!vgpu_data->active)
return;
@@ -444,6 +445,7 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
scheduler->current_vgpu = NULL;
}
+ intel_runtime_pm_get(dev_priv);
spin_lock_bh(&scheduler->mmio_context_lock);
for (ring_id = 0; ring_id < I915_NUM_ENGINES; ring_id++) {
if (scheduler->engine_owner[ring_id] == vgpu) {
@@ -452,5 +454,6 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
}
}
spin_unlock_bh(&scheduler->mmio_context_lock);
+ intel_runtime_pm_put(dev_priv);
mutex_unlock(&vgpu->gvt->sched_lock);
}
--
2.7.4
* Re: [PATCH] drm/i915/gvt: move intel_runtime_pm_get out of spin_lock in stop_schedule
2018-07-31 10:05 [PATCH] drm/i915/gvt: move intel_runtime_pm_get out of spin_lock in stop_schedule hang.yuan
@ 2018-07-31 15:09 ` Greg KH
2018-08-01 2:12 ` Hang Yuan
0 siblings, 1 reply; 4+ messages in thread
From: Greg KH @ 2018-07-31 15:09 UTC (permalink / raw)
To: hang.yuan; +Cc: intel-gvt-dev, stable, Xiong Zhang
On Tue, Jul 31, 2018 at 06:05:46PM +0800, hang.yuan@linux.intel.com wrote:
> From: Hang Yuan <hang.yuan@linux.intel.com>
>
> pm_runtime_get_sync() in intel_runtime_pm_get() might sleep if the i915
> device is not active. When stopping the vgpu schedule, the device may be
> inactive, so intel_runtime_pm_get() must be called outside the
> spin_lock/unlock.
>
> Fixes: b24881e0b0b6 ("drm/i915/gvt: Add runtime_pm_get/put into gvt_switch_mmio")
> Signed-off-by: Hang Yuan <hang.yuan@linux.intel.com>
> Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
> ---
> drivers/gpu/drm/i915/gvt/mmio_context.c | 2 --
> drivers/gpu/drm/i915/gvt/sched_policy.c | 3 +++
> 2 files changed, 3 insertions(+), 2 deletions(-)
<formletter>
This is not the correct way to submit patches for inclusion in the
stable kernel tree. Please read:
https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.
</formletter>
* Re: [PATCH] drm/i915/gvt: move intel_runtime_pm_get out of spin_lock in stop_schedule
2018-07-31 15:09 ` Greg KH
@ 2018-08-01 2:12 ` Hang Yuan
0 siblings, 0 replies; 4+ messages in thread
From: Hang Yuan @ 2018-08-01 2:12 UTC (permalink / raw)
To: Greg KH; +Cc: intel-gvt-dev, stable, Xiong Zhang
On 07/31/2018 11:09 PM, Greg KH wrote:
> On Tue, Jul 31, 2018 at 06:05:46PM +0800, hang.yuan@linux.intel.com wrote:
>> From: Hang Yuan <hang.yuan@linux.intel.com>
>>
>> pm_runtime_get_sync() in intel_runtime_pm_get() might sleep if the i915
>> device is not active. When stopping the vgpu schedule, the device may be
>> inactive, so intel_runtime_pm_get() must be called outside the
>> spin_lock/unlock.
>>
>> Fixes: b24881e0b0b6 ("drm/i915/gvt: Add runtime_pm_get/put into gvt_switch_mmio")
>> Signed-off-by: Hang Yuan <hang.yuan@linux.intel.com>
>> Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
>> ---
>> drivers/gpu/drm/i915/gvt/mmio_context.c | 2 --
>> drivers/gpu/drm/i915/gvt/sched_policy.c | 3 +++
>> 2 files changed, 3 insertions(+), 2 deletions(-)
>
> <formletter>
>
> This is not the correct way to submit patches for inclusion in the
> stable kernel tree. Please read:
> https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
> for how to do this properly.
>
> </formletter>
>
Thank you for the guidance. I misunderstood option 1.
Regards,
Henry
* [PATCH] drm/i915/gvt: move intel_runtime_pm_get out of spin_lock in stop_schedule
@ 2018-08-29 9:07 hang.yuan
0 siblings, 0 replies; 4+ messages in thread
From: hang.yuan @ 2018-08-29 9:07 UTC (permalink / raw)
To: intel-gvt-dev; +Cc: Hang Yuan, stable, Xiong Zhang
From: Hang Yuan <hang.yuan@linux.intel.com>
pm_runtime_get_sync() in intel_runtime_pm_get() might sleep if the i915
device is not active. When stopping the vgpu schedule, the device may be
inactive, so intel_runtime_pm_get() must be called outside the
spin_lock/unlock.
Fixes: b24881e0b0b6 ("drm/i915/gvt: Add runtime_pm_get/put into gvt_switch_mmio")
Cc: <stable@vger.kernel.org>
Signed-off-by: Hang Yuan <hang.yuan@linux.intel.com>
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
---
drivers/gpu/drm/i915/gvt/mmio_context.c | 2 --
drivers/gpu/drm/i915/gvt/sched_policy.c | 3 +++
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/gvt/mmio_context.c b/drivers/gpu/drm/i915/gvt/mmio_context.c
index 7e702c6..10e63ee 100644
--- a/drivers/gpu/drm/i915/gvt/mmio_context.c
+++ b/drivers/gpu/drm/i915/gvt/mmio_context.c
@@ -549,11 +549,9 @@ void intel_gvt_switch_mmio(struct intel_vgpu *pre,
* performace for batch mmio read/write, so we need
* handle forcewake mannually.
*/
- intel_runtime_pm_get(dev_priv);
intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
switch_mmio(pre, next, ring_id);
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
- intel_runtime_pm_put(dev_priv);
}
/**
diff --git a/drivers/gpu/drm/i915/gvt/sched_policy.c b/drivers/gpu/drm/i915/gvt/sched_policy.c
index 09d7bb7..985fe81 100644
--- a/drivers/gpu/drm/i915/gvt/sched_policy.c
+++ b/drivers/gpu/drm/i915/gvt/sched_policy.c
@@ -426,6 +426,7 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
&vgpu->gvt->scheduler;
int ring_id;
struct vgpu_sched_data *vgpu_data = vgpu->sched_data;
+ struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv;
if (!vgpu_data->active)
return;
@@ -444,6 +445,7 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
scheduler->current_vgpu = NULL;
}
+ intel_runtime_pm_get(dev_priv);
spin_lock_bh(&scheduler->mmio_context_lock);
for (ring_id = 0; ring_id < I915_NUM_ENGINES; ring_id++) {
if (scheduler->engine_owner[ring_id] == vgpu) {
@@ -452,5 +454,6 @@ void intel_vgpu_stop_schedule(struct intel_vgpu *vgpu)
}
}
spin_unlock_bh(&scheduler->mmio_context_lock);
+ intel_runtime_pm_put(dev_priv);
mutex_unlock(&vgpu->gvt->sched_lock);
}
--
2.7.4