Date: Mon, 18 Apr 2016 11:28:22 +0300
From: Ville Syrjälä
To: Imre Deak
Cc: intel-gfx@lists.freedesktop.org, stable@vger.kernel.org
Subject: Re: [PATCH 2/4] drm/i915: Fix system resume if PCI device remained enabled
Message-ID: <20160418082822.GY4329@intel.com>
References: <1460963062-13211-1-git-send-email-imre.deak@intel.com> <1460963062-13211-3-git-send-email-imre.deak@intel.com>
In-Reply-To: <1460963062-13211-3-git-send-email-imre.deak@intel.com>

On Mon, Apr 18, 2016 at 10:04:20AM +0300, Imre Deak wrote:
> During system resume we depended on pci_enable_device() also putting the
> device into PCI D0 state. This won't work if the PCI device was already
> enabled but still in D3 state. This is because pci_enable_device() is
> refcounted and will not change the HW state if called with a non-zero
> refcount. Leaving the device in D3 will make all subsequent device
> accesses fail.
>
> This didn't cause a problem most of the time, since we resumed with an
> enable refcount of 0. But it fails at least after module reload because
> after that we also happen to leak a PCI device enable reference: During
> probing we call drm_get_pci_dev() which will enable the PCI device, but
> during device removal drm_put_dev() won't disable it. This is a bug of
> its own in DRM core, but without much harm as it only leaves the PCI
> device enabled. Fixing it is also a bit more involved, due to DRM
> mid-layering and because it affects non-i915 drivers too. The fix in
> this patch is valid regardless of the problem in DRM core.
>
> CC: Ville Syrjälä
> CC: stable@vger.kernel.org
> Signed-off-by: Imre Deak
> ---
>  drivers/gpu/drm/i915/i915_drv.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> index d550ae2..7eaa93e 100644
> --- a/drivers/gpu/drm/i915/i915_drv.c
> +++ b/drivers/gpu/drm/i915/i915_drv.c
> @@ -803,7 +803,7 @@ static int i915_drm_resume(struct drm_device *dev)
>  static int i915_drm_resume_early(struct drm_device *dev)
>  {
>  	struct drm_i915_private *dev_priv = dev->dev_private;
> -	int ret = 0;
> +	int ret;
>  
>  	/*
>  	 * We have a resume ordering issue with the snd-hda driver also
> @@ -814,6 +814,13 @@ static int i915_drm_resume_early(struct drm_device *dev)
>  	 * FIXME: This should be solved with a special hdmi sink device or
>  	 * similar so that power domains can be employed.
>  	 */
> +
> +	ret = pci_set_power_state(dev->pdev, PCI_D0);
> +	if (ret) {
> +		DRM_ERROR("failed to set PCI D0 power state (%d)\n", ret);
> +		goto out;
> +	}

Hmm. Doesn't this already happen from pci bus resume_noirq hook?

> +
>  	if (pci_enable_device(dev->pdev)) {
> 		ret = -EIO;
> 		goto out;
> --
> 2.5.0

--
Ville Syrjälä
Intel OTC
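
For reference, a minimal standalone sketch of the ordering the patch establishes: force the device to D0 before calling the refcounted pci_enable_device(). This is a simplified illustration, not the actual i915 function; the function name and the early-return error handling here are hypothetical, and only the pci_set_power_state()/pci_enable_device() calls mirror the quoted hunks.

/* Sketch only: simplified resume-early ordering, assuming a bare pci_dev. */
#include <linux/pci.h>
#include <drm/drmP.h>

static int resume_early_sketch(struct pci_dev *pdev)
{
	int ret;

	/*
	 * Transition to D0 explicitly. pci_enable_device() is refcounted
	 * and only touches the hardware when the enable count goes from
	 * 0 to 1, so with a leaked enable reference it would leave the
	 * device in D3 and all subsequent register accesses would fail.
	 */
	ret = pci_set_power_state(pdev, PCI_D0);
	if (ret) {
		DRM_ERROR("failed to set PCI D0 power state (%d)\n", ret);
		return ret;
	}

	/* No-op on the hardware if the device is already enabled. */
	if (pci_enable_device(pdev))
		return -EIO;

	return 0;
}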