linux-pci.vger.kernel.org archive mirror
* [PATCH v3 0/1] PCI: qcom: Add support for system suspend and resume
@ 2023-03-27 13:38 Manivannan Sadhasivam
  2023-03-27 13:38 ` [PATCH v3 1/1] " Manivannan Sadhasivam
  0 siblings, 1 reply; 13+ messages in thread
From: Manivannan Sadhasivam @ 2023-03-27 13:38 UTC (permalink / raw)
  To: lpieralisi, kw, robh
  Cc: andersson, konrad.dybcio, bhelgaas, linux-pci, linux-arm-msm,
	linux-kernel, quic_krichai, johan+linaro, steev, mka,
	Manivannan Sadhasivam

Hello,

This series (a single patch) adds system suspend and resume support to the
Qualcomm PCIe RC controller driver.

Background
==========

There were previous attempts [1][2] to add system suspend and resume
support to this driver.

In previous versions, the controller was put into low power mode by turning
OFF the resources even if there were active PCIe devices connected. Thanks
to Qualcomm's internal power topology, the link did not enter the L2/L3
states and the devices stayed powered ON. But very late in the suspend
cycle, the kernel tried to disable the MSIs of the PCIe devices, causing
access violations because the resources needed to access the PCIe devices'
config space had already been turned OFF. Series [1] worked around this
issue by not accessing the PCIe config space in the dw_msi_{un}mask_irq()
functions when the link was down, but that approach was not accepted.

Series [2] then implemented the suspend and resume operations using the
syscore framework, which disabled the resources at the end of the suspend
cycle. But that approach did not gain much acceptance either.

Proposal
========

So the proposal here is to just vote for minimal interconnect bandwidth and
not turn OFF the resources if there are active PCIe devices connected to
the controller. This avoids the access violation during suspend and also
saves some power thanks to the lower interconnect bandwidth.

If there are no active PCIe devices connected to the controller, the
resources are turned OFF completely and brought back during resume. This
also saves power on systems that have controllers without any devices
connected.

Testing
=======

This series has been tested on Lenovo Thinkpad X13s.

Thanks,
Mani

[1] https://lore.kernel.org/linux-pci/1656055682-18817-1-git-send-email-quic_krichai@quicinc.com/
[2] https://lore.kernel.org/linux-pci/1663669347-29308-1-git-send-email-quic_krichai@quicinc.com/

Changes in v3:

* Limited comments to 80 columns
* Added error handling in resume_noirq()

Changes in v2:

* Used minimum icc vote to keep data path functional during suspend
* Collected Ack

Manivannan Sadhasivam (1):
  PCI: qcom: Add support for system suspend and resume

 drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v3 1/1] PCI: qcom: Add support for system suspend and resume
  2023-03-27 13:38 [PATCH v3 0/1] PCI: qcom: Add support for system suspend and resume Manivannan Sadhasivam
@ 2023-03-27 13:38 ` Manivannan Sadhasivam
  2023-03-27 15:29   ` [EXT] " Frank Li
  2023-03-29  9:56   ` Johan Hovold
  0 siblings, 2 replies; 13+ messages in thread
From: Manivannan Sadhasivam @ 2023-03-27 13:38 UTC (permalink / raw)
  To: lpieralisi, kw, robh
  Cc: andersson, konrad.dybcio, bhelgaas, linux-pci, linux-arm-msm,
	linux-kernel, quic_krichai, johan+linaro, steev, mka,
	Manivannan Sadhasivam, Dhruva Gole

During system suspend, vote for minimal interconnect bandwidth and also
turn OFF resources such as the clock and PHY if there are no active
devices connected to the controller. For controllers with active
devices, the resources are kept ON, as removing them would trigger an
access violation late in the suspend cycle when the kernel tries to
access the config space of PCIe devices to mask the MSIs.

Also, it is not desirable to put the link into the L2/L3 states, as that
implies the VDD supply will be removed and the devices may go into
powerdown state. This would affect the lifetime of storage devices like
NVMe.

And finally, during resume, turn ON the resources if the controller was
truly suspended (resources OFF) and update the interconnect bandwidth
based on the PCIe Gen speed.

Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
Acked-by: Dhruva Gole <d-gole@ti.com>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index a232b04af048..f33df536d9be 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -227,6 +227,7 @@ struct qcom_pcie {
 	struct gpio_desc *reset;
 	struct icc_path *icc_mem;
 	const struct qcom_pcie_cfg *cfg;
+	bool suspended;
 };
 
 #define to_qcom_pcie(x)		dev_get_drvdata((x)->dev)
@@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct platform_device *pdev)
 	return ret;
 }
 
+static int qcom_pcie_suspend_noirq(struct device *dev)
+{
+	struct qcom_pcie *pcie = dev_get_drvdata(dev);
+	int ret;
+
+	/*
+	 * Set minimum bandwidth required to keep data path functional during
+	 * suspend.
+	 */
+	ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
+	if (ret) {
+		dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
+		return ret;
+	}
+
+	/*
+	 * Turn OFF the resources only for controllers without active PCIe
+	 * devices. For controllers with active devices, the resources are kept
+	 * ON and the link is expected to be in L0/L1 (sub)states.
+	 *
+	 * Turning OFF the resources for controllers with active PCIe devices
+	 * will trigger access violation during the end of the suspend cycle,
+	 * as kernel tries to access the PCIe devices config space for masking
+	 * MSIs.
+	 *
+	 * Also, it is not desirable to put the link into L2/L3 state as that
+	 * implies VDD supply will be removed and the devices may go into
+	 * powerdown state. This will affect the lifetime of the storage devices
+	 * like NVMe.
+	 */
+	if (!dw_pcie_link_up(pcie->pci)) {
+		qcom_pcie_host_deinit(&pcie->pci->pp);
+		pcie->suspended = true;
+	}
+
+	return 0;
+}
+
+static int qcom_pcie_resume_noirq(struct device *dev)
+{
+	struct qcom_pcie *pcie = dev_get_drvdata(dev);
+	int ret;
+
+	if (pcie->suspended) {
+		ret = qcom_pcie_host_init(&pcie->pci->pp);
+		if (ret)
+			return ret;
+
+		pcie->suspended = false;
+	}
+
+	qcom_pcie_icc_update(pcie);
+
+	return 0;
+}
+
 static const struct of_device_id qcom_pcie_match[] = {
 	{ .compatible = "qcom,pcie-apq8064", .data = &cfg_2_1_0 },
 	{ .compatible = "qcom,pcie-apq8084", .data = &cfg_1_0_0 },
@@ -1856,12 +1913,17 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, qcom_fixup_class);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, qcom_fixup_class);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, qcom_fixup_class);
 
+static const struct dev_pm_ops qcom_pcie_pm_ops = {
+	NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_suspend_noirq, qcom_pcie_resume_noirq)
+};
+
 static struct platform_driver qcom_pcie_driver = {
 	.probe = qcom_pcie_probe,
 	.driver = {
 		.name = "qcom-pcie",
 		.suppress_bind_attrs = true,
 		.of_match_table = qcom_pcie_match,
+		.pm = &qcom_pcie_pm_ops,
 	},
 };
 builtin_platform_driver(qcom_pcie_driver);
-- 
2.25.1



* RE: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend and resume
  2023-03-27 13:38 ` [PATCH v3 1/1] " Manivannan Sadhasivam
@ 2023-03-27 15:29   ` Frank Li
  2023-03-29 13:02     ` Manivannan Sadhasivam
  2023-03-29  9:56   ` Johan Hovold
  1 sibling, 1 reply; 13+ messages in thread
From: Frank Li @ 2023-03-27 15:29 UTC (permalink / raw)
  To: Manivannan Sadhasivam, lpieralisi@kernel.org, kw@linux.com,
	robh@kernel.org
  Cc: andersson@kernel.org, konrad.dybcio@linaro.org,
	bhelgaas@google.com, linux-pci@vger.kernel.org,
	linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
	quic_krichai@quicinc.com, johan+linaro@kernel.org, steev@kali.org,
	mka@chromium.org, Dhruva Gole



> -----Original Message-----
> From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> Sent: Monday, March 27, 2023 8:38 AM
> To: lpieralisi@kernel.org; kw@linux.com; robh@kernel.org
> Cc: andersson@kernel.org; konrad.dybcio@linaro.org;
> bhelgaas@google.com; linux-pci@vger.kernel.org; linux-arm-
> msm@vger.kernel.org; linux-kernel@vger.kernel.org;
> quic_krichai@quicinc.com; johan+linaro@kernel.org; steev@kali.org;
> mka@chromium.org; Manivannan Sadhasivam
> <manivannan.sadhasivam@linaro.org>; Dhruva Gole <d-gole@ti.com>
> Subject: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend
> and resume
> 
> Caution: EXT Email
> 
> During the system suspend, vote for minimal interconnect bandwidth and
> also turn OFF the resources like clock and PHY if there are no active
> devices connected to the controller. For the controllers with active
> devices, the resources are kept ON as removing the resources will
> trigger access violation during the late end of suspend cycle as kernel
> tries to access the config space of PCIe devices to mask the MSIs.

I remember running into a similar problem before. It is related to the ASPM
settings of the NVMe device: NVMe tries to use L1.2 at suspend to save
restore time.

It should be the user's decision whether PCIe enters L1.2 (for better resume
time) or L2 (for better power saving). If ASPM is disabled for NVMe, the
NVMe driver will free the MSI IRQ before entering suspend, so the MSI IRQ
disable function will not access config space.

This is just a general comment; it is not specific to this patch. Many
platforms will face a similar problem. Maybe a better solution is needed to
handle L2/L3 for better power saving in the future.

Frank Li
 
> 
> Also, it is not desirable to put the link into L2/L3 state as that
> implies VDD supply will be removed and the devices may go into powerdown
> state. This will affect the lifetime of storage devices like NVMe.
> 
> And finally, during resume, turn ON the resources if the controller was
> truly suspended (resources OFF) and update the interconnect bandwidth
> based on PCIe Gen speed.
> 
> Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
> Acked-by: Dhruva Gole <d-gole@ti.com>
> Signed-off-by: Manivannan Sadhasivam
> <manivannan.sadhasivam@linaro.org>
> ---
>  drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++
>  1 file changed, 62 insertions(+)
> 
> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c
> b/drivers/pci/controller/dwc/pcie-qcom.c
> index a232b04af048..f33df536d9be 100644
> --- a/drivers/pci/controller/dwc/pcie-qcom.c
> +++ b/drivers/pci/controller/dwc/pcie-qcom.c
> @@ -227,6 +227,7 @@ struct qcom_pcie {
>         struct gpio_desc *reset;
>         struct icc_path *icc_mem;
>         const struct qcom_pcie_cfg *cfg;
> +       bool suspended;
>  };
> 
>  #define to_qcom_pcie(x)                dev_get_drvdata((x)->dev)
> @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct
> platform_device *pdev)
>         return ret;
>  }
> 
> +static int qcom_pcie_suspend_noirq(struct device *dev)
> +{
> +       struct qcom_pcie *pcie = dev_get_drvdata(dev);
> +       int ret;
> +
> +       /*
> +        * Set minimum bandwidth required to keep data path functional during
> +        * suspend.
> +        */
> +       ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
> +       if (ret) {
> +               dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
> +               return ret;
> +       }
> +
> +       /*
> +        * Turn OFF the resources only for controllers without active PCIe
> +        * devices. For controllers with active devices, the resources are kept
> +        * ON and the link is expected to be in L0/L1 (sub)states.
> +        *
> +        * Turning OFF the resources for controllers with active PCIe devices
> +        * will trigger access violation during the end of the suspend cycle,
> +        * as kernel tries to access the PCIe devices config space for masking
> +        * MSIs.
> +        *
> +        * Also, it is not desirable to put the link into L2/L3 state as that
> +        * implies VDD supply will be removed and the devices may go into
> +        * powerdown state. This will affect the lifetime of the storage devices
> +        * like NVMe.
> +        */
> +       if (!dw_pcie_link_up(pcie->pci)) {
> +               qcom_pcie_host_deinit(&pcie->pci->pp);
> +               pcie->suspended = true;
> +       }
> +
> +       return 0;
> +}
> +
> +static int qcom_pcie_resume_noirq(struct device *dev)
> +{
> +       struct qcom_pcie *pcie = dev_get_drvdata(dev);
> +       int ret;
> +
> +       if (pcie->suspended) {
> +               ret = qcom_pcie_host_init(&pcie->pci->pp);
> +               if (ret)
> +                       return ret;
> +
> +               pcie->suspended = false;
> +       }
> +
> +       qcom_pcie_icc_update(pcie);
> +
> +       return 0;
> +}
> +
>  static const struct of_device_id qcom_pcie_match[] = {
>         { .compatible = "qcom,pcie-apq8064", .data = &cfg_2_1_0 },
>         { .compatible = "qcom,pcie-apq8084", .data = &cfg_1_0_0 },
> @@ -1856,12 +1913,17 @@
> DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302,
> qcom_fixup_class);
>  DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000,
> qcom_fixup_class);
>  DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001,
> qcom_fixup_class);
> 
> +static const struct dev_pm_ops qcom_pcie_pm_ops = {
> +       NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_suspend_noirq,
> qcom_pcie_resume_noirq)
> +};
> +
>  static struct platform_driver qcom_pcie_driver = {
>         .probe = qcom_pcie_probe,
>         .driver = {
>                 .name = "qcom-pcie",
>                 .suppress_bind_attrs = true,
>                 .of_match_table = qcom_pcie_match,
> +               .pm = &qcom_pcie_pm_ops,
>         },
>  };
>  builtin_platform_driver(qcom_pcie_driver);
> --
> 2.25.1



* Re: [PATCH v3 1/1] PCI: qcom: Add support for system suspend and resume
  2023-03-27 13:38 ` [PATCH v3 1/1] " Manivannan Sadhasivam
  2023-03-27 15:29   ` [EXT] " Frank Li
@ 2023-03-29  9:56   ` Johan Hovold
  2023-03-29 12:52     ` Manivannan Sadhasivam
  1 sibling, 1 reply; 13+ messages in thread
From: Johan Hovold @ 2023-03-29  9:56 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: lpieralisi, kw, robh, andersson, konrad.dybcio, bhelgaas,
	linux-pci, linux-arm-msm, linux-kernel, quic_krichai,
	johan+linaro, steev, mka, Dhruva Gole

On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote:
> During the system suspend, vote for minimal interconnect bandwidth and
> also turn OFF the resources like clock and PHY if there are no active
> devices connected to the controller. For the controllers with active
> devices, the resources are kept ON as removing the resources will
> trigger access violation during the late end of suspend cycle as kernel
> tries to access the config space of PCIe devices to mask the MSIs.
> 
> Also, it is not desirable to put the link into L2/L3 state as that
> implies VDD supply will be removed and the devices may go into powerdown
> state. This will affect the lifetime of storage devices like NVMe.
> 
> And finally, during resume, turn ON the resources if the controller was
> truly suspended (resources OFF) and update the interconnect bandwidth
> based on PCIe Gen speed.
> 
> Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
> Acked-by: Dhruva Gole <d-gole@ti.com>
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>  drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++
>  1 file changed, 62 insertions(+)
> 
> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
> index a232b04af048..f33df536d9be 100644
> --- a/drivers/pci/controller/dwc/pcie-qcom.c
> +++ b/drivers/pci/controller/dwc/pcie-qcom.c
> @@ -227,6 +227,7 @@ struct qcom_pcie {
>  	struct gpio_desc *reset;
>  	struct icc_path *icc_mem;
>  	const struct qcom_pcie_cfg *cfg;
> +	bool suspended;
>  };
>  
>  #define to_qcom_pcie(x)		dev_get_drvdata((x)->dev)
> @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct platform_device *pdev)
>  	return ret;
>  }
>  
> +static int qcom_pcie_suspend_noirq(struct device *dev)
> +{
> +	struct qcom_pcie *pcie = dev_get_drvdata(dev);
> +	int ret;
> +
> +	/*
> +	 * Set minimum bandwidth required to keep data path functional during
> +	 * suspend.
> +	 */
> +	ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));

This isn't really the minimum bandwidth you're setting here.

I think you said off list that you didn't see real impact reducing the
bandwidth, but have you tried requesting the real minimum which would be
kBps_to_icc(1)?

Doing so works fine here with both the CRD and X13s and may result in
some further power savings.

> +	if (ret) {
> +		dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
> +		return ret;
> +	}

Johan


* Re: [PATCH v3 1/1] PCI: qcom: Add support for system suspend and resume
  2023-03-29  9:56   ` Johan Hovold
@ 2023-03-29 12:52     ` Manivannan Sadhasivam
  2023-03-29 12:59       ` Konrad Dybcio
  2023-03-29 13:19       ` Johan Hovold
  0 siblings, 2 replies; 13+ messages in thread
From: Manivannan Sadhasivam @ 2023-03-29 12:52 UTC (permalink / raw)
  To: Johan Hovold
  Cc: lpieralisi, kw, robh, andersson, konrad.dybcio, bhelgaas,
	linux-pci, linux-arm-msm, linux-kernel, quic_krichai,
	johan+linaro, steev, mka, Dhruva Gole

On Wed, Mar 29, 2023 at 11:56:43AM +0200, Johan Hovold wrote:
> On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote:
> > During the system suspend, vote for minimal interconnect bandwidth and
> > also turn OFF the resources like clock and PHY if there are no active
> > devices connected to the controller. For the controllers with active
> > devices, the resources are kept ON as removing the resources will
> > trigger access violation during the late end of suspend cycle as kernel
> > tries to access the config space of PCIe devices to mask the MSIs.
> > 
> > Also, it is not desirable to put the link into L2/L3 state as that
> > implies VDD supply will be removed and the devices may go into powerdown
> > state. This will affect the lifetime of storage devices like NVMe.
> > 
> > And finally, during resume, turn ON the resources if the controller was
> > truly suspended (resources OFF) and update the interconnect bandwidth
> > based on PCIe Gen speed.
> > 
> > Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
> > Acked-by: Dhruva Gole <d-gole@ti.com>
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > ---
> >  drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++
> >  1 file changed, 62 insertions(+)
> > 
> > diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
> > index a232b04af048..f33df536d9be 100644
> > --- a/drivers/pci/controller/dwc/pcie-qcom.c
> > +++ b/drivers/pci/controller/dwc/pcie-qcom.c
> > @@ -227,6 +227,7 @@ struct qcom_pcie {
> >  	struct gpio_desc *reset;
> >  	struct icc_path *icc_mem;
> >  	const struct qcom_pcie_cfg *cfg;
> > +	bool suspended;
> >  };
> >  
> >  #define to_qcom_pcie(x)		dev_get_drvdata((x)->dev)
> > @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct platform_device *pdev)
> >  	return ret;
> >  }
> >  
> > +static int qcom_pcie_suspend_noirq(struct device *dev)
> > +{
> > +	struct qcom_pcie *pcie = dev_get_drvdata(dev);
> > +	int ret;
> > +
> > +	/*
> > +	 * Set minimum bandwidth required to keep data path functional during
> > +	 * suspend.
> > +	 */
> > +	ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
> 
> This isn't really the minimum bandwidth you're setting here.
> 
> I think you said off list that you didn't see real impact reducing the
> bandwidth, but have you tried requesting the real minimum which would be
> kBps_to_icc(1)?
> 
> Doing so works fine here with both the CRD and X13s and may result in
> some further power savings.
> 

No, we shouldn't be setting a random value as the bandwidth. The reason is
that these values are computed by the bus team based on the requirements of
the interconnect paths (clock, voltage, etc.) for the actual PCIe Gen speeds.
I don't know about the potential implications even if it happens to work.

- Mani

> > +	if (ret) {
> > +		dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
> > +		return ret;
> > +	}
> 
> Johan

-- 
மணிவண்ணன் சதாசிவம்


* Re: [PATCH v3 1/1] PCI: qcom: Add support for system suspend and resume
  2023-03-29 12:52     ` Manivannan Sadhasivam
@ 2023-03-29 12:59       ` Konrad Dybcio
  2023-03-29 13:19       ` Johan Hovold
  1 sibling, 0 replies; 13+ messages in thread
From: Konrad Dybcio @ 2023-03-29 12:59 UTC (permalink / raw)
  To: Manivannan Sadhasivam, Johan Hovold
  Cc: lpieralisi, kw, robh, andersson, bhelgaas, linux-pci,
	linux-arm-msm, linux-kernel, quic_krichai, johan+linaro, steev,
	mka, Dhruva Gole



On 29.03.2023 14:52, Manivannan Sadhasivam wrote:
> On Wed, Mar 29, 2023 at 11:56:43AM +0200, Johan Hovold wrote:
>> On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote:
>>> During the system suspend, vote for minimal interconnect bandwidth and
>>> also turn OFF the resources like clock and PHY if there are no active
>>> devices connected to the controller. For the controllers with active
>>> devices, the resources are kept ON as removing the resources will
>>> trigger access violation during the late end of suspend cycle as kernel
>>> tries to access the config space of PCIe devices to mask the MSIs.
>>>
>>> Also, it is not desirable to put the link into L2/L3 state as that
>>> implies VDD supply will be removed and the devices may go into powerdown
>>> state. This will affect the lifetime of storage devices like NVMe.
>>>
>>> And finally, during resume, turn ON the resources if the controller was
>>> truly suspended (resources OFF) and update the interconnect bandwidth
>>> based on PCIe Gen speed.
>>>
>>> Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
>>> Acked-by: Dhruva Gole <d-gole@ti.com>
>>> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>>> ---
>>>  drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++
>>>  1 file changed, 62 insertions(+)
>>>
>>> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
>>> index a232b04af048..f33df536d9be 100644
>>> --- a/drivers/pci/controller/dwc/pcie-qcom.c
>>> +++ b/drivers/pci/controller/dwc/pcie-qcom.c
>>> @@ -227,6 +227,7 @@ struct qcom_pcie {
>>>  	struct gpio_desc *reset;
>>>  	struct icc_path *icc_mem;
>>>  	const struct qcom_pcie_cfg *cfg;
>>> +	bool suspended;
>>>  };
>>>  
>>>  #define to_qcom_pcie(x)		dev_get_drvdata((x)->dev)
>>> @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct platform_device *pdev)
>>>  	return ret;
>>>  }
>>>  
>>> +static int qcom_pcie_suspend_noirq(struct device *dev)
>>> +{
>>> +	struct qcom_pcie *pcie = dev_get_drvdata(dev);
>>> +	int ret;
>>> +
>>> +	/*
>>> +	 * Set minimum bandwidth required to keep data path functional during
>>> +	 * suspend.
>>> +	 */
>>> +	ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
>>
>> This isn't really the minimum bandwidth you're setting here.
>>
>> I think you said off list that you didn't see real impact reducing the
>> bandwidth, but have you tried requesting the real minimum which would be
>> kBps_to_icc(1)?
>>
>> Doing so works fine here with both the CRD and X13s and may result in
>> some further power savings.
>>
> 
> No, we shouldn't be setting random value as the bandwidth. Reason is, these
> values are computed by the bus team based on the requirement of the interconnect
> paths (clock, voltage etc...) with actual PCIe Gen speeds.
Should it then be variable, based on the current link gen?

Konrad
> I don't know about
> the potential implication even if it happens to work.
> 
> - Mani
> 
>>> +	if (ret) {
>>> +		dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
>>> +		return ret;
>>> +	}
>>
>> Johan
> 


* Re: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend and resume
  2023-03-27 15:29   ` [EXT] " Frank Li
@ 2023-03-29 13:02     ` Manivannan Sadhasivam
  2023-03-29 13:04       ` Konrad Dybcio
  2023-03-29 14:51       ` Frank Li
  0 siblings, 2 replies; 13+ messages in thread
From: Manivannan Sadhasivam @ 2023-03-29 13:02 UTC (permalink / raw)
  To: Frank Li
  Cc: lpieralisi@kernel.org, kw@linux.com, robh@kernel.org,
	andersson@kernel.org, konrad.dybcio@linaro.org,
	bhelgaas@google.com, linux-pci@vger.kernel.org,
	linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
	quic_krichai@quicinc.com, johan+linaro@kernel.org, steev@kali.org,
	mka@chromium.org, Dhruva Gole

On Mon, Mar 27, 2023 at 03:29:54PM +0000, Frank Li wrote:
> 
> 
> > -----Original Message-----
> > From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > Sent: Monday, March 27, 2023 8:38 AM
> > To: lpieralisi@kernel.org; kw@linux.com; robh@kernel.org
> > Cc: andersson@kernel.org; konrad.dybcio@linaro.org;
> > bhelgaas@google.com; linux-pci@vger.kernel.org; linux-arm-
> > msm@vger.kernel.org; linux-kernel@vger.kernel.org;
> > quic_krichai@quicinc.com; johan+linaro@kernel.org; steev@kali.org;
> > mka@chromium.org; Manivannan Sadhasivam
> > <manivannan.sadhasivam@linaro.org>; Dhruva Gole <d-gole@ti.com>
> > Subject: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend
> > and resume
> > 
> > Caution: EXT Email
> > 
> > During the system suspend, vote for minimal interconnect bandwidth and
> > also turn OFF the resources like clock and PHY if there are no active
> > devices connected to the controller. For the controllers with active
> > devices, the resources are kept ON as removing the resources will
> > trigger access violation during the late end of suspend cycle as kernel
> > tries to access the config space of PCIe devices to mask the MSIs.
> 
> I remember running into a similar problem before. It is related to the ASPM
> settings of the NVMe device: NVMe tries to use L1.2 at suspend to save
> restore time.
> 
> It should be the user's decision whether PCIe enters L1.2 (for better resume
> time) or L2 (for better power saving). If ASPM is disabled for NVMe, the
> NVMe driver will free the MSI IRQ before entering suspend, so the MSI IRQ
> disable function will not access config space.
> 

The NVMe driver will only shut down the device if ASPM is completely disabled
in the kernel. It also takes the powerdown path on some Intel platforms. For
others, it keeps the device powered on and expects the power saving to come
from ASPM.

> This is just a general comment; it is not specific to this patch. Many
> platforms will face a similar problem. Maybe a better solution is needed to
> handle L2/L3 for better power saving in the future.
> 

The only argument I hear from them is that powering down the NVMe device
during suspend may deteriorate its lifetime, as the number of suspend cycles
is going to be high.

- Mani

> Frank Li
>  
> > 
> > Also, it is not desirable to put the link into L2/L3 state as that
> > implies VDD supply will be removed and the devices may go into powerdown
> > state. This will affect the lifetime of storage devices like NVMe.
> > 
> > And finally, during resume, turn ON the resources if the controller was
> > truly suspended (resources OFF) and update the interconnect bandwidth
> > based on PCIe Gen speed.
> > 
> > Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
> > Acked-by: Dhruva Gole <d-gole@ti.com>
> > Signed-off-by: Manivannan Sadhasivam
> > <manivannan.sadhasivam@linaro.org>
> > ---
> >  drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++
> >  1 file changed, 62 insertions(+)
> > 
> > diff --git a/drivers/pci/controller/dwc/pcie-qcom.c
> > b/drivers/pci/controller/dwc/pcie-qcom.c
> > index a232b04af048..f33df536d9be 100644
> > --- a/drivers/pci/controller/dwc/pcie-qcom.c
> > +++ b/drivers/pci/controller/dwc/pcie-qcom.c
> > @@ -227,6 +227,7 @@ struct qcom_pcie {
> >         struct gpio_desc *reset;
> >         struct icc_path *icc_mem;
> >         const struct qcom_pcie_cfg *cfg;
> > +       bool suspended;
> >  };
> > 
> >  #define to_qcom_pcie(x)                dev_get_drvdata((x)->dev)
> > @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct
> > platform_device *pdev)
> >         return ret;
> >  }
> > 
> > +static int qcom_pcie_suspend_noirq(struct device *dev)
> > +{
> > +       struct qcom_pcie *pcie = dev_get_drvdata(dev);
> > +       int ret;
> > +
> > +       /*
> > +        * Set minimum bandwidth required to keep data path functional during
> > +        * suspend.
> > +        */
> > +       ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
> > +       if (ret) {
> > +               dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
> > +               return ret;
> > +       }
> > +
> > +       /*
> > +        * Turn OFF the resources only for controllers without active PCIe
> > +        * devices. For controllers with active devices, the resources are kept
> > +        * ON and the link is expected to be in L0/L1 (sub)states.
> > +        *
> > +        * Turning OFF the resources for controllers with active PCIe devices
> > +        * will trigger access violation during the end of the suspend cycle,
> > +        * as kernel tries to access the PCIe devices config space for masking
> > +        * MSIs.
> > +        *
> > +        * Also, it is not desirable to put the link into L2/L3 state as that
> > +        * implies VDD supply will be removed and the devices may go into
> > +        * powerdown state. This will affect the lifetime of the storage devices
> > +        * like NVMe.
> > +        */
> > +       if (!dw_pcie_link_up(pcie->pci)) {
> > +               qcom_pcie_host_deinit(&pcie->pci->pp);
> > +               pcie->suspended = true;
> > +       }
> > +
> > +       return 0;
> > +}
> > +
> > +static int qcom_pcie_resume_noirq(struct device *dev)
> > +{
> > +       struct qcom_pcie *pcie = dev_get_drvdata(dev);
> > +       int ret;
> > +
> > +       if (pcie->suspended) {
> > +               ret = qcom_pcie_host_init(&pcie->pci->pp);
> > +               if (ret)
> > +                       return ret;
> > +
> > +               pcie->suspended = false;
> > +       }
> > +
> > +       qcom_pcie_icc_update(pcie);
> > +
> > +       return 0;
> > +}
> > +
> >  static const struct of_device_id qcom_pcie_match[] = {
> >         { .compatible = "qcom,pcie-apq8064", .data = &cfg_2_1_0 },
> >         { .compatible = "qcom,pcie-apq8084", .data = &cfg_1_0_0 },
> > @@ -1856,12 +1913,17 @@
> > DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302,
> > qcom_fixup_class);
> >  DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000,
> > qcom_fixup_class);
> >  DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001,
> > qcom_fixup_class);
> > 
> > +static const struct dev_pm_ops qcom_pcie_pm_ops = {
> > +       NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_suspend_noirq,
> > qcom_pcie_resume_noirq)
> > +};
> > +
> >  static struct platform_driver qcom_pcie_driver = {
> >         .probe = qcom_pcie_probe,
> >         .driver = {
> >                 .name = "qcom-pcie",
> >                 .suppress_bind_attrs = true,
> >                 .of_match_table = qcom_pcie_match,
> > +               .pm = &qcom_pcie_pm_ops,
> >         },
> >  };
> >  builtin_platform_driver(qcom_pcie_driver);
> > --
> > 2.25.1
> 

-- 
மணிவண்ணன் சதாசிவம்


* Re: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend and resume
  2023-03-29 13:02     ` Manivannan Sadhasivam
@ 2023-03-29 13:04       ` Konrad Dybcio
  2023-03-29 14:51       ` Frank Li
  1 sibling, 0 replies; 13+ messages in thread
From: Konrad Dybcio @ 2023-03-29 13:04 UTC (permalink / raw)
  To: Manivannan Sadhasivam, Frank Li
  Cc: lpieralisi@kernel.org, kw@linux.com, robh@kernel.org,
	andersson@kernel.org, bhelgaas@google.com,
	linux-pci@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-kernel@vger.kernel.org, quic_krichai@quicinc.com,
	johan+linaro@kernel.org, steev@kali.org, mka@chromium.org,
	Dhruva Gole



On 29.03.2023 15:02, Manivannan Sadhasivam wrote:
> On Mon, Mar 27, 2023 at 03:29:54PM +0000, Frank Li wrote:
>>
>>
>>> -----Original Message-----
>>> From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>>> Sent: Monday, March 27, 2023 8:38 AM
>>> To: lpieralisi@kernel.org; kw@linux.com; robh@kernel.org
>>> Cc: andersson@kernel.org; konrad.dybcio@linaro.org;
>>> bhelgaas@google.com; linux-pci@vger.kernel.org; linux-arm-
>>> msm@vger.kernel.org; linux-kernel@vger.kernel.org;
>>> quic_krichai@quicinc.com; johan+linaro@kernel.org; steev@kali.org;
>>> mka@chromium.org; Manivannan Sadhasivam
>>> <manivannan.sadhasivam@linaro.org>; Dhruva Gole <d-gole@ti.com>
>>> Subject: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend
>>> and resume
>>>
>>> Caution: EXT Email
>>>
>>> During the system suspend, vote for minimal interconnect bandwidth and
>>> also turn OFF the resources like clock and PHY if there are no active
>>> devices connected to the controller. For the controllers with active
>>> devices, the resources are kept ON as removing the resources will
>>> trigger an access violation late in the suspend cycle, when the kernel
>>> tries to access the config space of PCIe devices to mask the MSIs.
>>
>> I remember I ran into a similar problem before. It is related to the ASPM
>> settings of NVMe. NVMe tries to use L1.2 at suspend to save restore time.
>> 
>> It should be the user's decision whether PCIe enters L1.2 (for better resume
>> time) or L2 (for better power saving). If ASPM is disabled, the NVMe driver
>> will free the MSI IRQs before entering suspend, so the MSI IRQ disable
>> function does not access the config space.
>>
> 
> The NVMe driver will only shut down the device if ASPM is completely disabled
> in the kernel. It also takes the powerdown path for some Intel platforms,
> though. For others, it keeps the device powered on and expects power saving
> from ASPM.
> 
>> This is just a general comment; it is not specific to this patch. Many
>> platforms will face a similar problem. We may need a better solution for
>> handling L2/L3 to get better power saving in the future.
>>
> 
> The only argument I hear from them is that, when the NVMe device gets powered
> down during suspend, it may deteriorate the lifetime of the device, as the
> number of suspend cycles is going to be high.
I think I asked that question before, but... do we know what Windows/macOS do?

Konrad
> 
> - Mani
> 
>> Frank Li
>>  
>>>
>>> Also, it is not desirable to put the link into L2/L3 state as that
>>> implies VDD supply will be removed and the devices may go into powerdown
>>> state. This will affect the lifetime of storage devices like NVMe.
>>>
>>> And finally, during resume, turn ON the resources if the controller was
>>> truly suspended (resources OFF) and update the interconnect bandwidth
>>> based on PCIe Gen speed.
>>>
>>> Suggested-by: Krishna chaitanya chundru <quic_krichai@quicinc.com>
>>> Acked-by: Dhruva Gole <d-gole@ti.com>
>>> Signed-off-by: Manivannan Sadhasivam
>>> <manivannan.sadhasivam@linaro.org>
>>> ---
>>>  drivers/pci/controller/dwc/pcie-qcom.c | 62 ++++++++++++++++++++++++++
>>>  1 file changed, 62 insertions(+)
>>>
>>> diff --git a/drivers/pci/controller/dwc/pcie-qcom.c
>>> b/drivers/pci/controller/dwc/pcie-qcom.c
>>> index a232b04af048..f33df536d9be 100644
>>> --- a/drivers/pci/controller/dwc/pcie-qcom.c
>>> +++ b/drivers/pci/controller/dwc/pcie-qcom.c
>>> @@ -227,6 +227,7 @@ struct qcom_pcie {
>>>         struct gpio_desc *reset;
>>>         struct icc_path *icc_mem;
>>>         const struct qcom_pcie_cfg *cfg;
>>> +       bool suspended;
>>>  };
>>>
>>>  #define to_qcom_pcie(x)                dev_get_drvdata((x)->dev)
>>> @@ -1820,6 +1821,62 @@ static int qcom_pcie_probe(struct
>>> platform_device *pdev)
>>>         return ret;
>>>  }
>>>
>>> +static int qcom_pcie_suspend_noirq(struct device *dev)
>>> +{
>>> +       struct qcom_pcie *pcie = dev_get_drvdata(dev);
>>> +       int ret;
>>> +
>>> +       /*
>>> +        * Set minimum bandwidth required to keep data path functional during
>>> +        * suspend.
>>> +        */
>>> +       ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
>>> +       if (ret) {
>>> +               dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret);
>>> +               return ret;
>>> +       }
>>> +
>>> +       /*
>>> +        * Turn OFF the resources only for controllers without active PCIe
>>> +        * devices. For controllers with active devices, the resources are kept
>>> +        * ON and the link is expected to be in L0/L1 (sub)states.
>>> +        *
>>> +        * Turning OFF the resources for controllers with active PCIe devices
>>> +        * will trigger access violation during the end of the suspend cycle,
>>> +        * as kernel tries to access the PCIe devices config space for masking
>>> +        * MSIs.
>>> +        *
>>> +        * Also, it is not desirable to put the link into L2/L3 state as that
>>> +        * implies VDD supply will be removed and the devices may go into
>>> +        * powerdown state. This will affect the lifetime of the storage devices
>>> +        * like NVMe.
>>> +        */
>>> +       if (!dw_pcie_link_up(pcie->pci)) {
>>> +               qcom_pcie_host_deinit(&pcie->pci->pp);
>>> +               pcie->suspended = true;
>>> +       }
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +static int qcom_pcie_resume_noirq(struct device *dev)
>>> +{
>>> +       struct qcom_pcie *pcie = dev_get_drvdata(dev);
>>> +       int ret;
>>> +
>>> +       if (pcie->suspended) {
>>> +               ret = qcom_pcie_host_init(&pcie->pci->pp);
>>> +               if (ret)
>>> +                       return ret;
>>> +
>>> +               pcie->suspended = false;
>>> +       }
>>> +
>>> +       qcom_pcie_icc_update(pcie);
>>> +
>>> +       return 0;
>>> +}
>>> +
>>>  static const struct of_device_id qcom_pcie_match[] = {
>>>         { .compatible = "qcom,pcie-apq8064", .data = &cfg_2_1_0 },
>>>         { .compatible = "qcom,pcie-apq8084", .data = &cfg_1_0_0 },
>>> @@ -1856,12 +1913,17 @@
>>> DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302,
>>> qcom_fixup_class);
>>>  DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000,
>>> qcom_fixup_class);
>>>  DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001,
>>> qcom_fixup_class);
>>>
>>> +static const struct dev_pm_ops qcom_pcie_pm_ops = {
>>> +       NOIRQ_SYSTEM_SLEEP_PM_OPS(qcom_pcie_suspend_noirq,
>>> qcom_pcie_resume_noirq)
>>> +};
>>> +
>>>  static struct platform_driver qcom_pcie_driver = {
>>>         .probe = qcom_pcie_probe,
>>>         .driver = {
>>>                 .name = "qcom-pcie",
>>>                 .suppress_bind_attrs = true,
>>>                 .of_match_table = qcom_pcie_match,
>>> +               .pm = &qcom_pcie_pm_ops,
>>>         },
>>>  };
>>>  builtin_platform_driver(qcom_pcie_driver);
>>> --
>>> 2.25.1
>>
> 


* Re: [PATCH v3 1/1] PCI: qcom: Add support for system suspend and resume
  2023-03-29 12:52     ` Manivannan Sadhasivam
  2023-03-29 12:59       ` Konrad Dybcio
@ 2023-03-29 13:19       ` Johan Hovold
  2023-03-29 14:01         ` Manivannan Sadhasivam
  1 sibling, 1 reply; 13+ messages in thread
From: Johan Hovold @ 2023-03-29 13:19 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: lpieralisi, kw, robh, andersson, konrad.dybcio, bhelgaas,
	linux-pci, linux-arm-msm, linux-kernel, quic_krichai,
	johan+linaro, steev, mka, Dhruva Gole

On Wed, Mar 29, 2023 at 06:22:32PM +0530, Manivannan Sadhasivam wrote:
> On Wed, Mar 29, 2023 at 11:56:43AM +0200, Johan Hovold wrote:
> > On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote:
 
> > > +static int qcom_pcie_suspend_noirq(struct device *dev)
> > > +{
> > > +	struct qcom_pcie *pcie = dev_get_drvdata(dev);
> > > +	int ret;
> > > +
> > > +	/*
> > > +	 * Set minimum bandwidth required to keep data path functional during
> > > +	 * suspend.
> > > +	 */
> > > +	ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
> > 
> > This isn't really the minimum bandwidth you're setting here.
> > 
> > I think you said off list that you didn't see real impact reducing the
> > bandwidth, but have you tried requesting the real minimum which would be
> > kBps_to_icc(1)?
> > 
> > Doing so works fine here with both the CRD and X13s and may result in
> > some further power savings.
> > 
> 
> No, we shouldn't be setting a random value as the bandwidth. The reason is
> that these values are computed by the bus team based on the requirements of
> the interconnect paths (clock, voltage, etc.) with actual PCIe Gen speeds.
> I don't know about the potential implications even if it happens to work.

Why would you need PCIe gen1 speed during suspend?

These numbers are already somewhat random as, for example, the vendor
driver is requesting 500 kBps (800 peak) during runtime, while we are
now requesting five times that during suspend (the vendor driver gets a
away with 0).

Sure, this indicates that the interconnect driver is broken and we
should indeed be using values that at least make some sense (and
eventually fix the interconnect driver).

Just not sure that you need to request that much bandwidth during
suspend (e.g. for just a couple of register accesses).

Johan


* Re: [PATCH v3 1/1] PCI: qcom: Add support for system suspend and resume
  2023-03-29 13:19       ` Johan Hovold
@ 2023-03-29 14:01         ` Manivannan Sadhasivam
  2023-03-29 14:42           ` Johan Hovold
  0 siblings, 1 reply; 13+ messages in thread
From: Manivannan Sadhasivam @ 2023-03-29 14:01 UTC (permalink / raw)
  To: Johan Hovold
  Cc: lpieralisi, kw, robh, andersson, konrad.dybcio, bhelgaas,
	linux-pci, linux-arm-msm, linux-kernel, quic_krichai,
	johan+linaro, steev, mka, Dhruva Gole

On Wed, Mar 29, 2023 at 03:19:51PM +0200, Johan Hovold wrote:
> On Wed, Mar 29, 2023 at 06:22:32PM +0530, Manivannan Sadhasivam wrote:
> > On Wed, Mar 29, 2023 at 11:56:43AM +0200, Johan Hovold wrote:
> > > On Mon, Mar 27, 2023 at 07:08:24PM +0530, Manivannan Sadhasivam wrote:
>  
> > > > +static int qcom_pcie_suspend_noirq(struct device *dev)
> > > > +{
> > > > +	struct qcom_pcie *pcie = dev_get_drvdata(dev);
> > > > +	int ret;
> > > > +
> > > > +	/*
> > > > +	 * Set minimum bandwidth required to keep data path functional during
> > > > +	 * suspend.
> > > > +	 */
> > > > +	ret = icc_set_bw(pcie->icc_mem, 0, MBps_to_icc(250));
> > > 
> > > This isn't really the minimum bandwidth you're setting here.
> > > 
> > > I think you said off list that you didn't see real impact reducing the
> > > bandwidth, but have you tried requesting the real minimum which would be
> > > kBps_to_icc(1)?
> > > 
> > > Doing so works fine here with both the CRD and X13s and may result in
> > > some further power savings.
> > > 
> > 
> > No, we shouldn't be setting a random value as the bandwidth. The reason is
> > that these values are computed by the bus team based on the requirements of
> > the interconnect paths (clock, voltage, etc.) with actual PCIe Gen speeds.
> > I don't know about the potential implications even if it happens to work.
> 
> Why would you need PCIe gen1 speed during suspend?
> 

That's the suggestion I got from the Qcom PCIe team. But I didn't compare the
value you added in the icc support patch with downstream. More below...

> These numbers are already somewhat random as, for example, the vendor
> driver is requesting 500 kBps (800 peak) during runtime, while we are
> now requesting five times that during suspend (the vendor driver gets a
> away with 0).
> 

Hmm, then I should've asked you this question when you added icc support.
I thought you inherited those values from downstream but apparently not.
Even in downstream they are using different bw votes for different platforms.
I will touch base with PCIe and ICC teams to find out the actual value that
needs to be used.

Regarding the 0 icc vote: downstream puts all the devices in D3cold (poweroff)
state during suspend. So for them a 0 icc vote will work, but not for us, as we
need to keep the device and link intact.

- Mani

> Sure, this indicates that the interconnect driver is broken and we
> should indeed be using values that at least make some sense (and
> eventually fix the interconnect driver).
> 
> Just not sure that you need to request that much bandwidth during
> suspend (e.g. for just a couple of register accesses).
> 
> Johan

-- 
மணிவண்ணன் சதாசிவம்


* Re: [PATCH v3 1/1] PCI: qcom: Add support for system suspend and resume
  2023-03-29 14:01         ` Manivannan Sadhasivam
@ 2023-03-29 14:42           ` Johan Hovold
  2023-03-29 16:37             ` Manivannan Sadhasivam
  0 siblings, 1 reply; 13+ messages in thread
From: Johan Hovold @ 2023-03-29 14:42 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: lpieralisi, kw, robh, andersson, konrad.dybcio, bhelgaas,
	linux-pci, linux-arm-msm, linux-kernel, quic_krichai,
	johan+linaro, steev, mka, Dhruva Gole

On Wed, Mar 29, 2023 at 07:31:50PM +0530, Manivannan Sadhasivam wrote:
> On Wed, Mar 29, 2023 at 03:19:51PM +0200, Johan Hovold wrote:
> > On Wed, Mar 29, 2023 at 06:22:32PM +0530, Manivannan Sadhasivam wrote:

> > Why would you need PCIe gen1 speed during suspend?
> 
> That's the suggestion I got from the Qcom PCIe team. But I didn't compare the
> value you added in the icc support patch with downstream. More below...
> 
> > These numbers are already somewhat random as, for example, the vendor
> > driver is requesting 500 kBps (800 peak) during runtime, while we are
> > now requesting five times that during suspend (the vendor driver gets
> > away with 0).
> 
> Hmm, then I should've asked you this question when you added icc support.
> I thought you inherited those values from downstream but apparently not.
> Even in downstream they are using different bw votes for different platforms.
> I will touch base with PCIe and ICC teams to find out the actual value that
> needs to be used.

We discussed things at length at the time, but perhaps it was before you
joined the project. As I alluded to above, we should not play the game of
using arbitrary numbers but instead fix the interconnect driver so that
it can map the interconnect values in kBps to something that makes sense
for the Qualcomm hardware. Anything else is not acceptable for upstream.

Johan


* RE: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend and resume
  2023-03-29 13:02     ` Manivannan Sadhasivam
  2023-03-29 13:04       ` Konrad Dybcio
@ 2023-03-29 14:51       ` Frank Li
  1 sibling, 0 replies; 13+ messages in thread
From: Frank Li @ 2023-03-29 14:51 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: lpieralisi@kernel.org, kw@linux.com, robh@kernel.org,
	andersson@kernel.org, konrad.dybcio@linaro.org,
	bhelgaas@google.com, linux-pci@vger.kernel.org,
	linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
	quic_krichai@quicinc.com, johan+linaro@kernel.org, steev@kali.org,
	mka@chromium.org, Dhruva Gole



> -----Original Message-----
> From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> Sent: Wednesday, March 29, 2023 8:03 AM
> To: Frank Li <frank.li@nxp.com>
> Cc: lpieralisi@kernel.org; kw@linux.com; robh@kernel.org;
> andersson@kernel.org; konrad.dybcio@linaro.org; bhelgaas@google.com;
> linux-pci@vger.kernel.org; linux-arm-msm@vger.kernel.org; linux-
> kernel@vger.kernel.org; quic_krichai@quicinc.com; johan+linaro@kernel.org;
> steev@kali.org; mka@chromium.org; Dhruva Gole <d-gole@ti.com>
> Subject: Re: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend
> and resume
> 
> Caution: EXT Email
> 
> On Mon, Mar 27, 2023 at 03:29:54PM +0000, Frank Li wrote:
> >
> >
> > > -----Original Message-----
> > > From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > > Sent: Monday, March 27, 2023 8:38 AM
> > > To: lpieralisi@kernel.org; kw@linux.com; robh@kernel.org
> > > Cc: andersson@kernel.org; konrad.dybcio@linaro.org;
> > > bhelgaas@google.com; linux-pci@vger.kernel.org; linux-arm-
> > > msm@vger.kernel.org; linux-kernel@vger.kernel.org;
> > > quic_krichai@quicinc.com; johan+linaro@kernel.org; steev@kali.org;
> > > mka@chromium.org; Manivannan Sadhasivam
> > > <manivannan.sadhasivam@linaro.org>; Dhruva Gole <d-gole@ti.com>
> > > Subject: [EXT] [PATCH v3 1/1] PCI: qcom: Add support for system suspend
> > > and resume
> > >
> > > Caution: EXT Email
> > >
> > > During the system suspend, vote for minimal interconnect bandwidth and
> > > also turn OFF the resources like clock and PHY if there are no active
> > > devices connected to the controller. For the controllers with active
> > > devices, the resources are kept ON as removing the resources will
> > > trigger an access violation late in the suspend cycle, when the kernel
> > > tries to access the config space of PCIe devices to mask the MSIs.
> >
> > I remember I ran into a similar problem before. It is related to the ASPM
> > settings of NVMe. NVMe tries to use L1.2 at suspend to save restore time.
> >
> > It should be the user's decision whether PCIe enters L1.2 (for better
> > resume time) or L2 (for better power saving). If ASPM is disabled, the
> > NVMe driver will free the MSI IRQs before entering suspend, so the MSI
> > IRQ disable function does not access the config space.
> >
> 
> The NVMe driver will only shut down the device if ASPM is completely disabled
> in the kernel. It also takes the powerdown path for some Intel platforms,
> though. For others, it keeps the device powered on and expects power saving
> from ASPM.

It appears that not every device is compatible with L1.2 ASPM.

The PCIe controller driver should manage this situation by transitioning
devices to L2/L3 when the system is suspended. However, I am unsure of the
appropriate method for handling this case.

> 
> > This is just a general comment; it is not specific to this patch. Many
> > platforms will face a similar problem. We may need a better solution for
> > handling L2/L3 to get better power saving in the future.
> >
> 
> The only argument I hear from them is that, when the NVMe device gets powered
> down during suspend, it may deteriorate the lifetime of the device, as the
> number of suspend cycles is going to be high.
> 
> - Mani

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v3 1/1] PCI: qcom: Add support for system suspend and resume
  2023-03-29 14:42           ` Johan Hovold
@ 2023-03-29 16:37             ` Manivannan Sadhasivam
  0 siblings, 0 replies; 13+ messages in thread
From: Manivannan Sadhasivam @ 2023-03-29 16:37 UTC (permalink / raw)
  To: Johan Hovold
  Cc: lpieralisi, kw, robh, andersson, konrad.dybcio, bhelgaas,
	linux-pci, linux-arm-msm, linux-kernel, quic_krichai,
	johan+linaro, steev, mka, Dhruva Gole

On Wed, Mar 29, 2023 at 04:42:23PM +0200, Johan Hovold wrote:
> On Wed, Mar 29, 2023 at 07:31:50PM +0530, Manivannan Sadhasivam wrote:
> > On Wed, Mar 29, 2023 at 03:19:51PM +0200, Johan Hovold wrote:
> > > On Wed, Mar 29, 2023 at 06:22:32PM +0530, Manivannan Sadhasivam wrote:
> 
> > > Why would you need PCIe gen1 speed during suspend?
> > 
> > That's the suggestion I got from the Qcom PCIe team. But I didn't compare
> > the value you added in the icc support patch with downstream. More below...
> > 
> > > These numbers are already somewhat random as, for example, the vendor
> > > driver is requesting 500 kBps (800 peak) during runtime, while we are
> > > now requesting five times that during suspend (the vendor driver gets
> > > away with 0).
> > 
> > Hmm, then I should've asked you this question when you added icc support.
> > I thought you inherited those values from downstream but apparently not.
> > Even in downstream they are using different bw votes for different platforms.
> > I will touch base with PCIe and ICC teams to find out the actual value that
> > needs to be used.
> 
> We discussed things at length at the time, but perhaps it was before you
> joined the project.

Yeah, could be.

> As I alluded to above, we should not play the game of
> using arbitrary numbers but instead fix the interconnect driver so that
> it can map the interconnect values in kBps to something that makes sense
> for the Qualcomm hardware. Anything else is not acceptable for upstream.
> 

Agree. I've started the discussion regarding this and will get back once I have
answers.

- Mani

> Johan

-- 
மணிவண்ணன் சதாசிவம்


end of thread, other threads:[~2023-03-29 16:37 UTC | newest]

Thread overview: 13+ messages
2023-03-27 13:38 [PATCH v3 0/1] PCI: qcom: Add support for system suspend and resume Manivannan Sadhasivam
2023-03-27 13:38 ` [PATCH v3 1/1] " Manivannan Sadhasivam
2023-03-27 15:29   ` [EXT] " Frank Li
2023-03-29 13:02     ` Manivannan Sadhasivam
2023-03-29 13:04       ` Konrad Dybcio
2023-03-29 14:51       ` Frank Li
2023-03-29  9:56   ` Johan Hovold
2023-03-29 12:52     ` Manivannan Sadhasivam
2023-03-29 12:59       ` Konrad Dybcio
2023-03-29 13:19       ` Johan Hovold
2023-03-29 14:01         ` Manivannan Sadhasivam
2023-03-29 14:42           ` Johan Hovold
2023-03-29 16:37             ` Manivannan Sadhasivam
