From mboxrd@z Thu Jan  1 00:00:00 1970
From: Lin Ming
Subject: Re: [RFC PATCH] PCIe: Add PCIe runtime D3cold support
Date: Mon, 16 Apr 2012 08:48:31 +0800
Message-ID: <1334537311.11188.118.camel@minggr>
References: <4F8790F6.5080408@intel.com> <201204132141.58063.rjw@sisk.pl>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
Return-path:
In-Reply-To: <201204132141.58063.rjw@sisk.pl>
Sender: linux-pci-owner@vger.kernel.org
To: "Rafael J. Wysocki"
Cc: "Yan, Zheng" , bhelgaas@google.com, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, linux-pm@vger.kernel.org, Zhang Rui , huang ying , ACPI Devel Mailing List
List-Id: linux-acpi@vger.kernel.org

On Fri, 2012-04-13 at 21:41 +0200, Rafael J. Wysocki wrote:
> Hi,
>
> On Friday, April 13, 2012, Yan, Zheng wrote:
> > Hi all,
> >
> > This patch adds PCIe runtime D3cold support, namely cutting the power supply
> > to the functions beneath a PCIe port once they have all entered D3. A device
> > in D3cold can only generate a wake event through the WAKE# pin. Because we
> > cannot access a device's configuration space while it is in D3cold, pme_poll
> > is disabled for devices in D3cold.
> >
> > Any comments will be appreciated.
> >
> > Signed-off-by: Zheng Yan
> > ---
> > diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
> > index 0f150f2..e210e8cb 100644
> > --- a/drivers/pci/pci-acpi.c
> > +++ b/drivers/pci/pci-acpi.c
> > @@ -224,7 +224,7 @@ static int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state)
> >  		[PCI_D1] = ACPI_STATE_D1,
> >  		[PCI_D2] = ACPI_STATE_D2,
> >  		[PCI_D3hot] = ACPI_STATE_D3,
> > -		[PCI_D3cold] = ACPI_STATE_D3
> > +		[PCI_D3cold] = ACPI_STATE_D3_COLD
> >  	};
> >  	int error = -EINVAL;
>
> Please don't use that ACPI_STATE_D3_COLD thing, it's not defined correctly.
>
> We should define ACPI_STATE_D3_COLD == ACPI_STATE_D3 and add ACPI_STATE_D3_HOT
> instead. I'll prepare a patch for that over the weekend if no one has done
> that already.
Hi Rafael,

Have you started to write the patch? If not, I can do it.

Thanks,
Lin Ming