public inbox for linux-kernel@vger.kernel.org
From: "David E. Box" <david.e.box@linux.intel.com>
To: Daniel Drake <drake@endlessos.org>
Cc: Bjorn Helgaas <helgaas@kernel.org>,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, hpa@zytor.com,
	linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
	bhelgaas@google.com, mario.limonciello@amd.com,
	rafael@kernel.org, lenb@kernel.org, linux-acpi@vger.kernel.org,
	linux@endlessos.org
Subject: Re: [PATCH v2 1/2] PCI: Disable D3cold on Asus B1400 PCI-NVMe bridge
Date: Tue, 20 Feb 2024 15:25:41 -0800	[thread overview]
Message-ID: <6d989b7da1e7493a3e2d478cec060a459f375daf.camel@linux.intel.com> (raw)
In-Reply-To: <CAD8Lp46dPtE12ai8srt9Bz3awnkkb1LZz_7FQuF57M=LaUSaCw@mail.gmail.com>

On Mon, 2024-02-19 at 12:35 +0100, Daniel Drake wrote:
> On Fri, Feb 9, 2024 at 9:36 AM Daniel Drake <drake@endlessos.org> wrote:
> > On Thu, Feb 8, 2024 at 5:57 PM David E. Box <david.e.box@linux.intel.com>
> > wrote:
> > > This does look like a firmware bug. We've had reports of D3cold support
> > > missing
> > > when running in non-VMD mode on systems that were designed with VMD for
> > > Windows.
> > > These issues have been caught and addressed by OEMs during enabling of
> > > Linux
> > > systems. Does D3cold work in VMD mode?
> > 
> > On Windows for the VMD=on case, we only tested this on a BIOS with
> > StorageD3Enable=0. The NVMe device and parent bridge stayed in D0 over
> > suspend, but that's exactly what the BIOS asked for, so it doesn't
> > really answer your question.
> 
> Tested on the original BIOS version with VMD=on: Windows leaves the
> NVMe device (and parent bridge) in D0 during suspend (i.e. same result
> as VMD=off).
> 
> On this setup, there are 2 devices with StorageD3Enable flags:
> 
> 1. \_SB.PC00.PEG0.PEGP._DSD has StorageD3Enable=1. This is set
> regardless of the VMD setting at the BIOS level. This is the flag that
> causes the headache in non-VMD mode, where Linux then proceeds to put
> the device into D3cold.

Likely never tested.

> This PEGP device in the non-VMD configuration corresponds to the NVMe
> storage device. PEG0 is the PCI root port at 00:06.0 (the one in
> question in this thread), and PEGP is the child with address 0.
> However in VMD mode, 00:06.0 is a dummy device (not a bridge) so this
> PEGP device isn't going to be used by anything.
> 
> 2. \_SB.PC00.VMD0._DSD has StorageD3Enable=0. This VMD0 device is only
> present when VMD is enabled in the BIOS. It is the companion for
> 00:0e.0 which is the device that the vmd driver binds against. This
> could be influencing Windows to leave the NVMe device in D0, but I
> doubt it: it can't explain why Windows shows the same D0 behaviour
> when VMD=off, and this is a really strange place to put the
> StorageD3Enable setting because VMD0 is not a storage device.

Yes, we don't look for the property here, only under the child device of the
Root Port.

> 
> > On Linux with VMD=on and StorageD3Enable=1, the NVMe storage device
> > and the VMD parent bridge are staying in D0 over suspend. I don't
> > know why; I would have expected at least D3hot. However, given
> > that the NVMe device has no firmware_node under the VMD=on setup, I
> > believe there is no way it would enter D3cold because there's no
> > linkage to an ACPI device, so no available _PS3 or _PR0 or whatever is
> > the precise definition of D3cold.
> 
> Checked in more detail. In Linux, the NVMe device will only go into
> D3hot/D3cold if the ACPI companion device has an explicit
> StorageD3Enable=1. However, in VMD mode the NVMe storage device has no
> ACPI companion. The code flow is nvme_pci_alloc_dev() ->
> acpi_storage_d3() -> return false, because there is no companion.

That explains it.

> 
> The VMD PCI bridge at 10000:e0:06.0 that is the parent of the SATA and
> NVMe devices does have a companion, \_SB.PC00.VMD0.PEG0.
> However, the SATA and NVMe child devices do not have any ACPI
> companion. I examined the logic of vmd_acpi_find_companion() and
> determined that it looks for devices with _ADR 80b8ffff (SATA)
> and 8100ffff (NVMe), and such devices do not exist in the ACPI tables.

Based on its non-VMD counterpart, the companion should be at
\_SB.PC00.VMD0.PEG0.PEGP in VMD mode; that is where _ADR = 8100FFFF should
be. This looks like broken ACPI code, since the property does exist in the
expected location when VMD is disabled.

> 
> Speculating a little, I guess this is also why Windows leaves the
> device in D0 in VMD=on mode: it would only put the NVMe device in
> D3hot/D3cold if it had a corresponding companion with
> StorageD3Enable=1 and there isn't one of those. What's still unknown
> is why it doesn't put the device in D3 in VMD=off mode because there
> is a correctly placed StorageD3Enable=1 in that case.

Given the observations (inconsistent firmware, and testing showing Windows
using D0), I'm good with the approach.

David

Thread overview: 17+ messages
2024-02-07  8:44 [PATCH v2 1/2] PCI: Disable D3cold on Asus B1400 PCI-NVMe bridge Daniel Drake
2024-02-07  8:44 ` [PATCH v2 2/2] Revert "ACPI: PM: Block ASUS B1400CEAE from suspend to idle by default" Daniel Drake
2024-02-07  9:08   ` Jian-Hong Pan
2024-02-07 10:29     ` Rafael J. Wysocki
2024-02-07 11:42       ` Jian-Hong Pan
2024-02-07  9:06 ` [PATCH v2 1/2] PCI: Disable D3cold on Asus B1400 PCI-NVMe bridge Jian-Hong Pan
2024-02-07 11:43   ` Jian-Hong Pan
2024-02-07 20:05 ` Bjorn Helgaas
2024-02-08  8:37   ` Daniel Drake
2024-02-08  9:52     ` Daniel Drake
2024-02-08 16:57       ` David E. Box
2024-02-09  8:36         ` Daniel Drake
2024-02-09 17:19           ` David E. Box
2024-02-19 11:35           ` Daniel Drake
2024-02-19 12:45             ` Mario Limonciello
2024-02-20 23:25             ` David E. Box [this message]
2024-02-19  9:52       ` Daniel Drake
