From: <Mario.Limonciello@dell.com>
To: <luto@kernel.org>
Cc: <axboe@kernel.dk>, <linux-kernel@vger.kernel.org>,
<kai.heng.feng@canonical.com>, <linux-nvme@lists.infradead.org>,
<hch@lst.de>, <sagi@grimberg.me>, <keith.busch@intel.com>
Subject: Re: [PATCH] nvme: Change our APST table to be no more aggressive than Intel RSTe
Date: Fri, 12 May 2017 14:34:45 +0000 [thread overview]
Message-ID: <1494599685838.3433@Dell.com> (raw)
In-Reply-To: <CALCETrUVb8KqOOzW7fXDWCyCiXj6jpPm-w=XZ-FXrh_rfdDkJQ@mail.gmail.com>
>Yes, mostly. I've written the patch, but I was planning to target it
>at 4.12 or 4.13 but not -stable. It's mostly just a cleanup and has
>no real power saving benefit since the RSTe timeouts are so absurdly
>conservative that I doubt PS4 will happen in practical usage.
OK.
>Perhaps
>in suspend-to-idle? (For suspend-to-idle, I suspect we should really
>be using D3 instead. Do we already do that?)
Well, I think this will depend upon what the SSD actually supports.
There isn't good documentation yet for how Linux "should" handle
suspend-to-idle with disks, so the best guidance to follow is
what Microsoft publicly documents for Modern Standby. [1]
"Akin to AHCI PCIe SSDs, NVMe SSDs need to provide the host with a
non-operational power state that is comparable to DEVSLP (<5mW draw,
<100ms exit latency) in order to allow the host to perform appropriate
transitions into Modern Standby. Should the NVMe SSD not expose
such a non-operational power state, autonomous power state
transitions (APST) is the only other option to enter Modern
Standby successfully.
Note that in the absence of DEVSLP or a comparable NVMe
non-operational power state, the host can make no guarantees
on the device’s power draw. In this case, if you observe
non-optimal power consumption by the device/system, you
will have to work with your device vendor to determine the cause."
Something important to consider, though, is that Microsoft keeps
more of the OS running during Modern Standby but has tight
control over what applications and kernel threads are allowed
to do. For Linux, I think D3 is most likely to provide the best
power savings for these disks.
[1] https://msdn.microsoft.com/en-us/windows/hardware/commercialize/design/device-experiences/part-selection#ssd-storage
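As a rough way to see what a given drive exposes, the controller's power-state table and the current APST configuration can be inspected from userspace with nvme-cli. A sketch (the device path /dev/nvme0 is an assumption; output format varies by drive and nvme-cli version):

```shell
# Dump the controller's power-state table in human-readable form.
# Non-operational states are marked as such and list entry/exit
# latencies, which is what matters for a DEVSLP-comparable state
# (<5mW draw, <100ms exit latency).
nvme id-ctrl /dev/nvme0 -H | grep -A 5 "^ps "

# Read the current APST settings (feature identifier 0x0c in the
# NVMe spec). An all-zero table means APST is effectively disabled
# and the host must manage power-state transitions itself.
nvme get-feature /dev/nvme0 -f 0x0c -H
```

Whether the deepest state qualifies for Modern Standby-style behavior depends on the latency and power figures the drive reports, not just on the state being present.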
Thread overview: 17+ messages
2017-05-12 4:06 [PATCH] nvme: Change our APST table to be no more aggressive than Intel RSTe Andy Lutomirski
2017-05-12 13:58 ` Mario.Limonciello
2017-05-12 14:04 ` Andy Lutomirski
2017-05-12 14:34 ` Mario.Limonciello [this message]
2017-05-12 23:49 ` Andy Lutomirski
2017-05-15 15:51 ` Mario.Limonciello
2017-05-13 12:27 ` Andy Lutomirski
2017-05-15 16:11 ` Mario.Limonciello
2017-05-19 1:18 ` Andy Lutomirski
2017-05-19 1:13 ` Andy Lutomirski
2017-05-19 1:32 ` Mario.Limonciello
2017-05-19 1:37 ` Andy Lutomirski
2017-05-19 6:35 ` Christoph Hellwig
2017-05-19 14:18 ` Keith Busch
2017-05-19 14:15 ` Christoph Hellwig
2017-05-19 18:24 ` Andy Lutomirski
2017-05-19 21:42 ` Keith Busch