From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753246AbcIBWXI (ORCPT );
	Fri, 2 Sep 2016 18:23:08 -0400
Received: from mga07.intel.com ([134.134.136.100]:5832 "EHLO mga07.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1750989AbcIBWXH (ORCPT );
	Fri, 2 Sep 2016 18:23:07 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.30,273,1470726000"; d="scan'208";a="4043142"
Message-ID: <1472854985.2946.45.camel@linux.intel.com>
Subject: Re: [PATCH v2 3/3] nvme: Enable autonomous power state transitions
From: J Freyensee
To: Andy Lutomirski
Cc: Jens Axboe, "linux-kernel@vger.kernel.org",
	linux-nvme@lists.infradead.org, Keith Busch, Andy Lutomirski,
	Christoph Hellwig
Date: Fri, 02 Sep 2016 15:23:05 -0700
In-Reply-To:
References: <6e6c3cd47858160bf8e34f2edb481c0ef5092d5b.1472594203.git.luto@kernel.org>
	<1472850905.2946.20.camel@linux.intel.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.20.4 (3.20.4-1.fc24)
Mime-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, 2016-09-02 at 14:43 -0700, Andy Lutomirski wrote:
> On Fri, Sep 2, 2016 at 2:15 PM, J Freyensee
> wrote:
> > 
> > On Tue, 2016-08-30 at 14:59 -0700, Andy Lutomirski wrote:
> > > 
> > > NVMe devices can advertise multiple power states.  These states can
> > > be either "operational" (the device is fully functional but possibly
> > > slow) or "non-operational" (the device is asleep until woken up).
> > > Some devices can automatically enter a non-operational state when
> > > idle for a specified amount of time and then automatically wake back
> > > up when needed.
> > > 
> > > The hardware configuration is a table.  For each state, an entry in
> > > the table indicates the next deeper non-operational state, if any,
> > > to autonomously transition to and the idle time required before
> > > transitioning.
> > > 
> > > This patch teaches the driver to program APST so that each
> > > successive non-operational state will be entered after an idle time
> > > equal to 100% of the total latency (entry plus exit) associated with
> > > that state.  A sysfs attribute 'apst_max_latency_us' gives the
> > > maximum acceptable latency in microseconds; non-operational states
> > > with total latency greater than this value will not be used.  As a
> > > special case, apst_max_latency_us=0 will disable APST entirely.
> > 
> > May I ask a dumb question?
> > 
> > How does this work with multiple NVMe devices plugged into a system?
> > I would have thought we'd want one apst_max_latency_us entry per NVMe
> > controller for individual control of each device?  I have two
> > Fultondale-class devices plugged into a system I tried these patches
> > on (the 4.8-rc4 kernel) and I'm not sure how the single
> > /sys/module/nvme_core/parameters/apst_max_latency_us would work per
> > my 2 devices (and the value is using the default 25000).
> > 
> 
> Ah, I faked you out :(
> 
> The module parameter (nvme_core/parameters/apst_max_latency_us) just
> sets the default for newly probed devices.  The actual setting is in
> /sys/devices/whatever (symlinked from /sys/block/nvme0n1/devices, for
> example).  Perhaps I should name the former
> default_apst_max_latency_us.

That would certainly make it easier to understand what the entry is
for, but then the name also gets longer.  Maybe just
"default_apst_latency_us"?  Or it's probably fine as-is.
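For what it's worth, the selection policy described in the patch text above (walk the power states, remember the deepest non-operational state whose total entry-plus-exit latency fits under the cap, and point each shallower state at it with an idle threshold equal to 100% of that total latency) can be sketched roughly like this.  The struct and field names here are made up for illustration; they are not the actual driver's data structures:

```c
/*
 * Illustrative sketch only -- simplified stand-ins for the controller's
 * power state descriptors and the APST table the driver would program.
 */
#include <assert.h>	/* for the usage check below */
#include <stdint.h>

struct ps_info {
	int non_operational;	/* nonzero if the state is non-operational */
	uint64_t entry_lat_us;	/* entry latency, microseconds */
	uint64_t exit_lat_us;	/* exit latency, microseconds */
};

struct apst_entry {
	int target_state;	/* deeper non-op state to drop into, -1 = none */
	uint64_t idle_time_us;	/* idle time required before transitioning */
};

/*
 * States are ordered from highest power (0) to lowest (nstates - 1).
 * Walk from the deepest state upward; each state's entry points at the
 * deepest acceptable non-operational state found so far, with an idle
 * threshold of 100% of that state's total (entry + exit) latency.
 * max_latency_us == 0 qualifies no state, i.e. APST is effectively off.
 */
static void build_apst_table(const struct ps_info *ps, int nstates,
			     uint64_t max_latency_us,
			     struct apst_entry *table)
{
	int target = -1;		/* deepest acceptable non-op state */
	uint64_t target_total_us = 0;

	for (int i = nstates - 1; i >= 0; i--) {
		uint64_t total;

		table[i].target_state = target;
		table[i].idle_time_us = (target >= 0) ? target_total_us : 0;

		total = ps[i].entry_lat_us + ps[i].exit_lat_us;
		if (ps[i].non_operational && total > 0 &&
		    total <= max_latency_us) {
			target = i;
			target_total_us = total;
		}
	}
}
```

The real driver of course works from the controller's identify data and programs the result with a Set Features command; this only shows the latency-cap filtering and why apst_max_latency_us=0 turns the mechanism off entirely.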