* [PATCH 0/1] nvme-pci: Add CPU latency pm-qos handling
@ 2024-10-04 10:09 Tero Kristo
2024-10-04 10:09 ` [PATCH 1/1] " Tero Kristo
From: Tero Kristo @ 2024-10-04 10:09 UTC (permalink / raw)
Cc: linux-kernel, axboe, hch, linux-nvme, sagi, kbusch
Hello,
Re-posting this as 6.12-rc1 is out and the previous RFC didn't receive
any feedback. The patch hasn't seen any changes, but I've included the
cover letter below for details.
The patch adds a mechanism for tackling NVMe latency with random
workloads. A new sysfs knob (cpu_latency_us) is added under NVMe
devices, which can be used to fine-tune the PM QoS CPU latency limit
while the NVMe device is handling IO; writing a latency value in
microseconds enables the limit, and writing -1 disables it.
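As a usage illustration, a small userspace helper along these lines can
set the limit; the sysfs path below is an example and depends on the
controller name:

/* Illustrative only: enable a 10 us CPU latency limit for an NVMe
 * controller. Adjust the path for the controller in question. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/class/nvme/nvme0/cpu_latency_us", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fprintf(f, "10\n");	/* write -1 to disable the limit again */
	return fclose(f) ? 1 : 0;
}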
Below is a postprocessed measurement run on an Icelake Xeon platform,
measuring latencies with the 'fio' tool, running random-read and read
profiles. Five random-read and five bulk-read runs are done with the
latency limit enabled and with it disabled, and the maximum 'slat'
(submission latency), 'clat' (completion latency) and 'lat' (total
latency) values are shown for each run; values are in microseconds.
The bandwidth is measured with the 'read' workload of fio, and
min-avg-max values are shown in MiB/s. c6% indicates the percentage of
time the CPU running 'fio' spent in the C6 idle state during the test.
==
Setting cpu_latency_us limit to 10 (enabled)
slat: 31, clat: 99, lat: 113, bw: 1156-1332-1359, c6%: 2.8
slat: 49, clat: 135, lat: 143, bw: 1156-1332-1361, c6%: 1.0
slat: 67, clat: 148, lat: 156, bw: 1159-1331-1361, c6%: 0.9
slat: 51, clat: 99, lat: 107, bw: 1160-1330-1356, c6%: 1.0
slat: 82, clat: 114, lat: 122, bw: 1156-1333-1359, c6%: 1.0
Setting cpu_latency_us limit to -1 (disabled)
slat: 112, clat: 275, lat: 364, bw: 1153-1334-1364, c6%: 80.0
slat: 110, clat: 270, lat: 324, bw: 1164-1338-1369, c6%: 80.1
slat: 106, clat: 260, lat: 320, bw: 1159-1330-1362, c6%: 79.7
slat: 110, clat: 255, lat: 300, bw: 1156-1332-1363, c6%: 80.2
slat: 107, clat: 248, lat: 322, bw: 1152-1331-1362, c6%: 79.9
==
In summary, the C6-induced latencies are eliminated from the
random-read tests ('clat' drops from 250+ us to 100-150 us), and in the
maximum-throughput testing the bandwidth is not negatively impacted
(the bandwidth values are practically identical), so the overhead
introduced is minimal.
-Tero
* [PATCH 1/1] nvme-pci: Add CPU latency pm-qos handling
2024-10-04 10:09 [PATCH 0/1] nvme-pci: Add CPU latency pm-qos handling Tero Kristo
@ 2024-10-04 10:09 ` Tero Kristo
2024-10-07 6:19 ` Christoph Hellwig
From: Tero Kristo @ 2024-10-04 10:09 UTC (permalink / raw)
Cc: linux-kernel, axboe, hch, linux-nvme, sagi, kbusch
Add support for limiting CPU latency while NVMe IO is running. When an
NVMe IO is started, a user-configurable CPU latency limit is put in
place (if one is set). The limit is removed after 3 ms of inactivity.
The CPU latency limit is configurable via a sysfs parameter,
cpu_latency_us, under the NVMe device.
Signed-off-by: Tero Kristo <tero.kristo@linux.intel.com>
---
drivers/nvme/host/pci.c | 95 ++++++++++++++++++++++++++++++++++++++---
1 file changed, 90 insertions(+), 5 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 7990c3f22ecf..de8ddc9b36de 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -21,6 +21,7 @@
#include <linux/mutex.h>
#include <linux/once.h>
#include <linux/pci.h>
+#include <linux/pm_qos.h>
#include <linux/suspend.h>
#include <linux/t10-pi.h>
#include <linux/types.h>
@@ -112,6 +113,14 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown);
static void nvme_delete_io_queues(struct nvme_dev *dev);
static void nvme_update_attrs(struct nvme_dev *dev);
+#define NVME_CPU_LATENCY_TIMEOUT_MS 3
+
+struct nvme_cpu_latency_qos {
+ struct dev_pm_qos_request req;
+ struct delayed_work work;
+ unsigned long active;
+};
+
/*
* Represents an NVM Express device. Each nvme_dev is a PCI function.
*/
@@ -141,6 +150,8 @@ struct nvme_dev {
struct nvme_ctrl ctrl;
u32 last_ps;
bool hmb;
+ int cpu_latency;
+ struct nvme_cpu_latency_qos __percpu *cpu_latency_qos;
mempool_t *iod_mempool;
@@ -213,6 +224,7 @@ struct nvme_queue {
__le32 *dbbuf_cq_db;
__le32 *dbbuf_sq_ei;
__le32 *dbbuf_cq_ei;
+ const struct cpumask *irq_aff_mask;
struct completion delete_done;
};
@@ -470,6 +482,9 @@ static void nvme_pci_map_queues(struct blk_mq_tag_set *set)
*/
static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq)
{
+ struct nvme_dev *dev;
+ int cpu;
+
if (!write_sq) {
u16 next_tail = nvmeq->sq_tail + 1;
@@ -483,6 +498,27 @@ static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq)
nvmeq->dbbuf_sq_db, nvmeq->dbbuf_sq_ei))
writel(nvmeq->sq_tail, nvmeq->q_db);
nvmeq->last_sq_tail = nvmeq->sq_tail;
+
+ /* Kick CPU latency while updating queue. */
+ dev = nvmeq->dev;
+ if (!dev || dev->cpu_latency < 0)
+ return;
+
+ for_each_cpu(cpu, nvmeq->irq_aff_mask) {
+ struct nvme_cpu_latency_qos *qos;
+
+ qos = per_cpu_ptr(dev->cpu_latency_qos, cpu);
+
+ qos->active = jiffies + msecs_to_jiffies(NVME_CPU_LATENCY_TIMEOUT_MS);
+
+ if (dev_pm_qos_request_active(&qos->req))
+ continue;
+
+ dev_pm_qos_add_request(get_cpu_device(cpu), &qos->req,
+ DEV_PM_QOS_RESUME_LATENCY,
+ dev->cpu_latency);
+ schedule_delayed_work(&qos->work, msecs_to_jiffies(NVME_CPU_LATENCY_TIMEOUT_MS));
+ }
}
static inline void nvme_sq_copy_cmd(struct nvme_queue *nvmeq,
@@ -1600,14 +1636,19 @@ static int queue_request_irq(struct nvme_queue *nvmeq)
{
struct pci_dev *pdev = to_pci_dev(nvmeq->dev->dev);
int nr = nvmeq->dev->ctrl.instance;
+ int ret;
if (use_threaded_interrupts) {
- return pci_request_irq(pdev, nvmeq->cq_vector, nvme_irq_check,
- nvme_irq, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
+ ret = pci_request_irq(pdev, nvmeq->cq_vector, nvme_irq_check,
+ nvme_irq, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
} else {
- return pci_request_irq(pdev, nvmeq->cq_vector, nvme_irq,
- NULL, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
+ ret = pci_request_irq(pdev, nvmeq->cq_vector, nvme_irq,
+ NULL, nvmeq, "nvme%dq%d", nr, nvmeq->qid);
}
+
+ nvmeq->irq_aff_mask = pci_irq_get_affinity(pdev, nvmeq->cq_vector);
+
+ return ret;
}
static void nvme_init_queue(struct nvme_queue *nvmeq, u16 qid)
@@ -2171,6 +2212,26 @@ static ssize_t hmb_store(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RW(hmb);
+static ssize_t cpu_latency_us_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct nvme_dev *ndev = to_nvme_dev(dev_get_drvdata(dev));
+
+ return sysfs_emit(buf, "%d\n", ndev->cpu_latency);
+}
+
+static ssize_t cpu_latency_us_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct nvme_dev *ndev = to_nvme_dev(dev_get_drvdata(dev));
+
+ if (kstrtoint(buf, 10, &ndev->cpu_latency) < 0)
+ return -EINVAL;
+
+ return count;
+}
+static DEVICE_ATTR_RW(cpu_latency_us);
+
static umode_t nvme_pci_attrs_are_visible(struct kobject *kobj,
struct attribute *a, int n)
{
@@ -2195,6 +2256,7 @@ static struct attribute *nvme_pci_attrs[] = {
&dev_attr_cmbloc.attr,
&dev_attr_cmbsz.attr,
&dev_attr_hmb.attr,
+ &dev_attr_cpu_latency_us.attr,
NULL,
};
@@ -2731,6 +2793,7 @@ static void nvme_pci_free_ctrl(struct nvme_ctrl *ctrl)
nvme_free_tagset(dev);
put_device(dev->dev);
kfree(dev->queues);
+ free_percpu(dev->cpu_latency_qos);
kfree(dev);
}
@@ -2989,6 +3052,17 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev)
return 0;
}
+static void nvme_cpu_latency_work(struct work_struct *work)
+{
+ struct nvme_cpu_latency_qos *qos =
+ container_of(work, struct nvme_cpu_latency_qos, work.work);
+ if (time_after(jiffies, qos->active)) {
+ dev_pm_qos_remove_request(&qos->req);
+ } else {
+ schedule_delayed_work(&qos->work, msecs_to_jiffies(NVME_CPU_LATENCY_TIMEOUT_MS));
+ }
+}
+
static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
const struct pci_device_id *id)
{
@@ -2996,6 +3070,7 @@ static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
int node = dev_to_node(&pdev->dev);
struct nvme_dev *dev;
int ret = -ENOMEM;
+ int cpu;
dev = kzalloc_node(sizeof(*dev), GFP_KERNEL, node);
if (!dev)
@@ -3003,13 +3078,21 @@ static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work);
mutex_init(&dev->shutdown_lock);
+ dev->cpu_latency_qos = alloc_percpu(struct nvme_cpu_latency_qos);
+ if (!dev->cpu_latency_qos)
+ goto out_free_dev;
+ for_each_possible_cpu(cpu)
+ INIT_DELAYED_WORK(per_cpu_ptr(&dev->cpu_latency_qos->work, cpu),
+ nvme_cpu_latency_work);
+ dev->cpu_latency = -1;
+
dev->nr_write_queues = write_queues;
dev->nr_poll_queues = poll_queues;
dev->nr_allocated_queues = nvme_max_io_queues(dev) + 1;
dev->queues = kcalloc_node(dev->nr_allocated_queues,
sizeof(struct nvme_queue), GFP_KERNEL, node);
if (!dev->queues)
- goto out_free_dev;
+ goto out_free_pm_qos;
dev->dev = get_device(&pdev->dev);
@@ -3055,6 +3138,8 @@ static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
out_put_device:
put_device(dev->dev);
kfree(dev->queues);
+out_free_pm_qos:
+ free_percpu(dev->cpu_latency_qos);
out_free_dev:
kfree(dev);
return ERR_PTR(ret);
--
2.43.1
* Re: [PATCH 1/1] nvme-pci: Add CPU latency pm-qos handling
2024-10-04 10:09 ` [PATCH 1/1] " Tero Kristo
@ 2024-10-07 6:19 ` Christoph Hellwig
2024-10-09 6:45 ` Tero Kristo
From: Christoph Hellwig @ 2024-10-07 6:19 UTC (permalink / raw)
To: Tero Kristo; +Cc: linux-kernel, axboe, hch, linux-nvme, sagi, kbusch
> @@ -483,6 +498,27 @@ static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq)
> nvmeq->dbbuf_sq_db, nvmeq->dbbuf_sq_ei))
> writel(nvmeq->sq_tail, nvmeq->q_db);
> nvmeq->last_sq_tail = nvmeq->sq_tail;
> +
> + /* Kick CPU latency while updating queue. */
> + dev = nvmeq->dev;
> + if (!dev || dev->cpu_latency < 0)
> + return;
> +
> + for_each_cpu(cpu, nvmeq->irq_aff_mask) {
Doing something as complex as this for every doorbell write is not
going to fly.
Even if it was I see nothing nvme-specific in the interface.
So please figure out a way to make things cheap in the I/O path
and move code to the right layers.
Also please avoid all these overly long lines.
* Re: [PATCH 1/1] nvme-pci: Add CPU latency pm-qos handling
2024-10-07 6:19 ` Christoph Hellwig
@ 2024-10-09 6:45 ` Tero Kristo
2024-10-09 8:00 ` Christoph Hellwig
From: Tero Kristo @ 2024-10-09 6:45 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-kernel, axboe, linux-nvme, sagi, kbusch
On Mon, 2024-10-07 at 08:19 +0200, Christoph Hellwig wrote:
> > @@ -483,6 +498,27 @@ static inline void nvme_write_sq_db(struct
> > nvme_queue *nvmeq, bool write_sq)
> > nvmeq->dbbuf_sq_db, nvmeq->dbbuf_sq_ei))
> > writel(nvmeq->sq_tail, nvmeq->q_db);
> > nvmeq->last_sq_tail = nvmeq->sq_tail;
> > +
> > + /* Kick CPU latency while updating queue. */
> > + dev = nvmeq->dev;
> > + if (!dev || dev->cpu_latency < 0)
> > + return;
> > +
> > + for_each_cpu(cpu, nvmeq->irq_aff_mask) {
>
> Doing something as complex as this for every doorbell write is not
> going to fly.
>
> Even if it was I see nothing nvme-specific in the interface.
>
> So please figure out a way to make things cheap in the I/O path
> and move code to the right layers.
Initially, I posted the patch against the block layer, but there the
recommendation was to move this closer to the HW, i.e. to the NVMe
driver level.
See:
https://patchwork.kernel.org/project/linux-block/patch/20240829075423.1345042-2-tero.kristo@linux.intel.com/
Any tips on where this piece of code should actually be moved would be
appreciated.
-Tero
>
> Also please avoid all these overly long lines.
* Re: [PATCH 1/1] nvme-pci: Add CPU latency pm-qos handling
2024-10-09 6:45 ` Tero Kristo
@ 2024-10-09 8:00 ` Christoph Hellwig
2024-10-09 8:24 ` Tero Kristo
From: Christoph Hellwig @ 2024-10-09 8:00 UTC (permalink / raw)
To: Tero Kristo
Cc: Christoph Hellwig, linux-kernel, axboe, linux-nvme, sagi, kbusch
On Wed, Oct 09, 2024 at 09:45:07AM +0300, Tero Kristo wrote:
> Initially, I posted the patch against the block layer, but there the
> recommendation was to move this closer to the HW, i.e. to the NVMe
> driver level.
Even if it is called from NVMe, a lot of the code is not nvme specific.
Some of it appears block specific and other parts are entirely generic.
But I still don't see how walking cpumasks and updating parameters far
away (in terms of cache lines and pointer dereferences) for every
single I/O could work without having a huge performance impact.
* Re: [PATCH 1/1] nvme-pci: Add CPU latency pm-qos handling
2024-10-09 8:00 ` Christoph Hellwig
@ 2024-10-09 8:24 ` Tero Kristo
2024-10-15 9:25 ` Tero Kristo
From: Tero Kristo @ 2024-10-09 8:24 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-kernel, axboe, linux-nvme, sagi, kbusch
On Wed, 2024-10-09 at 10:00 +0200, Christoph Hellwig wrote:
> On Wed, Oct 09, 2024 at 09:45:07AM +0300, Tero Kristo wrote:
> > Initially, I posted the patch against the block layer, but there the
> > recommendation was to move this closer to the HW, i.e. to the NVMe
> > driver level.
>
> Even if it is called from NVMe, a lot of the code is not nvme
> specific. Some of it appears block specific and other parts are
> entirely generic.
>
> But I still don't see how walking cpumasks and updating parameters
> far away (in terms of cache lines and pointer dereferences) for every
> single I/O could work without having a huge performance impact.
>
Generally, the cpumask only has a couple of CPUs on it; yes, it's true
that on certain setups every CPU of the system may end up on it, but
then the user has the option to not enable this feature at all. In my
test system, there is a separate NVMe irq for each CPU, so the affinity
mask only contains one bit.
Also, the code tries to avoid calling the heavy PM QoS stuff by
checking whether the request is already active and by updating the
values in a workqueue later on. Generally, the heavy-ish parameter
update only happens on the first activity of a burst of NVMe accesses.
-Tero
* Re: [PATCH 1/1] nvme-pci: Add CPU latency pm-qos handling
2024-10-09 8:24 ` Tero Kristo
@ 2024-10-15 9:25 ` Tero Kristo
2024-10-15 13:29 ` Christoph Hellwig
From: Tero Kristo @ 2024-10-15 9:25 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-kernel, axboe, linux-nvme, sagi, kbusch
On Wed, 2024-10-09 at 11:24 +0300, Tero Kristo wrote:
> On Wed, 2024-10-09 at 10:00 +0200, Christoph Hellwig wrote:
> > On Wed, Oct 09, 2024 at 09:45:07AM +0300, Tero Kristo wrote:
> > > Initially, I posted the patch against the block layer, but there
> > > the recommendation was to move this closer to the HW, i.e. to the
> > > NVMe driver level.
> >
> > Even if it is called from NVMe, a lot of the code is not nvme
> > specific. Some of it appears block specific and other parts are
> > entirely generic.
> >
> > But I still don't see how walking cpumasks and updating parameters
> > far away (in terms of cache lines and pointer dereferences) for
> > every single I/O could work without having a huge performance
> > impact.
> >
>
> Generally, the cpumask only has a couple of CPUs on it; yes, it's
> true that on certain setups every CPU of the system may end up on it,
> but then the user has the option to not enable this feature at all.
> In my test system, there is a separate NVMe irq for each CPU, so the
> affinity mask only contains one bit.
>
> Also, the code tries to avoid calling the heavy PM QoS stuff by
> checking whether the request is already active and by updating the
> values in a workqueue later on. Generally, the heavy-ish parameter
> update only happens on the first activity of a burst of NVMe
> accesses.
I've been giving this some thought offline, but can't really think of
how this could be done in the generic layers; the code needs to figure
out the interrupt that gets fired by the activity, to prevent the CPU
that is going to handle that interrupt from going into deep idle and
potentially ruining the latency and throughput of the request. The
knowledge of this interrupt mapping only resides at the driver level,
in this case NVMe.
One thing that could be done is to prevent the whole feature from being
used on setups where the number of CPUs per irq is above some
threshold; let's say 4 as an example.
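A rough sketch of such a gate, reusing the irq affinity mask the patch
already stores; the helper name and the threshold value are made up for
illustration:

/* Hypothetical helper, not part of the patch: only apply the CPU
 * latency limit when the vector's affinity mask is narrow enough. */
#define NVME_CPU_LATENCY_MAX_CPUS_PER_IRQ	4

static bool nvme_cpu_latency_usable(struct nvme_queue *nvmeq)
{
	const struct cpumask *mask = nvmeq->irq_aff_mask;

	return mask &&
	       cpumask_weight(mask) <= NVME_CPU_LATENCY_MAX_CPUS_PER_IRQ;
}

nvme_write_sq_db() could then return early when this check fails,
instead of walking the mask.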
-Tero
* Re: [PATCH 1/1] nvme-pci: Add CPU latency pm-qos handling
2024-10-15 9:25 ` Tero Kristo
@ 2024-10-15 13:29 ` Christoph Hellwig
2024-10-18 7:58 ` Tero Kristo
From: Christoph Hellwig @ 2024-10-15 13:29 UTC (permalink / raw)
To: Tero Kristo
Cc: Christoph Hellwig, linux-kernel, axboe, linux-nvme, sagi, kbusch
On Tue, Oct 15, 2024 at 12:25:37PM +0300, Tero Kristo wrote:
> I've been giving this some thought offline, but can't really think of
> how this could be done in the generic layers; the code needs to
> figure out the interrupt that gets fired by the activity, to prevent
> the CPU that is going to handle that interrupt from going into deep
> idle and potentially ruining the latency and throughput of the
> request. The knowledge of this interrupt mapping only resides at the
> driver level, in this case NVMe.
>
> One thing that could be done is to prevent the whole feature from
> being used on setups where the number of CPUs per irq is above some
> threshold; let's say 4 as an example.
As a disclaimer I don't really understand the PM QOS framework, just
the NVMe driver and block layer.
With that, my gut feeling is that all this latency management should
be driven by the blk_mq_hctx structure, the block layer equivalent
to a queue. And instead of having a per-cpu array of QoS requests
per device, there should be one per cpu in the actual mask of the
hctx, so that you only have to iterate this local shared data
structure.
Preferably there would be one single active check per hctx and
not one per cpu, e.g. when the block layer submits commands
it has to do one single check instead of an iteration. Similarly
the block layer code would time out the activity once per hctx,
and only then iterate the (usually few) CPUs per hctx.
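To make the idea concrete, below is a very rough sketch of such a
per-hctx variant. None of these structures or helpers exist in the
block layer today; the names are purely illustrative, and locking,
allocation and teardown of the per-cpu requests are ignored.

/*
 * Hypothetical sketch only: per-hctx PM QoS state with one
 * dev_pm_qos request per CPU in hctx->cpumask. Assumes
 * <linux/blk-mq.h>, <linux/pm_qos.h> and <linux/cpu.h>.
 */
#define HCTX_QOS_TIMEOUT_MS	3

struct blk_mq_hctx_qos {
	struct blk_mq_hw_ctx *hctx;
	struct dev_pm_qos_request __percpu *reqs; /* CPUs of this hctx */
	struct delayed_work work;
	unsigned long active_until;	/* jiffies */
	s32 latency_us;
};

/* Submit path: one check per hctx, no per-CPU iteration on the fast path. */
static inline void blk_mq_hctx_qos_kick(struct blk_mq_hctx_qos *qos)
{
	int cpu;

	qos->active_until = jiffies + msecs_to_jiffies(HCTX_QOS_TIMEOUT_MS);

	if (delayed_work_pending(&qos->work))
		return;		/* constraint already in place */

	/* Slow path, once per burst: constrain the CPUs of this hctx. */
	for_each_cpu(cpu, qos->hctx->cpumask)
		dev_pm_qos_add_request(get_cpu_device(cpu),
				       per_cpu_ptr(qos->reqs, cpu),
				       DEV_PM_QOS_RESUME_LATENCY,
				       qos->latency_us);

	schedule_delayed_work(&qos->work,
			      msecs_to_jiffies(HCTX_QOS_TIMEOUT_MS));
}

/* Timeout work: drop the constraints once the hctx has been idle long enough. */
static void blk_mq_hctx_qos_timeout(struct work_struct *work)
{
	struct blk_mq_hctx_qos *qos =
		container_of(work, struct blk_mq_hctx_qos, work.work);
	int cpu;

	if (!time_after(jiffies, qos->active_until)) {
		schedule_delayed_work(&qos->work,
				      msecs_to_jiffies(HCTX_QOS_TIMEOUT_MS));
		return;
	}

	for_each_cpu(cpu, qos->hctx->cpumask)
		dev_pm_qos_remove_request(per_cpu_ptr(qos->reqs, cpu));
}

The submit path would call blk_mq_hctx_qos_kick() once per dispatched
request or batch, so the per-CPU iteration and the PM QoS updates only
happen on the first request of a burst and when the idle timeout fires.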
* Re: [PATCH 1/1] nvme-pci: Add CPU latency pm-qos handling
2024-10-15 13:29 ` Christoph Hellwig
@ 2024-10-18 7:58 ` Tero Kristo
From: Tero Kristo @ 2024-10-18 7:58 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-kernel, axboe, linux-nvme, sagi, kbusch
On Tue, 2024-10-15 at 15:29 +0200, Christoph Hellwig wrote:
> On Tue, Oct 15, 2024 at 12:25:37PM +0300, Tero Kristo wrote:
> > I've been giving this some thought offline, but can't really think
> > of how this could be done in the generic layers; the code needs to
> > figure out the interrupt that gets fired by the activity, to
> > prevent the CPU that is going to handle that interrupt from going
> > into deep idle and potentially ruining the latency and throughput
> > of the request. The knowledge of this interrupt mapping only
> > resides at the driver level, in this case NVMe.
> >
> > One thing that could be done is to prevent the whole feature from
> > being used on setups where the number of CPUs per irq is above some
> > threshold; let's say 4 as an example.
>
> As a disclaimer I don't really understand the PM QOS framework, just
> the NVMe driver and block layer.
>
> With that, my gut feeling is that all this latency management should
> be driven by the blk_mq_hctx structure, the block layer equivalent
> to a queue. And instead of having a per-cpu array of QoS requests
> per device, there should be one per cpu in the actual mask of the
> hctx, so that you only have to iterate this local shared data
> structure.
>
> Preferably there would be one single active check per hctx and
> not one per cpu, e.g. when the block layer submits commands
> it has to do one single check instead of an iteration. Similarly
> the block layer code would time out the activity once per hctx,
> and only then iterate the (usually few) CPUs per hctx.
>
Thanks for the feedback. I have now reworked and retested my patches
against blk-mq and just posted them to the block mailing list as well.
-Tero