* [RFC PATCHv5 1/7] block: expose blk_stat_{enable,disable}_accounting() to drivers
2025-11-05 10:33 [RFC PATCHv5 0/7] nvme-multipath: introduce adaptive I/O policy Nilay Shroff
@ 2025-11-05 10:33 ` Nilay Shroff
2025-11-05 10:33 ` [RFC PATCHv5 2/7] nvme-multipath: add support for adaptive I/O policy Nilay Shroff
` (5 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Nilay Shroff @ 2025-11-05 10:33 UTC (permalink / raw)
To: linux-nvme; +Cc: hare, hch, kbusch, sagi, dwagner, axboe, kanie, gjoyce
The functions blk_stat_enable_accounting() and
blk_stat_disable_accounting() are currently exported, but their
prototypes are only defined in a private header. Move these prototypes
into a common header so that block drivers can directly use these APIs.
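As a minimal usage sketch (not part of this patch), once the prototypes
live in <linux/blk-mq.h> a block driver can toggle request time/size
accounting on one of its queues directly; the wrapper names below are
illustrative only:
  #include <linux/blk-mq.h>

  /* start recording per-request time/size info for latency sampling */
  static void example_enable_stats(struct request_queue *q)
  {
  	blk_stat_enable_accounting(q);
  }

  /* stop recording once the samples are no longer needed */
  static void example_disable_stats(struct request_queue *q)
  {
  	blk_stat_disable_accounting(q);
  }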
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
block/blk-stat.h | 4 ----
include/linux/blk-mq.h | 4 ++++
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/block/blk-stat.h b/block/blk-stat.h
index 9e05bf18d1be..f5d95dd8c0e9 100644
--- a/block/blk-stat.h
+++ b/block/blk-stat.h
@@ -67,10 +67,6 @@ void blk_free_queue_stats(struct blk_queue_stats *);
void blk_stat_add(struct request *rq, u64 now);
-/* record time/size info in request but not add a callback */
-void blk_stat_enable_accounting(struct request_queue *q);
-void blk_stat_disable_accounting(struct request_queue *q);
-
/**
* blk_stat_alloc_callback() - Allocate a block statistics callback.
* @timer_fn: Timer callback function.
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index b25d12545f46..f647444643b8 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -735,6 +735,10 @@ int blk_rq_poll(struct request *rq, struct io_comp_batch *iob,
bool blk_mq_queue_inflight(struct request_queue *q);
+/* record time/size info in request but not add a callback */
+void blk_stat_enable_accounting(struct request_queue *q);
+void blk_stat_disable_accounting(struct request_queue *q);
+
enum {
/* return when out of requests */
BLK_MQ_REQ_NOWAIT = (__force blk_mq_req_flags_t)(1 << 0),
--
2.51.0
* [RFC PATCHv5 2/7] nvme-multipath: add support for adaptive I/O policy
2025-11-05 10:33 [RFC PATCHv5 0/7] nvme-multipath: introduce adaptive I/O policy Nilay Shroff
2025-11-05 10:33 ` [RFC PATCHv5 1/7] block: expose blk_stat_{enable,disable}_accounting() to drivers Nilay Shroff
@ 2025-11-05 10:33 ` Nilay Shroff
2025-11-05 10:33 ` [RFC PATCHv5 3/7] nvme: add generic debugfs support Nilay Shroff
` (4 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Nilay Shroff @ 2025-11-05 10:33 UTC (permalink / raw)
To: linux-nvme; +Cc: hare, hch, kbusch, sagi, dwagner, axboe, kanie, gjoyce
This commit introduces a new I/O policy named "adaptive". Users can
configure it by writing "adaptive" to
"/sys/class/nvme-subsystem/nvme-subsystemX/iopolicy".
The adaptive policy dynamically distributes I/O based on measured
completion latency. The main idea is to calculate latency for each path,
derive a weight, and then proportionally forward I/O according to those
weights.
To ensure scalability, path latency is measured per-CPU. Each CPU
maintains its own statistics, and I/O forwarding uses these per-CPU
values. Every ~15 seconds, a simple average of the per-CPU batched latency
samples is computed and fed into an Exponentially Weighted Moving
Average (EWMA):
avg_latency = div_u64(batch, batch_count);
new_ewma_latency = (prev_ewma_latency * (WEIGHT-1) + avg_latency)/WEIGHT
With WEIGHT = 8, this assigns 7/8 (~87.5%) weight to the previous
latency value and 1/8 (~12.5%) to the most recent latency. This
smoothing reduces jitter, adapts quickly to changing conditions,
avoids storing historical samples, and works well for both low and
high I/O rates. Path weights are then derived from the smoothed (EWMA)
latency as follows (example with two paths A and B):
path_A_score = NSEC_PER_SEC / path_A_ewma_latency
path_B_score = NSEC_PER_SEC / path_B_ewma_latency
total_score = path_A_score + path_B_score
path_A_weight = (path_A_score * 64) / total_score
path_B_weight = (path_B_score * 64) / total_score
where:
- path_X_ewma_latency is the smoothed latency of a path in nanoseconds
- NSEC_PER_SEC is used as a scaling factor since valid latencies
are < 1 second
- weights are normalized to a 0–64 scale across all paths.
Path credits are refilled based on this weight, with one credit
consumed per I/O. When all credits are consumed, the credits are
refilled again based on the current weight. This ensures that I/O is
distributed across paths proportionally to their calculated weight.
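For illustration only (not part of the patch), the same arithmetic can be
reproduced in a small userspace program; the latency values are made up and
ewma_update() mirrors the kernel-side smoothing step:
  #include <stdio.h>
  #include <stdint.h>

  #define NSEC_PER_SEC	1000000000ULL
  #define EWMA_SHIFT	3	/* 7/8 old, 1/8 new */

  static uint64_t ewma_update(uint64_t old, uint64_t new_sample)
  {
  	return (old * ((1ULL << EWMA_SHIFT) - 1) + new_sample) >> EWMA_SHIFT;
  }

  int main(void)
  {
  	/* smoothed latencies after one EWMA step, in nanoseconds */
  	uint64_t lat_a = ewma_update(100000, 120000);	/* ~102.5us */
  	uint64_t lat_b = ewma_update(400000, 380000);	/* ~397.5us */
  	uint64_t score_a = NSEC_PER_SEC / lat_a;
  	uint64_t score_b = NSEC_PER_SEC / lat_b;
  	uint64_t total = score_a + score_b;
  	/* weights normalized to 0..64 */
  	unsigned int weight_a = score_a * 64 / total;	/* -> 50 */
  	unsigned int weight_b = score_b * 64 / total;	/* -> 13 */

  	printf("path A weight %u, path B weight %u\n", weight_a, weight_b);
  	return 0;
  }
With these weights, path A is granted 50 credits per refill and path B 13,
so roughly four out of five I/Os go to the faster path.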
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/core.c | 15 +-
drivers/nvme/host/ioctl.c | 31 ++-
drivers/nvme/host/multipath.c | 425 ++++++++++++++++++++++++++++++++--
drivers/nvme/host/nvme.h | 74 +++++-
drivers/nvme/host/pr.c | 6 +-
drivers/nvme/host/sysfs.c | 2 +-
6 files changed, 530 insertions(+), 23 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index fa4181d7de73..47f375c63d2d 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -672,6 +672,9 @@ static void nvme_free_ns_head(struct kref *ref)
cleanup_srcu_struct(&head->srcu);
nvme_put_subsystem(head->subsys);
kfree(head->plids);
+#ifdef CONFIG_NVME_MULTIPATH
+ free_percpu(head->adp_path);
+#endif
kfree(head);
}
@@ -689,6 +692,7 @@ static void nvme_free_ns(struct kref *kref)
{
struct nvme_ns *ns = container_of(kref, struct nvme_ns, kref);
+ nvme_free_ns_stat(ns);
put_disk(ns->disk);
nvme_put_ns_head(ns->head);
nvme_put_ctrl(ns->ctrl);
@@ -4137,6 +4141,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
if (nvme_init_ns_head(ns, info))
goto out_cleanup_disk;
+ if (nvme_alloc_ns_stat(ns))
+ goto out_unlink_ns;
+
/*
* If multipathing is enabled, the device name for all disks and not
* just those that represent shared namespaces needs to be based on the
@@ -4161,7 +4168,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
}
if (nvme_update_ns_info(ns, info))
- goto out_unlink_ns;
+ goto out_free_ns_stat;
mutex_lock(&ctrl->namespaces_lock);
/*
@@ -4170,7 +4177,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
*/
if (test_bit(NVME_CTRL_FROZEN, &ctrl->flags)) {
mutex_unlock(&ctrl->namespaces_lock);
- goto out_unlink_ns;
+ goto out_free_ns_stat;
}
nvme_ns_add_to_ctrl_list(ns);
mutex_unlock(&ctrl->namespaces_lock);
@@ -4201,6 +4208,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
list_del_rcu(&ns->list);
mutex_unlock(&ctrl->namespaces_lock);
synchronize_srcu(&ctrl->srcu);
+out_free_ns_stat:
+ nvme_free_ns_stat(ns);
out_unlink_ns:
mutex_lock(&ctrl->subsys->lock);
list_del_rcu(&ns->siblings);
@@ -4244,6 +4253,8 @@ static void nvme_ns_remove(struct nvme_ns *ns)
*/
synchronize_srcu(&ns->head->srcu);
+ nvme_mpath_cancel_adaptive_path_weight_work(ns);
+
/* wait for concurrent submissions */
if (nvme_mpath_clear_current_path(ns))
synchronize_srcu(&ns->head->srcu);
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index c212fa952c0f..759d147d9930 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -700,18 +700,29 @@ static int nvme_ns_head_ctrl_ioctl(struct nvme_ns *ns, unsigned int cmd,
int nvme_ns_head_ioctl(struct block_device *bdev, blk_mode_t mode,
unsigned int cmd, unsigned long arg)
{
+ u8 opcode;
struct nvme_ns_head *head = bdev->bd_disk->private_data;
bool open_for_write = mode & BLK_OPEN_WRITE;
void __user *argp = (void __user *)arg;
struct nvme_ns *ns;
int srcu_idx, ret = -EWOULDBLOCK;
unsigned int flags = 0;
+ unsigned int op_type = NVME_STAT_OTHER;
if (bdev_is_partition(bdev))
flags |= NVME_IOCTL_PARTITION;
+ if (cmd == NVME_IOCTL_SUBMIT_IO) {
+ if (get_user(opcode, (u8 *)argp))
+ return -EFAULT;
+ if (opcode == nvme_cmd_write)
+ op_type = NVME_STAT_WRITE;
+ else if (opcode == nvme_cmd_read)
+ op_type = NVME_STAT_READ;
+ }
+
srcu_idx = srcu_read_lock(&head->srcu);
- ns = nvme_find_path(head);
+ ns = nvme_find_path(head, op_type);
if (!ns)
goto out_unlock;
@@ -733,6 +744,7 @@ int nvme_ns_head_ioctl(struct block_device *bdev, blk_mode_t mode,
long nvme_ns_head_chr_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
+ u8 opcode;
bool open_for_write = file->f_mode & FMODE_WRITE;
struct cdev *cdev = file_inode(file)->i_cdev;
struct nvme_ns_head *head =
@@ -740,9 +752,19 @@ long nvme_ns_head_chr_ioctl(struct file *file, unsigned int cmd,
void __user *argp = (void __user *)arg;
struct nvme_ns *ns;
int srcu_idx, ret = -EWOULDBLOCK;
+ unsigned int op_type = NVME_STAT_OTHER;
+
+ if (cmd == NVME_IOCTL_SUBMIT_IO) {
+ if (get_user(opcode, (u8 *)argp))
+ return -EFAULT;
+ if (opcode == nvme_cmd_write)
+ op_type = NVME_STAT_WRITE;
+ else if (opcode == nvme_cmd_read)
+ op_type = NVME_STAT_READ;
+ }
srcu_idx = srcu_read_lock(&head->srcu);
- ns = nvme_find_path(head);
+ ns = nvme_find_path(head, op_type);
if (!ns)
goto out_unlock;
@@ -762,7 +784,10 @@ int nvme_ns_head_chr_uring_cmd(struct io_uring_cmd *ioucmd,
struct cdev *cdev = file_inode(ioucmd->file)->i_cdev;
struct nvme_ns_head *head = container_of(cdev, struct nvme_ns_head, cdev);
int srcu_idx = srcu_read_lock(&head->srcu);
- struct nvme_ns *ns = nvme_find_path(head);
+ const struct nvme_uring_cmd *cmd = io_uring_sqe_cmd(ioucmd->sqe);
+ struct nvme_ns *ns = nvme_find_path(head,
+ READ_ONCE(cmd->opcode) & 1 ?
+ NVME_STAT_WRITE : NVME_STAT_READ);
int ret = -EINVAL;
if (ns)
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 543e17aead12..55dc28375662 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -6,6 +6,9 @@
#include <linux/backing-dev.h>
#include <linux/moduleparam.h>
#include <linux/vmalloc.h>
+#include <linux/blk-mq.h>
+#include <linux/math64.h>
+#include <linux/rculist.h>
#include <trace/events/block.h>
#include "nvme.h"
@@ -66,9 +69,10 @@ MODULE_PARM_DESC(multipath_always_on,
"create multipath node always except for private namespace with non-unique nsid; note that this also implicitly enables native multipath support");
static const char *nvme_iopolicy_names[] = {
- [NVME_IOPOLICY_NUMA] = "numa",
- [NVME_IOPOLICY_RR] = "round-robin",
- [NVME_IOPOLICY_QD] = "queue-depth",
+ [NVME_IOPOLICY_NUMA] = "numa",
+ [NVME_IOPOLICY_RR] = "round-robin",
+ [NVME_IOPOLICY_QD] = "queue-depth",
+ [NVME_IOPOLICY_ADAPTIVE] = "adaptive",
};
static int iopolicy = NVME_IOPOLICY_NUMA;
@@ -83,6 +87,8 @@ static int nvme_set_iopolicy(const char *val, const struct kernel_param *kp)
iopolicy = NVME_IOPOLICY_RR;
else if (!strncmp(val, "queue-depth", 11))
iopolicy = NVME_IOPOLICY_QD;
+ else if (!strncmp(val, "adaptive", 8))
+ iopolicy = NVME_IOPOLICY_ADAPTIVE;
else
return -EINVAL;
@@ -198,6 +204,204 @@ void nvme_mpath_start_request(struct request *rq)
}
EXPORT_SYMBOL_GPL(nvme_mpath_start_request);
+static void nvme_mpath_weight_work(struct work_struct *weight_work)
+{
+ int cpu, srcu_idx;
+ u32 weight;
+ struct nvme_ns *ns;
+ struct nvme_path_stat *stat;
+ struct nvme_path_work *work = container_of(weight_work,
+ struct nvme_path_work, weight_work);
+ struct nvme_ns_head *head = work->ns->head;
+ int op_type = work->op_type;
+ u64 total_score = 0;
+
+ cpu = get_cpu();
+
+ srcu_idx = srcu_read_lock(&head->srcu);
+ list_for_each_entry_srcu(ns, &head->list, siblings,
+ srcu_read_lock_held(&head->srcu)) {
+
+ stat = &this_cpu_ptr(ns->info)[op_type].stat;
+ if (!READ_ONCE(stat->slat_ns)) {
+ stat->score = 0;
+ continue;
+ }
+ /*
+ * Compute the path score as the inverse of smoothed
+ * latency, scaled by NSEC_PER_SEC. Floating point
+ * math is unavailable in the kernel, so fixed-point
+ * scaling is used instead. NSEC_PER_SEC is chosen
+ * because valid latencies are always < 1 second; longer
+ * latencies are ignored.
+ */
+ stat->score = div_u64(NSEC_PER_SEC, READ_ONCE(stat->slat_ns));
+
+ /* Compute total score. */
+ total_score += stat->score;
+ }
+
+ if (!total_score)
+ goto out;
+
+ /*
+ * After computing the total score, we derive the per-path weight
+ * (normalized to the range 0–64). The weight represents the
+ * relative share of I/O the path should receive.
+ *
+ * - lower smoothed latency -> higher weight
+ * - higher smoothed latency -> lower weight
+ *
+ * Next, while forwarding I/O, we assign "credits" to each path
+ * based on its weight (please also refer nvme_adaptive_path()):
+ * - Initially, credits = weight.
+ * - Each time an I/O is dispatched on a path, its credits are
+ * decremented proportionally.
+ * - When a path runs out of credits, it becomes temporarily
+ * ineligible until credit is refilled.
+ *
+ * I/O distribution is therefore governed by available credits,
+ * ensuring that over time the proportion of I/O sent to each
+ * path matches its weight (and thus its performance).
+ */
+ list_for_each_entry_srcu(ns, &head->list, siblings,
+ srcu_read_lock_held(&head->srcu)) {
+
+ stat = &this_cpu_ptr(ns->info)[op_type].stat;
+ weight = div_u64(stat->score * 64, total_score);
+
+ /*
+ * Ensure the path weight never drops below 1. A weight
+ * of 0 is used only for newly added paths. During
+ * bootstrap, a few I/Os are sent to such paths to
+ * establish an initial weight. Enforcing a minimum
+ * weight of 1 guarantees that no path is forgotten and
+ * that each path is probed at least occasionally.
+ */
+ if (!weight)
+ weight = 1;
+
+ WRITE_ONCE(stat->weight, weight);
+ }
+out:
+ srcu_read_unlock(&head->srcu, srcu_idx);
+ put_cpu();
+}
+
+/*
+ * Formula to calculate the EWMA (Exponentially Weighted Moving Average):
+ * ewma = (old_ewma * ((1 << EWMA_SHIFT) - 1) + new_sample) >> EWMA_SHIFT
+ * For instance, with EWMA_SHIFT = 3, this assigns 7/8 (~87.5 %) weight to
+ * the existing/old ewma and 1/8 (~12.5%) weight to the new sample.
+ */
+static inline u64 ewma_update(u64 old, u64 new)
+{
+ return (old * ((1 << NVME_DEFAULT_ADP_EWMA_SHIFT) - 1)
+ + new) >> NVME_DEFAULT_ADP_EWMA_SHIFT;
+}
+
+static void nvme_mpath_add_sample(struct request *rq, struct nvme_ns *ns)
+{
+ int cpu;
+ unsigned int op_type;
+ struct nvme_path_info *info;
+ struct nvme_path_stat *stat;
+ u64 now, latency, slat_ns, avg_lat_ns;
+ struct nvme_ns_head *head = ns->head;
+
+ if (list_is_singular(&head->list))
+ return;
+
+ now = ktime_get_ns();
+ latency = now >= rq->io_start_time_ns ? now - rq->io_start_time_ns : 0;
+ if (!latency)
+ return;
+
+ /*
+ * The completion path is serialized (the same completion queue is
+ * never processed on multiple CPUs at once), so we can safely access
+ * the per-cpu nvme path stat here even from another cpu (in case the
+ * completion cpu is different from the submission cpu).
+ * The only field which may be accessed concurrently here is the path
+ * ->weight, which may be accessed by this function as well as by the
+ * I/O submission path during path selection, so we protect ->weight
+ * using READ_ONCE/WRITE_ONCE. This may not be 100% accurate, but we
+ * don't need to be so accurate here: the path credit is anyway
+ * refilled, based on the path weight, once a path consumes all its
+ * credits, and the path weight/credit is capped at 64. Please also
+ * refer to nvme_adaptive_path().
+ */
+ cpu = blk_mq_rq_cpu(rq);
+ op_type = nvme_data_dir(req_op(rq));
+ info = &per_cpu_ptr(ns->info, cpu)[op_type];
+ stat = &info->stat;
+
+ /*
+ * If latency > ~1s then ignore this sample to prevent EWMA from being
+ * skewed by pathological outliers (multi-second waits, controller
+ * timeouts etc.). This keeps path scores representative of normal
+ * performance and avoids instability from rare spikes. If such high
+ * latency is real, ANA state reporting or keep-alive error counters
+ * will mark the path unhealthy and remove it from the head node list,
+ * so we safely skip such sample here.
+ */
+ if (unlikely(latency > NSEC_PER_SEC)) {
+ stat->nr_ignored++;
+ dev_warn_ratelimited(ns->ctrl->device,
+ "ignoring sample with >1s latency (possible controller stall or timeout)\n");
+ return;
+ }
+
+ /*
+ * Accumulate latency samples and increment the batch count for each
+ * ~15 second interval. When the interval expires, compute the simple
+ * average latency over that window, then update the smoothed (EWMA)
+ * latency. The path weight is recalculated based on this smoothed
+ * latency.
+ */
+ stat->batch += latency;
+ stat->batch_count++;
+ stat->nr_samples++;
+
+ if (now > stat->last_weight_ts &&
+ (now - stat->last_weight_ts) >= NVME_DEFAULT_ADP_WEIGHT_TIMEOUT) {
+
+ stat->last_weight_ts = now;
+
+ /*
+ * Find simple average latency for the last epoch (~15 sec
+ * interval).
+ */
+ avg_lat_ns = div_u64(stat->batch, stat->batch_count);
+
+ /*
+ * Calculate smooth/EWMA (Exponentially Weighted Moving Average)
+ * latency. EWMA is preferred over simple average latency
+ * because it smooths naturally, reduces jitter from sudden
+ * spikes, and adapts faster to changing conditions. It also
+ * avoids storing historical samples, and works well for both
+ * slow and fast I/O rates.
+ * Formula:
+ * slat_ns = (prev_slat_ns * (WEIGHT - 1) + avg_lat_ns) / WEIGHT
+ * With WEIGHT = 8, this assigns 7/8 (~87.5%) weight to the
+ * existing smoothed latency and 1/8 (~12.5%) to the new sample.
+ */
+ if (unlikely(!stat->slat_ns))
+ WRITE_ONCE(stat->slat_ns, avg_lat_ns);
+ else {
+ slat_ns = ewma_update(stat->slat_ns, avg_lat_ns);
+ WRITE_ONCE(stat->slat_ns, slat_ns);
+ }
+
+ stat->batch = stat->batch_count = 0;
+
+ /*
+ * Defer calculation of the path weight in per-cpu workqueue.
+ */
+ schedule_work_on(cpu, &info->work.weight_work);
+ }
+}
+
void nvme_mpath_end_request(struct request *rq)
{
struct nvme_ns *ns = rq->q->queuedata;
@@ -205,6 +409,9 @@ void nvme_mpath_end_request(struct request *rq)
if (nvme_req(rq)->flags & NVME_MPATH_CNT_ACTIVE)
atomic_dec_if_positive(&ns->ctrl->nr_active);
+ if (test_bit(NVME_NS_PATH_STAT, &ns->flags))
+ nvme_mpath_add_sample(rq, ns);
+
if (!(nvme_req(rq)->flags & NVME_MPATH_IO_STATS))
return;
bdev_end_io_acct(ns->head->disk->part0, req_op(rq),
@@ -238,6 +445,62 @@ static const char *nvme_ana_state_names[] = {
[NVME_ANA_CHANGE] = "change",
};
+static void nvme_mpath_reset_adaptive_path_stat(struct nvme_ns *ns)
+{
+ int i, cpu;
+ struct nvme_path_stat *stat;
+
+ for_each_possible_cpu(cpu) {
+ for (i = 0; i < NVME_NUM_STAT_GROUPS; i++) {
+ stat = &per_cpu_ptr(ns->info, cpu)[i].stat;
+ memset(stat, 0, sizeof(struct nvme_path_stat));
+ }
+ }
+}
+
+void nvme_mpath_cancel_adaptive_path_weight_work(struct nvme_ns *ns)
+{
+ int i, cpu;
+ struct nvme_path_info *info;
+
+ if (!test_bit(NVME_NS_PATH_STAT, &ns->flags))
+ return;
+
+ for_each_online_cpu(cpu) {
+ for (i = 0; i < NVME_NUM_STAT_GROUPS; i++) {
+ info = &per_cpu_ptr(ns->info, cpu)[i];
+ cancel_work_sync(&info->work.weight_work);
+ }
+ }
+}
+
+static bool nvme_mpath_enable_adaptive_path_policy(struct nvme_ns *ns)
+{
+ struct nvme_ns_head *head = ns->head;
+
+ if (!head->disk || head->subsys->iopolicy != NVME_IOPOLICY_ADAPTIVE)
+ return false;
+
+ if (test_and_set_bit(NVME_NS_PATH_STAT, &ns->flags))
+ return false;
+
+ blk_queue_flag_set(QUEUE_FLAG_SAME_FORCE, ns->queue);
+ blk_stat_enable_accounting(ns->queue);
+ return true;
+}
+
+static bool nvme_mpath_disable_adaptive_path_policy(struct nvme_ns *ns)
+{
+
+ if (!test_and_clear_bit(NVME_NS_PATH_STAT, &ns->flags))
+ return false;
+
+ blk_stat_disable_accounting(ns->queue);
+ blk_queue_flag_clear(QUEUE_FLAG_SAME_FORCE, ns->queue);
+ nvme_mpath_reset_adaptive_path_stat(ns);
+ return true;
+}
+
bool nvme_mpath_clear_current_path(struct nvme_ns *ns)
{
struct nvme_ns_head *head = ns->head;
@@ -253,6 +516,8 @@ bool nvme_mpath_clear_current_path(struct nvme_ns *ns)
changed = true;
}
}
+ if (nvme_mpath_disable_adaptive_path_policy(ns))
+ changed = true;
out:
return changed;
}
@@ -271,6 +536,45 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
srcu_read_unlock(&ctrl->srcu, srcu_idx);
}
+int nvme_alloc_ns_stat(struct nvme_ns *ns)
+{
+ int i, cpu;
+ struct nvme_path_work *work;
+ gfp_t gfp = GFP_KERNEL | __GFP_ZERO;
+
+ if (!ns->head->disk)
+ return 0;
+
+ ns->info = __alloc_percpu_gfp(NVME_NUM_STAT_GROUPS *
+ sizeof(struct nvme_path_info),
+ __alignof__(struct nvme_path_info), gfp);
+ if (!ns->info)
+ return -ENOMEM;
+
+ for_each_possible_cpu(cpu) {
+ for (i = 0; i < NVME_NUM_STAT_GROUPS; i++) {
+ work = &per_cpu_ptr(ns->info, cpu)[i].work;
+ work->ns = ns;
+ work->op_type = i;
+ INIT_WORK(&work->weight_work, nvme_mpath_weight_work);
+ }
+ }
+
+ return 0;
+}
+
+static void nvme_mpath_set_ctrl_paths(struct nvme_ctrl *ctrl)
+{
+ struct nvme_ns *ns;
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
+ srcu_read_lock_held(&ctrl->srcu))
+ nvme_mpath_enable_adaptive_path_policy(ns);
+ srcu_read_unlock(&ctrl->srcu, srcu_idx);
+}
+
void nvme_mpath_revalidate_paths(struct nvme_ns *ns)
{
struct nvme_ns_head *head = ns->head;
@@ -283,6 +587,8 @@ void nvme_mpath_revalidate_paths(struct nvme_ns *ns)
srcu_read_lock_held(&head->srcu)) {
if (capacity != get_capacity(ns->disk))
clear_bit(NVME_NS_READY, &ns->flags);
+
+ nvme_mpath_reset_adaptive_path_stat(ns);
}
srcu_read_unlock(&head->srcu, srcu_idx);
@@ -407,6 +713,92 @@ static struct nvme_ns *nvme_round_robin_path(struct nvme_ns_head *head)
return found;
}
+static inline bool nvme_state_is_live(enum nvme_ana_state state)
+{
+ return state == NVME_ANA_OPTIMIZED || state == NVME_ANA_NONOPTIMIZED;
+}
+
+static struct nvme_ns *nvme_adaptive_path(struct nvme_ns_head *head,
+ unsigned int op_type)
+{
+ struct nvme_ns *ns, *start, *found = NULL;
+ struct nvme_path_stat *stat;
+ u32 weight;
+ int cpu;
+
+ cpu = get_cpu();
+ ns = *this_cpu_ptr(head->adp_path);
+ if (unlikely(!ns)) {
+ ns = list_first_or_null_rcu(&head->list,
+ struct nvme_ns, siblings);
+ if (unlikely(!ns))
+ goto out;
+ }
+found_ns:
+ start = ns;
+ while (nvme_path_is_disabled(ns) ||
+ !nvme_state_is_live(ns->ana_state)) {
+ ns = list_next_entry_circular(ns, &head->list, siblings);
+
+ /*
+ * If we iterate through all paths in the list but find each
+ * path in list is either disabled or dead then bail out.
+ */
+ if (ns == start)
+ goto out;
+ }
+
+ stat = &this_cpu_ptr(ns->info)[op_type].stat;
+
+ /*
+ * When the head path-list is singular we don't calculate the only
+ * path's weight, as an optimization: there is no need to forward
+ * I/O to more than one path. The other possibility is that the
+ * path is newly added and its weight is not yet known. So we go
+ * round-robin across each such path and forward I/O to it. Once
+ * responses for those I/Os start coming back, the path weight
+ * calculation kicks in and we then start using path credits for
+ * forwarding I/O.
+ */
+ weight = READ_ONCE(stat->weight);
+ if (!weight) {
+ found = ns;
+ goto out;
+ }
+
+ /*
+ * To keep path selection logic simple, we don't distinguish
+ * between ANA optimized and non-optimized states. The non-
+ * optimized path is expected to have a lower weight, and
+ * therefore fewer credits. As a result, only a small number of
+ * I/Os will be forwarded to paths in the non-optimized state.
+ */
+ if (stat->credit > 0) {
+ --stat->credit;
+ found = ns;
+ goto out;
+ } else {
+ /*
+ * Refill credit from the path weight and move to the next path.
+ * The refilled credit of the current path will be used next, once
+ * all remaining paths have exhausted their credits.
+ */
+ weight = READ_ONCE(stat->weight);
+ stat->credit = weight;
+ ns = list_next_entry_circular(ns, &head->list, siblings);
+ if (likely(ns))
+ goto found_ns;
+ }
+out:
+ if (found) {
+ stat->sel++;
+ *this_cpu_ptr(head->adp_path) = found;
+ }
+
+ put_cpu();
+ return found;
+}
+
static struct nvme_ns *nvme_queue_depth_path(struct nvme_ns_head *head)
{
struct nvme_ns *best_opt = NULL, *best_nonopt = NULL, *ns;
@@ -463,9 +855,12 @@ static struct nvme_ns *nvme_numa_path(struct nvme_ns_head *head)
return ns;
}
-inline struct nvme_ns *nvme_find_path(struct nvme_ns_head *head)
+inline struct nvme_ns *nvme_find_path(struct nvme_ns_head *head,
+ unsigned int op_type)
{
switch (READ_ONCE(head->subsys->iopolicy)) {
+ case NVME_IOPOLICY_ADAPTIVE:
+ return nvme_adaptive_path(head, op_type);
case NVME_IOPOLICY_QD:
return nvme_queue_depth_path(head);
case NVME_IOPOLICY_RR:
@@ -525,7 +920,7 @@ static void nvme_ns_head_submit_bio(struct bio *bio)
return;
srcu_idx = srcu_read_lock(&head->srcu);
- ns = nvme_find_path(head);
+ ns = nvme_find_path(head, nvme_data_dir(bio_op(bio)));
if (likely(ns)) {
bio_set_dev(bio, ns->disk->part0);
bio->bi_opf |= REQ_NVME_MPATH;
@@ -567,7 +962,7 @@ static int nvme_ns_head_get_unique_id(struct gendisk *disk, u8 id[16],
int srcu_idx, ret = -EWOULDBLOCK;
srcu_idx = srcu_read_lock(&head->srcu);
- ns = nvme_find_path(head);
+ ns = nvme_find_path(head, NVME_STAT_OTHER);
if (ns)
ret = nvme_ns_get_unique_id(ns, id, type);
srcu_read_unlock(&head->srcu, srcu_idx);
@@ -583,7 +978,7 @@ static int nvme_ns_head_report_zones(struct gendisk *disk, sector_t sector,
int srcu_idx, ret = -EWOULDBLOCK;
srcu_idx = srcu_read_lock(&head->srcu);
- ns = nvme_find_path(head);
+ ns = nvme_find_path(head, NVME_STAT_OTHER);
if (ns)
ret = nvme_ns_report_zones(ns, sector, nr_zones, cb, data);
srcu_read_unlock(&head->srcu, srcu_idx);
@@ -725,6 +1120,9 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
INIT_WORK(&head->partition_scan_work, nvme_partition_scan_work);
INIT_DELAYED_WORK(&head->remove_work, nvme_remove_head_work);
head->delayed_removal_secs = 0;
+ head->adp_path = alloc_percpu_gfp(struct nvme_ns*, GFP_KERNEL);
+ if (!head->adp_path)
+ return -ENOMEM;
/*
* If "multipath_always_on" is enabled, a multipath node is added
@@ -809,6 +1207,10 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
}
mutex_unlock(&head->lock);
+ mutex_lock(&nvme_subsystems_lock);
+ nvme_mpath_enable_adaptive_path_policy(ns);
+ mutex_unlock(&nvme_subsystems_lock);
+
synchronize_srcu(&head->srcu);
kblockd_schedule_work(&head->requeue_work);
}
@@ -857,11 +1259,6 @@ static int nvme_parse_ana_log(struct nvme_ctrl *ctrl, void *data,
return 0;
}
-static inline bool nvme_state_is_live(enum nvme_ana_state state)
-{
- return state == NVME_ANA_OPTIMIZED || state == NVME_ANA_NONOPTIMIZED;
-}
-
static void nvme_update_ns_ana_state(struct nvme_ana_group_desc *desc,
struct nvme_ns *ns)
{
@@ -1039,10 +1436,12 @@ static void nvme_subsys_iopolicy_update(struct nvme_subsystem *subsys,
WRITE_ONCE(subsys->iopolicy, iopolicy);
- /* iopolicy changes clear the mpath by design */
+ /* iopolicy changes clear/reset the mpath by design */
mutex_lock(&nvme_subsystems_lock);
list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
nvme_mpath_clear_ctrl_paths(ctrl);
+ list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
+ nvme_mpath_set_ctrl_paths(ctrl);
mutex_unlock(&nvme_subsystems_lock);
pr_notice("subsysnqn %s iopolicy changed from %s to %s\n",
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 102fae6a231c..715c7053054c 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -28,7 +28,10 @@ extern unsigned int nvme_io_timeout;
extern unsigned int admin_timeout;
#define NVME_ADMIN_TIMEOUT (admin_timeout * HZ)
-#define NVME_DEFAULT_KATO 5
+#define NVME_DEFAULT_KATO 5
+
+#define NVME_DEFAULT_ADP_EWMA_SHIFT 3
+#define NVME_DEFAULT_ADP_WEIGHT_TIMEOUT (15 * NSEC_PER_SEC)
#ifdef CONFIG_ARCH_NO_SG_CHAIN
#define NVME_INLINE_SG_CNT 0
@@ -421,6 +424,7 @@ enum nvme_iopolicy {
NVME_IOPOLICY_NUMA,
NVME_IOPOLICY_RR,
NVME_IOPOLICY_QD,
+ NVME_IOPOLICY_ADAPTIVE,
};
struct nvme_subsystem {
@@ -459,6 +463,37 @@ struct nvme_ns_ids {
u8 csi;
};
+enum nvme_stat_group {
+ NVME_STAT_READ,
+ NVME_STAT_WRITE,
+ NVME_STAT_OTHER,
+ NVME_NUM_STAT_GROUPS
+};
+
+struct nvme_path_stat {
+ u64 nr_samples; /* total num of samples processed */
+ u64 nr_ignored; /* num. of samples ignored */
+ u64 slat_ns; /* smoothed (ewma) latency in nanoseconds */
+ u64 score; /* score used for weight calculation */
+ u64 last_weight_ts; /* timestamp of the last weight calculation */
+ u64 sel; /* num of times this path is selected for I/O */
+ u64 batch; /* accumulated latency sum for current window */
+ u32 batch_count; /* num of samples accumulated in current window */
+ u32 weight; /* path weight */
+ u32 credit; /* path credit for I/O forwarding */
+};
+
+struct nvme_path_work {
+ struct nvme_ns *ns; /* owning namespace */
+ struct work_struct weight_work; /* deferred work for weight calculation */
+ int op_type; /* op type : READ/WRITE/OTHER */
+};
+
+struct nvme_path_info {
+ struct nvme_path_stat stat; /* path statistics */
+ struct nvme_path_work work; /* background worker context */
+};
+
/*
* Anchor structure for namespaces. There is one for each namespace in a
* NVMe subsystem that any of our controllers can see, and the namespace
@@ -508,6 +543,9 @@ struct nvme_ns_head {
unsigned long flags;
struct delayed_work remove_work;
unsigned int delayed_removal_secs;
+
+ struct nvme_ns * __percpu *adp_path;
+
#define NVME_NSHEAD_DISK_LIVE 0
#define NVME_NSHEAD_QUEUE_IF_NO_PATH 1
struct nvme_ns __rcu *current_path[];
@@ -534,6 +572,7 @@ struct nvme_ns {
#ifdef CONFIG_NVME_MULTIPATH
enum nvme_ana_state ana_state;
u32 ana_grpid;
+ struct nvme_path_info __percpu *info;
#endif
struct list_head siblings;
struct kref kref;
@@ -545,6 +584,7 @@ struct nvme_ns {
#define NVME_NS_FORCE_RO 3
#define NVME_NS_READY 4
#define NVME_NS_SYSFS_ATTR_LINK 5
+#define NVME_NS_PATH_STAT 6
struct cdev cdev;
struct device cdev_device;
@@ -949,7 +989,17 @@ extern const struct attribute_group *nvme_dev_attr_groups[];
extern const struct block_device_operations nvme_bdev_ops;
void nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl);
-struct nvme_ns *nvme_find_path(struct nvme_ns_head *head);
+struct nvme_ns *nvme_find_path(struct nvme_ns_head *head, unsigned int op_type);
+static inline int nvme_data_dir(const enum req_op op)
+{
+ if (op == REQ_OP_READ)
+ return NVME_STAT_READ;
+ else if (op_is_write(op))
+ return NVME_STAT_WRITE;
+ else
+ return NVME_STAT_OTHER;
+}
+
#ifdef CONFIG_NVME_MULTIPATH
static inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl)
{
@@ -972,12 +1022,14 @@ void nvme_mpath_init_ctrl(struct nvme_ctrl *ctrl);
void nvme_mpath_update(struct nvme_ctrl *ctrl);
void nvme_mpath_uninit(struct nvme_ctrl *ctrl);
void nvme_mpath_stop(struct nvme_ctrl *ctrl);
+void nvme_mpath_cancel_adaptive_path_weight_work(struct nvme_ns *ns);
bool nvme_mpath_clear_current_path(struct nvme_ns *ns);
void nvme_mpath_revalidate_paths(struct nvme_ns *ns);
void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl);
void nvme_mpath_remove_disk(struct nvme_ns_head *head);
void nvme_mpath_start_request(struct request *rq);
void nvme_mpath_end_request(struct request *rq);
+int nvme_alloc_ns_stat(struct nvme_ns *ns);
static inline void nvme_trace_bio_complete(struct request *req)
{
@@ -1005,6 +1057,13 @@ static inline bool nvme_mpath_queue_if_no_path(struct nvme_ns_head *head)
return true;
return false;
}
+static inline void nvme_free_ns_stat(struct nvme_ns *ns)
+{
+ if (!ns->head->disk)
+ return;
+
+ free_percpu(ns->info);
+}
#else
#define multipath false
static inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl)
@@ -1096,6 +1155,17 @@ static inline bool nvme_mpath_queue_if_no_path(struct nvme_ns_head *head)
{
return false;
}
+static inline void nvme_mpath_cancel_adaptive_path_weight_work(
+ struct nvme_ns *ns)
+{
+}
+static inline int nvme_alloc_ns_stat(struct nvme_ns *ns)
+{
+ return 0;
+}
+static inline void nvme_free_ns_stat(struct nvme_ns *ns)
+{
+}
#endif /* CONFIG_NVME_MULTIPATH */
int nvme_ns_get_unique_id(struct nvme_ns *ns, u8 id[16],
diff --git a/drivers/nvme/host/pr.c b/drivers/nvme/host/pr.c
index ca6a74607b13..7aca2186c462 100644
--- a/drivers/nvme/host/pr.c
+++ b/drivers/nvme/host/pr.c
@@ -53,10 +53,12 @@ static int nvme_send_ns_head_pr_command(struct block_device *bdev,
struct nvme_command *c, void *data, unsigned int data_len)
{
struct nvme_ns_head *head = bdev->bd_disk->private_data;
- int srcu_idx = srcu_read_lock(&head->srcu);
- struct nvme_ns *ns = nvme_find_path(head);
+ int srcu_idx;
+ struct nvme_ns *ns;
int ret = -EWOULDBLOCK;
+ srcu_idx = srcu_read_lock(&head->srcu);
+ ns = nvme_find_path(head, NVME_STAT_OTHER);
if (ns) {
c->common.nsid = cpu_to_le32(ns->head->ns_id);
ret = nvme_submit_sync_cmd(ns->queue, c, data, data_len);
diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
index 29430949ce2f..1cbab90ed42e 100644
--- a/drivers/nvme/host/sysfs.c
+++ b/drivers/nvme/host/sysfs.c
@@ -194,7 +194,7 @@ static int ns_head_update_nuse(struct nvme_ns_head *head)
return 0;
srcu_idx = srcu_read_lock(&head->srcu);
- ns = nvme_find_path(head);
+ ns = nvme_find_path(head, NVME_STAT_OTHER);
if (!ns)
goto out_unlock;
--
2.51.0
* [RFC PATCHv5 3/7] nvme: add generic debugfs support
2025-11-05 10:33 [RFC PATCHv5 0/7] nvme-multipath: introduce adaptive I/O policy Nilay Shroff
2025-11-05 10:33 ` [RFC PATCHv5 1/7] block: expose blk_stat_{enable,disable}_accounting() to drivers Nilay Shroff
2025-11-05 10:33 ` [RFC PATCHv5 2/7] nvme-multipath: add support for adaptive I/O policy Nilay Shroff
@ 2025-11-05 10:33 ` Nilay Shroff
2025-11-05 10:33 ` [RFC PATCHv5 4/7] nvme-multipath: add debugfs attribute adaptive_ewma_shift Nilay Shroff
` (3 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Nilay Shroff @ 2025-11-05 10:33 UTC (permalink / raw)
To: linux-nvme; +Cc: hare, hch, kbusch, sagi, dwagner, axboe, kanie, gjoyce
Add generic infrastructure for creating and managing debugfs files in
the NVMe module. This introduces helper APIs that allow NVMe drivers to
register and unregister debugfs entries, along with a reusable attribute
structure for defining new debugfs files.
The implementation uses seq_file interfaces to safely expose per-NS and
per-NS-head statistics, while supporting both simple show callbacks and
full seq_operations.
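As a usage sketch (not taken from this patch), adding a new read-only file
then amounts to one table entry pairing a name and mode with a show()
callback; the attribute name and callback below are made up:
  static int example_show(void *data, struct seq_file *m)
  {
  	struct nvme_ns_head *head = data;	/* disk->private_data */

  	seq_printf(m, "%u\n", head->ns_id);
  	return 0;
  }

  static const struct nvme_debugfs_attr example_attrs[] = {
  	{ "example", 0400, example_show },
  	{},
  };
Write support or a full seq_operations implementation can be hooked up the
same way through the remaining fields of struct nvme_debugfs_attr.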
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/Makefile | 2 +-
drivers/nvme/host/core.c | 3 +
drivers/nvme/host/debugfs.c | 138 ++++++++++++++++++++++++++++++++++
drivers/nvme/host/multipath.c | 2 +
drivers/nvme/host/nvme.h | 10 +++
5 files changed, 154 insertions(+), 1 deletion(-)
create mode 100644 drivers/nvme/host/debugfs.c
diff --git a/drivers/nvme/host/Makefile b/drivers/nvme/host/Makefile
index 6414ec968f99..7962dfc3b2ad 100644
--- a/drivers/nvme/host/Makefile
+++ b/drivers/nvme/host/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_NVME_FC) += nvme-fc.o
obj-$(CONFIG_NVME_TCP) += nvme-tcp.o
obj-$(CONFIG_NVME_APPLE) += nvme-apple.o
-nvme-core-y += core.o ioctl.o sysfs.o pr.o
+nvme-core-y += core.o ioctl.o sysfs.o pr.o debugfs.o
nvme-core-$(CONFIG_NVME_VERBOSE_ERRORS) += constants.o
nvme-core-$(CONFIG_TRACING) += trace.o
nvme-core-$(CONFIG_NVME_MULTIPATH) += multipath.o
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 47f375c63d2d..c15dfcaf3de2 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4187,6 +4187,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
if (device_add_disk(ctrl->device, ns->disk, nvme_ns_attr_groups))
goto out_cleanup_ns_from_list;
+ nvme_debugfs_register(ns->disk);
+
if (!nvme_ns_head_multipath(ns->head))
nvme_add_ns_cdev(ns);
@@ -4276,6 +4278,7 @@ static void nvme_ns_remove(struct nvme_ns *ns)
nvme_mpath_remove_sysfs_link(ns);
+ nvme_debugfs_unregister(ns->disk);
del_gendisk(ns->disk);
mutex_lock(&ns->ctrl->namespaces_lock);
diff --git a/drivers/nvme/host/debugfs.c b/drivers/nvme/host/debugfs.c
new file mode 100644
index 000000000000..6bb57c4b5c3b
--- /dev/null
+++ b/drivers/nvme/host/debugfs.c
@@ -0,0 +1,138 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 IBM Corporation
+ * Nilay Shroff <nilay@linux.ibm.com>
+ */
+
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+
+#include "nvme.h"
+
+struct nvme_debugfs_attr {
+ const char *name;
+ umode_t mode;
+ int (*show)(void *data, struct seq_file *m);
+ ssize_t (*write)(void *data, const char __user *buf, size_t count,
+ loff_t *ppos);
+ const struct seq_operations *seq_ops;
+};
+
+struct nvme_debugfs_ctx {
+ void *data;
+ struct nvme_debugfs_attr *attr;
+ int srcu_idx;
+};
+
+static int nvme_debugfs_show(struct seq_file *m, void *v)
+{
+ struct nvme_debugfs_ctx *ctx = m->private;
+ void *data = ctx->data;
+ struct nvme_debugfs_attr *attr = ctx->attr;
+
+ return attr->show(data, m);
+}
+
+static int nvme_debugfs_open(struct inode *inode, struct file *file)
+{
+ void *data = inode->i_private;
+ struct nvme_debugfs_attr *attr = debugfs_get_aux(file);
+ struct nvme_debugfs_ctx *ctx;
+ struct seq_file *m;
+ int ret;
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (WARN_ON_ONCE(!ctx))
+ return -ENOMEM;
+
+ ctx->data = data;
+ ctx->attr = attr;
+
+ if (attr->seq_ops) {
+ ret = seq_open(file, attr->seq_ops);
+ if (ret) {
+ kfree(ctx);
+ return ret;
+ }
+ m = file->private_data;
+ m->private = ctx;
+ return ret;
+ }
+
+ if (WARN_ON_ONCE(!attr->show)) {
+ kfree(ctx);
+ return -EPERM;
+ }
+
+ return single_open(file, nvme_debugfs_show, ctx);
+}
+
+static ssize_t nvme_debugfs_write(struct file *file, const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ struct seq_file *m = file->private_data;
+ struct nvme_debugfs_ctx *ctx = m->private;
+ struct nvme_debugfs_attr *attr = ctx->attr;
+
+ if (!attr->write)
+ return -EPERM;
+
+ return attr->write(ctx->data, buf, count, ppos);
+}
+
+static int nvme_debugfs_release(struct inode *inode, struct file *file)
+{
+ struct seq_file *m = file->private_data;
+ struct nvme_debugfs_ctx *ctx = m->private;
+ struct nvme_debugfs_attr *attr = ctx->attr;
+ int ret;
+
+ if (attr->seq_ops)
+ ret = seq_release(inode, file);
+ else
+ ret = single_release(inode, file);
+
+ kfree(ctx);
+ return ret;
+}
+
+static const struct file_operations nvme_debugfs_fops = {
+ .owner = THIS_MODULE,
+ .open = nvme_debugfs_open,
+ .read = seq_read,
+ .write = nvme_debugfs_write,
+ .llseek = seq_lseek,
+ .release = nvme_debugfs_release,
+};
+
+
+static const struct nvme_debugfs_attr nvme_mpath_debugfs_attrs[] = {
+ {},
+};
+
+static const struct nvme_debugfs_attr nvme_ns_debugfs_attrs[] = {
+ {},
+};
+
+static void nvme_debugfs_create_files(struct request_queue *q,
+ const struct nvme_debugfs_attr *attr, void *data)
+{
+ if (WARN_ON_ONCE(!q->debugfs_dir))
+ return;
+
+ for (; attr->name; attr++)
+ debugfs_create_file_aux(attr->name, attr->mode, q->debugfs_dir,
+ data, (void *)attr, &nvme_debugfs_fops);
+}
+
+void nvme_debugfs_register(struct gendisk *disk)
+{
+ const struct nvme_debugfs_attr *attr;
+
+ if (nvme_disk_is_ns_head(disk))
+ attr = nvme_mpath_debugfs_attrs;
+ else
+ attr = nvme_ns_debugfs_attrs;
+
+ nvme_debugfs_create_files(disk->queue, attr, disk->private_data);
+}
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 55dc28375662..047dd9da9cbf 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -1086,6 +1086,7 @@ static void nvme_remove_head(struct nvme_ns_head *head)
nvme_cdev_del(&head->cdev, &head->cdev_device);
synchronize_srcu(&head->srcu);
+ nvme_debugfs_unregister(head->disk);
del_gendisk(head->disk);
}
nvme_put_ns_head(head);
@@ -1192,6 +1193,7 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
}
nvme_add_ns_head_cdev(head);
kblockd_schedule_work(&head->partition_scan_work);
+ nvme_debugfs_register(head->disk);
}
nvme_mpath_add_sysfs_link(ns->head);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 715c7053054c..1c1ec2a7f9ad 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -1000,6 +1000,16 @@ static inline int nvme_data_dir(const enum req_op op)
return NVME_STAT_OTHER;
}
+void nvme_debugfs_register(struct gendisk *disk);
+static inline void nvme_debugfs_unregister(struct gendisk *disk)
+{
+ /*
+ * Nothing to do for now. When the request queue is unregistered,
+ * all files under q->debugfs_dir are recursively deleted.
+ * This is just a placeholder; the compiler will optimize it out.
+ */
+}
+
#ifdef CONFIG_NVME_MULTIPATH
static inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl)
{
--
2.51.0
* [RFC PATCHv5 4/7] nvme-multipath: add debugfs attribute adaptive_ewma_shift
2025-11-05 10:33 [RFC PATCHv5 0/7] nvme-multipath: introduce adaptive I/O policy Nilay Shroff
` (2 preceding siblings ...)
2025-11-05 10:33 ` [RFC PATCHv5 3/7] nvme: add generic debugfs support Nilay Shroff
@ 2025-11-05 10:33 ` Nilay Shroff
2025-11-05 10:33 ` [RFC PATCHv5 5/7] nvme-multipath: add debugfs attribute adaptive_weight_timeout Nilay Shroff
` (2 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Nilay Shroff @ 2025-11-05 10:33 UTC (permalink / raw)
To: linux-nvme; +Cc: hare, hch, kbusch, sagi, dwagner, axboe, kanie, gjoyce
By default, the EWMA (Exponentially Weighted Moving Average) shift
value, used when smoothing latency samples for the adaptive iopolicy, is set
to 3. The EWMA is calculated using the following formula:
ewma = (old * ((1 << ewma_shift) - 1) + new) >> ewma_shift;
The default value of 3 assigns ~87.5% weight to the existing EWMA value
and ~12.5% weight to the new latency sample. This provides a stable
average that smooths out short-term variations.
However, different workloads may require faster or slower adaptation to
changing conditions. This commit introduces a new debugfs attribute,
adaptive_ewma_shift, allowing users to tune the weighting factor.
For example:
- adaptive_ewma_shift = 2 => 75% old, 25% new
- adaptive_ewma_shift = 1 => 50% old, 50% new
- adaptive_ewma_shift = 0 => 0% old, 100% new
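For reference (not part of the patch), the effect of the shift on a single
EWMA step can be checked with a small userspace calculation; the old value
(800us) and the new sample (400us) below are made up:
  #include <stdio.h>
  #include <stdint.h>

  static uint64_t ewma(uint64_t old, uint64_t new_sample, unsigned int shift)
  {
  	return (old * ((1ULL << shift) - 1) + new_sample) >> shift;
  }

  int main(void)
  {
  	unsigned int shift;

  	/* prints 400000, 600000, 700000 and 750000 ns respectively */
  	for (shift = 0; shift <= 3; shift++)
  		printf("shift %u -> %llu ns\n", shift,
  		       (unsigned long long)ewma(800000, 400000, shift));
  	return 0;
  }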
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/core.c | 3 +++
drivers/nvme/host/debugfs.c | 46 +++++++++++++++++++++++++++++++++++
drivers/nvme/host/multipath.c | 8 +++---
drivers/nvme/host/nvme.h | 1 +
4 files changed, 54 insertions(+), 4 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index c15dfcaf3de2..43b9b0d6cbdf 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3913,6 +3913,9 @@ static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
head->ids = info->ids;
head->shared = info->is_shared;
head->rotational = info->is_rotational;
+#ifdef CONFIG_NVME_MULTIPATH
+ head->adp_ewma_shift = NVME_DEFAULT_ADP_EWMA_SHIFT;
+#endif
ratelimit_state_init(&head->rs_nuse, 5 * HZ, 1);
ratelimit_set_flags(&head->rs_nuse, RATELIMIT_MSG_ON_RELEASE);
kref_init(&head->ref);
diff --git a/drivers/nvme/host/debugfs.c b/drivers/nvme/host/debugfs.c
index 6bb57c4b5c3b..e3c37041e8f2 100644
--- a/drivers/nvme/host/debugfs.c
+++ b/drivers/nvme/host/debugfs.c
@@ -105,8 +105,54 @@ static const struct file_operations nvme_debugfs_fops = {
.release = nvme_debugfs_release,
};
+#ifdef CONFIG_NVME_MULTIPATH
+static int nvme_adp_ewma_shift_show(void *data, struct seq_file *m)
+{
+ struct nvme_ns_head *head = data;
+
+ seq_printf(m, "%u\n", READ_ONCE(head->adp_ewma_shift));
+ return 0;
+}
+
+static ssize_t nvme_adp_ewma_shift_store(void *data, const char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ struct nvme_ns_head *head = data;
+ char kbuf[8];
+ u32 res;
+ int ret;
+ size_t len;
+ char *arg;
+
+ len = min(sizeof(kbuf) - 1, count);
+
+ if (copy_from_user(kbuf, ubuf, len))
+ return -EFAULT;
+
+ kbuf[len] = '\0';
+ arg = strstrip(kbuf);
+
+ ret = kstrtou32(arg, 0, &res);
+ if (ret)
+ return ret;
+
+ /*
+ * Values greater than 8 are nonsensical, as they effectively assign
+ * zero weight to new samples.
+ */
+ if (res > 8)
+ return -EINVAL;
+
+ WRITE_ONCE(head->adp_ewma_shift, res);
+ return count;
+}
+#endif
static const struct nvme_debugfs_attr nvme_mpath_debugfs_attrs[] = {
+#ifdef CONFIG_NVME_MULTIPATH
+ {"adaptive_ewma_shift", 0600, nvme_adp_ewma_shift_show,
+ nvme_adp_ewma_shift_store},
+#endif
{},
};
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 047dd9da9cbf..c7470cc8844e 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -294,10 +294,9 @@ static void nvme_mpath_weight_work(struct work_struct *weight_work)
* For instance, with EWMA_SHIFT = 3, this assigns 7/8 (~87.5 %) weight to
* the existing/old ewma and 1/8 (~12.5%) weight to the new sample.
*/
-static inline u64 ewma_update(u64 old, u64 new)
+static inline u64 ewma_update(u64 old, u64 new, u32 ewma_shift)
{
- return (old * ((1 << NVME_DEFAULT_ADP_EWMA_SHIFT) - 1)
- + new) >> NVME_DEFAULT_ADP_EWMA_SHIFT;
+ return (old * ((1 << ewma_shift) - 1) + new) >> ewma_shift;
}
static void nvme_mpath_add_sample(struct request *rq, struct nvme_ns *ns)
@@ -389,7 +388,8 @@ static void nvme_mpath_add_sample(struct request *rq, struct nvme_ns *ns)
if (unlikely(!stat->slat_ns))
WRITE_ONCE(stat->slat_ns, avg_lat_ns);
else {
- slat_ns = ewma_update(stat->slat_ns, avg_lat_ns);
+ slat_ns = ewma_update(stat->slat_ns, avg_lat_ns,
+ READ_ONCE(head->adp_ewma_shift));
WRITE_ONCE(stat->slat_ns, slat_ns);
}
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 1c1ec2a7f9ad..97de45634f08 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -545,6 +545,7 @@ struct nvme_ns_head {
unsigned int delayed_removal_secs;
struct nvme_ns * __percpu *adp_path;
+ u32 adp_ewma_shift;
#define NVME_NSHEAD_DISK_LIVE 0
#define NVME_NSHEAD_QUEUE_IF_NO_PATH 1
--
2.51.0
* [RFC PATCHv5 5/7] nvme-multipath: add debugfs attribute adaptive_weight_timeout
2025-11-05 10:33 [RFC PATCHv5 0/7] nvme-multipath: introduce adaptive I/O policy Nilay Shroff
` (3 preceding siblings ...)
2025-11-05 10:33 ` [RFC PATCHv5 4/7] nvme-multipath: add debugfs attribute adaptive_ewma_shift Nilay Shroff
@ 2025-11-05 10:33 ` Nilay Shroff
2025-11-05 10:33 ` [RFC PATCHv5 6/7] nvme-multipath: add debugfs attribute adaptive_stat Nilay Shroff
2025-11-05 10:33 ` [RFC PATCHv5 7/7] nvme-multipath: add documentation for adaptive I/O policy Nilay Shroff
6 siblings, 0 replies; 8+ messages in thread
From: Nilay Shroff @ 2025-11-05 10:33 UTC (permalink / raw)
To: linux-nvme; +Cc: hare, hch, kbusch, sagi, dwagner, axboe, kanie, gjoyce
By default, the adaptive I/O policy accumulates latency samples over a
15-second window. When this window expires, the driver computes the
average latency and updates the smoothed (EWMA) latency value. The
path weight is then recalculated based on this data.
A 15-second window provides a good balance for most workloads, as it
helps smooth out transient latency spikes and produces a more stable
path weight profile. However, some workloads may benefit from faster
or slower adaptation to changing latency conditions.
This commit introduces a new debugfs attribute, adaptive_weight_timeout,
which allows users to configure the path weight calculation interval
based on their workload requirements.
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/core.c | 1 +
drivers/nvme/host/debugfs.c | 40 ++++++++++++++++++++++++++++++++++-
drivers/nvme/host/multipath.c | 7 ++++--
drivers/nvme/host/nvme.h | 1 +
4 files changed, 46 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 43b9b0d6cbdf..d3828c4812fc 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3915,6 +3915,7 @@ static struct nvme_ns_head *nvme_alloc_ns_head(struct nvme_ctrl *ctrl,
head->rotational = info->is_rotational;
#ifdef CONFIG_NVME_MULTIPATH
head->adp_ewma_shift = NVME_DEFAULT_ADP_EWMA_SHIFT;
+ head->adp_weight_timeout = NVME_DEFAULT_ADP_WEIGHT_TIMEOUT;
#endif
ratelimit_state_init(&head->rs_nuse, 5 * HZ, 1);
ratelimit_set_flags(&head->rs_nuse, RATELIMIT_MSG_ON_RELEASE);
diff --git a/drivers/nvme/host/debugfs.c b/drivers/nvme/host/debugfs.c
index e3c37041e8f2..e382fa411b13 100644
--- a/drivers/nvme/host/debugfs.c
+++ b/drivers/nvme/host/debugfs.c
@@ -146,12 +146,50 @@ static ssize_t nvme_adp_ewma_shift_store(void *data, const char __user *ubuf,
WRITE_ONCE(head->adp_ewma_shift, res);
return count;
}
+
+static int nvme_adp_weight_timeout_show(void *data, struct seq_file *m)
+{
+ struct nvme_ns_head *head = data;
+
+ seq_printf(m, "%llu\n",
+ div_u64(READ_ONCE(head->adp_weight_timeout), NSEC_PER_SEC));
+ return 0;
+}
+
+static ssize_t nvme_adp_weight_timeout_store(void *data,
+ const char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ struct nvme_ns_head *head = data;
+ char kbuf[8];
+ u32 res;
+ int ret;
+ size_t len;
+ char *arg;
+
+ len = min(sizeof(kbuf) - 1, count);
+
+ if (copy_from_user(kbuf, ubuf, len))
+ return -EFAULT;
+
+ kbuf[len] = '\0';
+ arg = strstrip(kbuf);
+
+ ret = kstrtou32(arg, 0, &res);
+ if (ret)
+ return ret;
+
+ WRITE_ONCE(head->adp_weight_timeout, res * NSEC_PER_SEC);
+ return count;
+}
#endif
static const struct nvme_debugfs_attr nvme_mpath_debugfs_attrs[] = {
#ifdef CONFIG_NVME_MULTIPATH
- {"adaptive_ewma_shift", 0600, nvme_adp_ewma_shift_show,
+ {"adaptive_ewma_shift", 0600, nvme_adp_ewma_shift_show,
nvme_adp_ewma_shift_store},
+ {"adaptive_weight_timeout", 0600, nvme_adp_weight_timeout_show,
+ nvme_adp_weight_timeout_store},
#endif
{},
};
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index c7470cc8844e..e70a7d5cf036 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -362,8 +362,11 @@ static void nvme_mpath_add_sample(struct request *rq, struct nvme_ns *ns)
stat->batch_count++;
stat->nr_samples++;
- if (now > stat->last_weight_ts &&
- (now - stat->last_weight_ts) >= NVME_DEFAULT_ADP_WEIGHT_TIMEOUT) {
+ if (now > stat->last_weight_ts) {
+ u64 timeout = READ_ONCE(head->adp_weight_timeout);
+
+ if ((now - stat->last_weight_ts) < timeout)
+ return;
stat->last_weight_ts = now;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 97de45634f08..53d868cccbeb 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -546,6 +546,7 @@ struct nvme_ns_head {
struct nvme_ns * __percpu *adp_path;
u32 adp_ewma_shift;
+ u64 adp_weight_timeout;
#define NVME_NSHEAD_DISK_LIVE 0
#define NVME_NSHEAD_QUEUE_IF_NO_PATH 1
--
2.51.0
* [RFC PATCHv5 6/7] nvme-multipath: add debugfs attribute adaptive_stat
2025-11-05 10:33 [RFC PATCHv5 0/7] nvme-multipath: introduce adaptive I/O policy Nilay Shroff
` (4 preceding siblings ...)
2025-11-05 10:33 ` [RFC PATCHv5 5/7] nvme-multipath: add debugfs attribute adaptive_weight_timeout Nilay Shroff
@ 2025-11-05 10:33 ` Nilay Shroff
2025-11-05 10:33 ` [RFC PATCHv5 7/7] nvme-multipath: add documentation for adaptive I/O policy Nilay Shroff
6 siblings, 0 replies; 8+ messages in thread
From: Nilay Shroff @ 2025-11-05 10:33 UTC (permalink / raw)
To: linux-nvme; +Cc: hare, hch, kbusch, sagi, dwagner, axboe, kanie, gjoyce
This commit introduces a new debugfs attribute, "adaptive_stat", under
both per-path and head debugfs directories (defined under /sys/kernel/
debug/block/). This attribute provides visibility into the internal
state of the adaptive I/O policy to aid in debugging and performance
analysis.
For per-path entries, "adaptive_stat" reports the corresponding path
statistics such as I/O weight, selection count, processed samples, and
ignored samples.
For head entries, it reports per-CPU statistics for each reachable path,
including I/O weight, path score, smoothed (EWMA) latency, selection
count, processed samples, and ignored samples.
These additions enhance observability of the adaptive I/O path selection
behavior and help diagnose imbalance or instability in multipath
performance.
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/debugfs.c | 113 ++++++++++++++++++++++++++++++++++++
1 file changed, 113 insertions(+)
diff --git a/drivers/nvme/host/debugfs.c b/drivers/nvme/host/debugfs.c
index e382fa411b13..28de4a8e2333 100644
--- a/drivers/nvme/host/debugfs.c
+++ b/drivers/nvme/host/debugfs.c
@@ -182,6 +182,115 @@ static ssize_t nvme_adp_weight_timeout_store(void *data,
WRITE_ONCE(head->adp_weight_timeout, res * NSEC_PER_SEC);
return count;
}
+
+static void *nvme_mpath_adp_stat_start(struct seq_file *m, loff_t *pos)
+{
+ struct nvme_ns *ns;
+ struct nvme_debugfs_ctx *ctx = m->private;
+ struct nvme_ns_head *head = ctx->data;
+
+ /* Remember srcu index, so we can unlock later. */
+ ctx->srcu_idx = srcu_read_lock(&head->srcu);
+ ns = list_first_or_null_rcu(&head->list, struct nvme_ns, siblings);
+
+ while (*pos && ns) {
+ ns = list_next_or_null_rcu(&head->list, &ns->siblings,
+ struct nvme_ns, siblings);
+ (*pos)--;
+ }
+
+ return ns;
+}
+
+static void *nvme_mpath_adp_stat_next(struct seq_file *m, void *v, loff_t *pos)
+{
+ struct nvme_ns *ns = v;
+ struct nvme_debugfs_ctx *ctx = m->private;
+ struct nvme_ns_head *head = ctx->data;
+
+ (*pos)++;
+
+ return list_next_or_null_rcu(&head->list, &ns->siblings,
+ struct nvme_ns, siblings);
+}
+
+static void nvme_mpath_adp_stat_stop(struct seq_file *m, void *v)
+{
+ struct nvme_debugfs_ctx *ctx = m->private;
+ struct nvme_ns_head *head = ctx->data;
+ int srcu_idx = ctx->srcu_idx;
+
+ srcu_read_unlock(&head->srcu, srcu_idx);
+}
+
+static int nvme_mpath_adp_stat_show(struct seq_file *m, void *v)
+{
+ int i, cpu;
+ struct nvme_path_stat *stat;
+ struct nvme_ns *ns = v;
+
+ seq_printf(m, "%s:\n", ns->disk->disk_name);
+ for_each_online_cpu(cpu) {
+ seq_printf(m, "cpu %d : ", cpu);
+ for (i = 0; i < NVME_NUM_STAT_GROUPS; i++) {
+ stat = &per_cpu_ptr(ns->info, cpu)[i].stat;
+ seq_printf(m, "%u %u %llu %llu %llu %llu %llu ",
+ stat->weight, stat->credit, stat->score,
+ stat->slat_ns, stat->sel,
+ stat->nr_samples, stat->nr_ignored);
+ }
+ seq_putc(m, '\n');
+ }
+ return 0;
+}
+
+static const struct seq_operations nvme_mpath_adp_stat_seq_ops = {
+ .start = nvme_mpath_adp_stat_start,
+ .next = nvme_mpath_adp_stat_next,
+ .stop = nvme_mpath_adp_stat_stop,
+ .show = nvme_mpath_adp_stat_show
+};
+
+static void adp_stat_read_all(struct nvme_ns *ns, struct nvme_path_stat *batch)
+{
+ int i, cpu;
+ u32 ncpu[NVME_NUM_STAT_GROUPS] = {0};
+ struct nvme_path_stat *stat;
+
+ for_each_online_cpu(cpu) {
+ for (i = 0; i < NVME_NUM_STAT_GROUPS; i++) {
+ stat = &per_cpu_ptr(ns->info, cpu)[i].stat;
+ batch[i].sel += stat->sel;
+ batch[i].nr_samples += stat->nr_samples;
+ batch[i].nr_ignored += stat->nr_ignored;
+ batch[i].weight += stat->weight;
+ if (stat->weight)
+ ncpu[i]++;
+ }
+ }
+
+ for (i = 0; i < NVME_NUM_STAT_GROUPS; i++) {
+ if (!ncpu[i])
+ continue;
+ batch[i].weight = DIV_U64_ROUND_CLOSEST(batch[i].weight,
+ ncpu[i]);
+ }
+}
+
+static int nvme_ns_adp_stat_show(void *data, struct seq_file *m)
+{
+ int i;
+ struct nvme_path_stat stat[NVME_NUM_STAT_GROUPS] = {0};
+ struct nvme_ns *ns = (struct nvme_ns *)data;
+
+ adp_stat_read_all(ns, stat);
+ for (i = 0; i < NVME_NUM_STAT_GROUPS; i++) {
+ seq_printf(m, "%u %llu %llu %llu ",
+ stat[i].weight, stat[i].sel,
+ stat[i].nr_samples, stat[i].nr_ignored);
+ }
+ return 0;
+}
#endif
static const struct nvme_debugfs_attr nvme_mpath_debugfs_attrs[] = {
@@ -190,11 +299,15 @@ static const struct nvme_debugfs_attr nvme_mpath_debugfs_attrs[] = {
nvme_adp_ewma_shift_store},
{"adaptive_weight_timeout", 0600, nvme_adp_weight_timeout_show,
nvme_adp_weight_timeout_store},
+ {"adaptive_stat", 0400, .seq_ops = &nvme_mpath_adp_stat_seq_ops},
#endif
{},
};
static const struct nvme_debugfs_attr nvme_ns_debugfs_attrs[] = {
+#ifdef CONFIG_NVME_MULTIPATH
+ {"adaptive_stat", 0400, nvme_ns_adp_stat_show},
+#endif
{},
};
--
2.51.0
* [RFC PATCHv5 7/7] nvme-multipath: add documentation for adaptive I/O policy
2025-11-05 10:33 [RFC PATCHv5 0/7] nvme-multipath: introduce adaptive I/O policy Nilay Shroff
` (5 preceding siblings ...)
2025-11-05 10:33 ` [RFC PATCHv5 6/7] nvme-multipath: add debugfs attribute adaptive_stat Nilay Shroff
@ 2025-11-05 10:33 ` Nilay Shroff
6 siblings, 0 replies; 8+ messages in thread
From: Nilay Shroff @ 2025-11-05 10:33 UTC (permalink / raw)
To: linux-nvme; +Cc: hare, hch, kbusch, sagi, dwagner, axboe, kanie, gjoyce
Update the nvme-multipath documentation to describe the adaptive I/O
policy, its behavior, and when it is suitable for use.
Suggested-by: Guixin Liu <kanie@linux.alibaba.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
Documentation/admin-guide/nvme-multipath.rst | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/Documentation/admin-guide/nvme-multipath.rst b/Documentation/admin-guide/nvme-multipath.rst
index 97ca1ccef459..7befaab01cf5 100644
--- a/Documentation/admin-guide/nvme-multipath.rst
+++ b/Documentation/admin-guide/nvme-multipath.rst
@@ -70,3 +70,22 @@ When to use the queue-depth policy:
1. High load with small I/Os: Effectively balances load across paths when
the load is high, and I/O operations consist of small, relatively
fixed-sized requests.
+
+Adaptive
+--------
+
+The adaptive policy manages I/O requests based on path latency. It periodically
+calculates a weight for each path and distributes I/O accordingly. Paths with
+higher latency receive lower weights, resulting in fewer I/O requests being sent
+to them, while paths with lower latency handle a proportionally larger share of
+the I/O load.
+
+When to use the adaptive policy
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+1. Homogeneous Path Performance: Utilizes all available paths efficiently when
+ their performance characteristics (e.g., latency, bandwidth) are similar.
+
+2. Heterogeneous Path Performance: Dynamically distributes I/O based on per-path
+ performance characteristics. Paths with lower latency receive a higher share
+ of I/O compared to those with higher latency.
--
2.51.0