public inbox for linux-block@vger.kernel.org
* [PATCH v2] blk-iocost: fix busy_level reset when no IOs complete
@ 2026-03-29 15:41 Jialin Wang
From: Jialin Wang @ 2026-03-29 15:41 UTC (permalink / raw)
  To: tj, josef, axboe; +Cc: cgroups, linux-block, linux-kernel, wjl.linux

When a disk is saturated, it is common for no IOs to complete within a
timer period. In this case, rq_wait_pct and missed_ppm are both
calculated as 0, so iocost incorrectly interprets the period as meeting
the QoS targets and resets busy_level to 0.

This reset prevents busy_level from reaching the threshold (4) needed
to reduce vrate. On certain cloud storage, such as Azure Premium SSD,
we observed that iocost could fail to reduce vrate for tens of seconds
during saturation and thus fail to mitigate noisy-neighbor issues.

Fix this by tracking the number of IO completions (nr_done) in each
period. When nr_done is 0, adjust the busy_level logic as follows:

* If there are lagging IOs, the saturation status is unknown, so we try
  to keep busy_level unchanged. To avoid drastic vrate oscillations, we
  clamp it between -4 and 4.
* If there are shortages but no lagging IOs, the vrate might be too low
  to issue any IOs. We should allow vrate to increase but not decrease.
* Otherwise, reset busy_level to 0.

Note that when nr_done is 0 and nr_lagging is 0, the adjustment logic
is nearly identical to the "QoS targets are being met with >25% margin"
state, which minimizes the risk of regressions.

The issue is consistently reproducible on Azure Standard_D8as_v5 (Dasv5)
VMs with a 512GB Premium SSD (P20) using the script below. It was not
observed on GCP n2d VMs (100G pd-ssd and 1.5T local-ssd), and no
regressions were found with this patch. In the script, cgA saturates
the device; iocost is expected to throttle it so that cgB's completion
latency remains low.

  BLK_DEVID="8:0"
  MODEL="rbps=173471131 rseqiops=3566 rrandiops=3566 wbps=173333269 wseqiops=3566 wrandiops=3566"
  QOS="rpct=90.00 rlat=3500 wpct=90 wlat=3500 min=80 max=10000"

  echo "$BLK_DEVID ctrl=user model=linear $MODEL" > /sys/fs/cgroup/io.cost.model
  echo "$BLK_DEVID enable=1 ctrl=user $QOS" > /sys/fs/cgroup/io.cost.qos

  CG_A="/sys/fs/cgroup/cgA"
  CG_B="/sys/fs/cgroup/cgB"

  FILE_A="/data0/A.fio.testfile"
  FILE_B="/data0/B.fio.testfile"
  RESULT_DIR="./iocost_results_$(date +%Y%m%d_%H%M%S)"

  mkdir -p "$CG_A" "$CG_B" "$RESULT_DIR"

  get_result() {
    local file=$1
    local label=$2

    local results=$(jq -r '
    .jobs[0].mixed |
    ( .iops | tonumber | round ) as $iops |
    ( .bw_bytes / 1024 / 1024 ) as $bps |
    ( .clat_ns.mean / 1000000 ) as $avg |
    ( .clat_ns.max / 1000000 ) as $max |
    ( .clat_ns.percentile["90.000000"] / 1000000 ) as $p90 |
    ( .clat_ns.percentile["99.000000"] / 1000000 ) as $p99 |
    ( .clat_ns.percentile["99.900000"] / 1000000 ) as $p999 |
    ( .clat_ns.percentile["99.990000"] / 1000000 ) as $p9999 |
    "\($iops)|\($bps)|\($avg)|\($max)|\($p90)|\($p99)|\($p999)|\($p9999)"
    ' "$file")

    IFS='|' read -r iops bps avg max p90 p99 p999 p9999 <<<"$results"
    printf "%-8s %-6s %-7.2f %-8.2f %-8.2f %-8.2f %-8.2f %-8.2f %-8.2f\n" \
           "$label" "$iops" "$bps" "$avg" "$max" "$p90" "$p99" "$p999" "$p9999"
  }

  run_fio() {
    local cg_path=$1
    local filename=$2
    local name=$3
    local bs=$4
    local qd=$5
    local out=$6
    shift 6
    local extra="$*"

    (
      # move this subshell (and thus the fio child) into the target cgroup
      pid=$(sh -c 'echo $PPID')
      echo $pid >"${cg_path}/cgroup.procs"
      fio --name="$name" --filename="$filename" --direct=1 --rw=randrw --rwmixread=50 \
          --ioengine=libaio --bs="$bs" --iodepth="$qd" --size=4G --runtime=10 \
          --time_based --group_reporting --unified_rw_reporting=mixed \
          --output-format=json --output="$out" $extra >/dev/null 2>&1
    ) &
  }

  SUMMARY_DATA=""
  echo "Starting Test ..."

  for bs_b in "4k" "32k" "256k"; do
    echo "Running iteration: BS=$bs_b"
    out_a="${RESULT_DIR}/cgA_1m.json"
    out_b="${RESULT_DIR}/cgB_${bs_b}.json"

    # cgA: Heavy background (BS 1MB, QD 128)
    run_fio "$CG_A" "$FILE_A" "cgA" "1m" 128 "$out_a"
    # cgB: Latency sensitive (Variable BS, QD 1, Read/Write IOPS limit 100)
    run_fio "$CG_B" "$FILE_B" "cgB" "$bs_b" 1 "$out_b" "--rate_iops=100"

    wait
    SUMMARY_DATA+="$(get_result "$out_a" "cgA-1m")"$'\n'
    SUMMARY_DATA+="$(get_result "$out_b" "cgB-$bs_b")"$'\n\n'
  done

  # Final Output
  echo -e "\nFinal Results Summary:\n"

  printf "%-8s %-6s %-7s %-8s %-8s %-8s %-8s %-8s %-8s\n\n" \
         "CGROUP" "IOPS" "MB/s" "Avg(ms)" "Max(ms)" "P90(ms)" "P99" "P99.9" "P99.99"
  echo "$SUMMARY_DATA"

  echo "Results saved in $RESULT_DIR"

Before:
  CGROUP   IOPS   MB/s    Avg(ms)  Max(ms)  P90(ms)  P99      P99.9    P99.99

  cgA-1m   167    167.02  748.65   1641.43  960.50   1551.89  1635.78  1635.78
  cgB-4k   5      0.02    190.57   806.84   742.39   809.50   809.50   809.50

  cgA-1m   166    166.36  751.38   1744.31  994.05   1451.23  1736.44  1736.44
  cgB-32k  4      0.14    225.71   1057.25  759.17   1061.16  1061.16  1061.16

  cgA-1m   166    165.91  751.48   1610.94  1010.83  1417.67  1602.22  1619.00
  cgB-256k 5      1.26    198.50   1046.30  742.39   1044.38  1044.38  1044.38

After:
  CGROUP   IOPS   MB/s    Avg(ms)  Max(ms)  P90(ms)  P99      P99.9    P99.99

  cgA-1m   159    158.59  769.06   828.52   809.50   817.89   826.28   826.28
  cgB-4k   200    0.78    2.01     26.11    2.87     6.26     12.39    26.08

  cgA-1m   147    146.84  832.05   985.80   943.72   960.50   985.66   985.66
  cgB-32k  200    6.25    2.82     71.05    3.42     15.40    50.07    70.78

  cgA-1m   114    114.47  1044.98  1294.48  1199.57  1283.46  1300.23  1300.23
  cgB-256k 200    50.00   4.01     34.49    5.08     15.66    30.54    34.34

Signed-off-by: Jialin Wang <wjl.linux@gmail.com>
---
v2:
- Handle more edge cases to prevent potential regressions.

v1: https://lore.kernel.org/all/20260318163351.394528-1-wjl.linux@gmail.com/

 block/blk-iocost.c | 35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index d145db61e5c3..5184c6e25a0c 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -1596,7 +1596,8 @@ static enum hrtimer_restart iocg_waitq_timer_fn(struct hrtimer *timer)
 	return HRTIMER_NORESTART;
 }
 
-static void ioc_lat_stat(struct ioc *ioc, u32 *missed_ppm_ar, u32 *rq_wait_pct_p)
+static void ioc_lat_stat(struct ioc *ioc, u32 *missed_ppm_ar, u32 *rq_wait_pct_p,
+			 u32 *nr_done)
 {
 	u32 nr_met[2] = { };
 	u32 nr_missed[2] = { };
@@ -1633,6 +1634,8 @@ static void ioc_lat_stat(struct ioc *ioc, u32 *missed_ppm_ar, u32 *rq_wait_pct_p
 
 	*rq_wait_pct_p = div64_u64(rq_wait_ns * 100,
 				   ioc->period_us * NSEC_PER_USEC);
+
+	*nr_done = nr_met[READ] + nr_met[WRITE] + nr_missed[READ] + nr_missed[WRITE];
 }
 
 /* was iocg idle this period? */
@@ -2250,12 +2253,12 @@ static void ioc_timer_fn(struct timer_list *timer)
 	u64 usage_us_sum = 0;
 	u32 ppm_rthr;
 	u32 ppm_wthr;
-	u32 missed_ppm[2], rq_wait_pct;
+	u32 missed_ppm[2], rq_wait_pct, nr_done;
 	u64 period_vtime;
 	int prev_busy_level;
 
 	/* how were the latencies during the period? */
-	ioc_lat_stat(ioc, missed_ppm, &rq_wait_pct);
+	ioc_lat_stat(ioc, missed_ppm, &rq_wait_pct, &nr_done);
 
 	/* take care of active iocgs */
 	spin_lock_irq(&ioc->lock);
@@ -2397,9 +2400,29 @@ static void ioc_timer_fn(struct timer_list *timer)
 	 * and should increase vtime rate.
 	 */
 	prev_busy_level = ioc->busy_level;
-	if (rq_wait_pct > RQ_WAIT_BUSY_PCT ||
-	    missed_ppm[READ] > ppm_rthr ||
-	    missed_ppm[WRITE] > ppm_wthr) {
+	if (!nr_done) {
+		if (nr_lagging)
+			/*
+			 * When there are lagging IOs but no completions, we
+			 * don't know if the IO latency will meet the QoS
+			 * targets. The disk might be saturated or not. We
+			 * should not reset busy_level to 0 (which would
+			 * prevent vrate from scaling up or down), but rather
+			 * try to keep it unchanged. To avoid drastic vrate
+			 * oscillations, we clamp it between -4 and 4.
+			 */
+			ioc->busy_level = clamp(ioc->busy_level, -4, 4);
+		else if (nr_shortages)
+			/*
+			 * The vrate might be too low to issue any IOs. We
+			 * should allow vrate to increase but not decrease.
+			 */
+			ioc->busy_level = min(ioc->busy_level, 0);
+		else
+			ioc->busy_level = 0;
+	} else if (rq_wait_pct > RQ_WAIT_BUSY_PCT ||
+		   missed_ppm[READ] > ppm_rthr ||
+		   missed_ppm[WRITE] > ppm_wthr) {
 		/* clearly missing QoS targets, slow down vrate */
 		ioc->busy_level = max(ioc->busy_level, 0);
 		ioc->busy_level++;
-- 
2.53.0

