* [PATCH 0/3] A few cleanup patches for blk-iolatency.c
@ 2022-09-29 7:40 Kemeng Shi
2022-09-29 7:40 ` [PATCH 1/3] block: Remove redundant parent blkcg_gp check in check_scale_change Kemeng Shi
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: Kemeng Shi @ 2022-09-29 7:40 UTC (permalink / raw)
To: tj, axboe; +Cc: cgroups, linux-block, linux-kernel, shikemeng
This series contains three cleanup patches to remove a redundant check,
correct a comment, and simplify struct iolatency_grp in blk-iolatency.c.
Kemeng Shi (3):
block: Remove redundant parent blkcg_gp check in check_scale_change
block: Correct comment for scale_cookie_change
block: Replace struct rq_depth with unsigned int in struct
iolatency_grp
block/blk-iolatency.c | 33 ++++++++++++++-------------------
1 file changed, 14 insertions(+), 19 deletions(-)
--
2.30.0
* [PATCH 1/3] block: Remove redundant parent blkcg_gp check in check_scale_change
2022-09-29 7:40 [PATCH 0/3] A few cleanup patches for blk-iolatency.c Kemeng Shi
@ 2022-09-29 7:40 ` Kemeng Shi
2022-10-17 18:23 ` Josef Bacik
2022-09-29 7:40 ` [PATCH 2/3] block: Correct comment for scale_cookie_change Kemeng Shi
` (2 subsequent siblings)
3 siblings, 1 reply; 8+ messages in thread
From: Kemeng Shi @ 2022-09-29 7:40 UTC (permalink / raw)
To: tj, axboe; +Cc: cgroups, linux-block, linux-kernel, shikemeng
Function blkcg_iolatency_throttle ensures that blkg->parent is not NULL
before calling check_scale_change, and check_scale_change is only called
from blkcg_iolatency_throttle, so the parent NULL check inside
check_scale_change is redundant.
Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
block/blk-iolatency.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index e285152345a2..a8cc5abe91e5 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -403,9 +403,6 @@ static void check_scale_change(struct iolatency_grp *iolat)
u64 scale_lat;
int direction = 0;
- if (lat_to_blkg(iolat)->parent == NULL)
- return;
-
parent = blkg_to_lat(lat_to_blkg(iolat)->parent);
if (!parent)
return;
--
2.30.0
* [PATCH 2/3] block: Correct comment for scale_cookie_change
2022-09-29 7:40 [PATCH 0/3] A few cleanup patches for blk-iolatency.c Kemeng Shi
2022-09-29 7:40 ` [PATCH 1/3] block: Remove redundant parent blkcg_gp check in check_scale_change Kemeng Shi
@ 2022-09-29 7:40 ` Kemeng Shi
2022-10-17 18:20 ` Josef Bacik
2022-09-29 7:40 ` [PATCH 3/3] block: Replace struct rq_depth with unsigned int in struct iolatency_grp Kemeng Shi
2022-10-17 2:06 ` [PATCH 0/3] A few cleanup patches for blk-iolatency.c Kemeng Shi
3 siblings, 1 reply; 8+ messages in thread
From: Kemeng Shi @ 2022-09-29 7:40 UTC (permalink / raw)
To: tj, axboe; +Cc: cgroups, linux-block, linux-kernel, shikemeng
The queue depth of an iolatency_grp is unlimited by default, so we scale
down quickly (by half each time) in scale_cookie_change. Remove the
"subtract 1/16th" part of the comment, which does not match the code.
Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
block/blk-iolatency.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index a8cc5abe91e5..2666afd7abdb 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -364,7 +364,7 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
}
/*
- * Change the queue depth of the iolatency_grp. We add/subtract 1/16th of the
+ * Change the queue depth of the iolatency_grp. We add 1/16th of the
* queue depth at a time so we don't get wild swings and hopefully dial in to
* fairer distribution of the overall queue depth.
*/
--
2.30.0
* [PATCH 3/3] block: Replace struct rq_depth with unsigned int in struct iolatency_grp
2022-09-29 7:40 [PATCH 0/3] A few cleanup patches for blk-iolatency.c Kemeng Shi
2022-09-29 7:40 ` [PATCH 1/3] block: Remove redundant parent blkcg_gp check in check_scale_change Kemeng Shi
2022-09-29 7:40 ` [PATCH 2/3] block: Correct comment for scale_cookie_change Kemeng Shi
@ 2022-09-29 7:40 ` Kemeng Shi
2022-10-17 18:27 ` Josef Bacik
2022-10-17 2:06 ` [PATCH 0/3] A few cleanup patches for blk-iolatency.c Kemeng Shi
3 siblings, 1 reply; 8+ messages in thread
From: Kemeng Shi @ 2022-09-29 7:40 UTC (permalink / raw)
To: tj, axboe; +Cc: cgroups, linux-block, linux-kernel, shikemeng
We only need a maximum queue depth for each iolatency_grp to limit the
number of in-flight IOs. Replace struct rq_depth with a plain unsigned
int to simplify struct iolatency_grp and save memory.
Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
block/blk-iolatency.c | 28 +++++++++++++---------------
1 file changed, 13 insertions(+), 15 deletions(-)
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 2666afd7abdb..55bc742d3b66 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -141,7 +141,7 @@ struct iolatency_grp {
struct latency_stat __percpu *stats;
struct latency_stat cur_stat;
struct blk_iolatency *blkiolat;
- struct rq_depth rq_depth;
+ unsigned int max_depth;
struct rq_wait rq_wait;
atomic64_t window_start;
atomic_t scale_cookie;
@@ -280,7 +280,7 @@ static void iolat_cleanup_cb(struct rq_wait *rqw, void *private_data)
static bool iolat_acquire_inflight(struct rq_wait *rqw, void *private_data)
{
struct iolatency_grp *iolat = private_data;
- return rq_wait_inc_below(rqw, iolat->rq_depth.max_depth);
+ return rq_wait_inc_below(rqw, iolat->max_depth);
}
static void __blkcg_iolatency_throttle(struct rq_qos *rqos,
@@ -372,7 +372,7 @@ static void scale_change(struct iolatency_grp *iolat, bool up)
{
unsigned long qd = iolat->blkiolat->rqos.q->nr_requests;
unsigned long scale = scale_amount(qd, up);
- unsigned long old = iolat->rq_depth.max_depth;
+ unsigned long old = iolat->max_depth;
if (old > qd)
old = qd;
@@ -384,12 +384,12 @@ static void scale_change(struct iolatency_grp *iolat, bool up)
if (old < qd) {
old += scale;
old = min(old, qd);
- iolat->rq_depth.max_depth = old;
+ iolat->max_depth = old;
wake_up_all(&iolat->rq_wait.wait);
}
} else {
old >>= 1;
- iolat->rq_depth.max_depth = max(old, 1UL);
+ iolat->max_depth = max(old, 1UL);
}
}
@@ -442,7 +442,7 @@ static void check_scale_change(struct iolatency_grp *iolat)
}
/* We're as low as we can go. */
- if (iolat->rq_depth.max_depth == 1 && direction < 0) {
+ if (iolat->max_depth == 1 && direction < 0) {
blkcg_use_delay(lat_to_blkg(iolat));
return;
}
@@ -450,7 +450,7 @@ static void check_scale_change(struct iolatency_grp *iolat)
/* We're back to the default cookie, unthrottle all the things. */
if (cur_cookie == DEFAULT_SCALE_COOKIE) {
blkcg_clear_delay(lat_to_blkg(iolat));
- iolat->rq_depth.max_depth = UINT_MAX;
+ iolat->max_depth = UINT_MAX;
wake_up_all(&iolat->rq_wait.wait);
return;
}
@@ -505,7 +505,7 @@ static void iolatency_record_time(struct iolatency_grp *iolat,
* We don't want to count issue_as_root bio's in the cgroups latency
* statistics as it could skew the numbers downwards.
*/
- if (unlikely(issue_as_root && iolat->rq_depth.max_depth != UINT_MAX)) {
+ if (unlikely(issue_as_root && iolat->max_depth != UINT_MAX)) {
u64 sub = iolat->min_lat_nsec;
if (req_time < sub)
blkcg_add_delay(lat_to_blkg(iolat), now, sub - req_time);
@@ -916,7 +916,7 @@ static void iolatency_ssd_stat(struct iolatency_grp *iolat, struct seq_file *s)
}
preempt_enable();
- if (iolat->rq_depth.max_depth == UINT_MAX)
+ if (iolat->max_depth == UINT_MAX)
seq_printf(s, " missed=%llu total=%llu depth=max",
(unsigned long long)stat.ps.missed,
(unsigned long long)stat.ps.total);
@@ -924,7 +924,7 @@ static void iolatency_ssd_stat(struct iolatency_grp *iolat, struct seq_file *s)
seq_printf(s, " missed=%llu total=%llu depth=%u",
(unsigned long long)stat.ps.missed,
(unsigned long long)stat.ps.total,
- iolat->rq_depth.max_depth);
+ iolat->max_depth);
}
static void iolatency_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
@@ -941,12 +941,12 @@ static void iolatency_pd_stat(struct blkg_policy_data *pd, struct seq_file *s)
avg_lat = div64_u64(iolat->lat_avg, NSEC_PER_USEC);
cur_win = div64_u64(iolat->cur_win_nsec, NSEC_PER_MSEC);
- if (iolat->rq_depth.max_depth == UINT_MAX)
+ if (iolat->max_depth == UINT_MAX)
seq_printf(s, " depth=max avg_lat=%llu win=%llu",
avg_lat, cur_win);
else
seq_printf(s, " depth=%u avg_lat=%llu win=%llu",
- iolat->rq_depth.max_depth, avg_lat, cur_win);
+ iolat->max_depth, avg_lat, cur_win);
}
static struct blkg_policy_data *iolatency_pd_alloc(gfp_t gfp,
@@ -990,9 +990,7 @@ static void iolatency_pd_init(struct blkg_policy_data *pd)
latency_stat_init(iolat, &iolat->cur_stat);
rq_wait_init(&iolat->rq_wait);
spin_lock_init(&iolat->child_lat.lock);
- iolat->rq_depth.queue_depth = blkg->q->nr_requests;
- iolat->rq_depth.max_depth = UINT_MAX;
- iolat->rq_depth.default_depth = iolat->rq_depth.queue_depth;
+ iolat->max_depth = UINT_MAX;
iolat->blkiolat = blkiolat;
iolat->cur_win_nsec = 100 * NSEC_PER_MSEC;
atomic64_set(&iolat->window_start, now);
--
2.30.0
* Re: [PATCH 0/3] A few cleanup patches for blk-iolatency.c
2022-09-29 7:40 [PATCH 0/3] A few cleanup patches for blk-iolatency.c Kemeng Shi
` (2 preceding siblings ...)
2022-09-29 7:40 ` [PATCH 3/3] block: Replace struct rq_depth with unsigned int in struct iolatency_grp Kemeng Shi
@ 2022-10-17 2:06 ` Kemeng Shi
3 siblings, 0 replies; 8+ messages in thread
From: Kemeng Shi @ 2022-10-17 2:06 UTC (permalink / raw)
To: tj, axboe; +Cc: cgroups, linux-block, linux-kernel
Friendly ping ...
on 9/29/2022 3:40 PM, Kemeng Shi wrote:
> This series contains three cleanup patches to remove a redundant check,
> correct a comment, and simplify struct iolatency_grp in blk-iolatency.c.
>
> Kemeng Shi (3):
> block: Remove redundant parent blkcg_gp check in check_scale_change
> block: Correct comment for scale_cookie_change
> block: Replace struct rq_depth with unsigned int in struct
> iolatency_grp
>
> block/blk-iolatency.c | 33 ++++++++++++++-------------------
> 1 file changed, 14 insertions(+), 19 deletions(-)
>
--
Best wishes
Kemeng Shi
* Re: [PATCH 2/3] block: Correct comment for scale_cookie_change
2022-09-29 7:40 ` [PATCH 2/3] block: Correct comment for scale_cookie_change Kemeng Shi
@ 2022-10-17 18:20 ` Josef Bacik
0 siblings, 0 replies; 8+ messages in thread
From: Josef Bacik @ 2022-10-17 18:20 UTC (permalink / raw)
To: Kemeng Shi; +Cc: tj, axboe, cgroups, linux-block, linux-kernel
On Thu, Sep 29, 2022 at 03:40:54PM +0800, Kemeng Shi wrote:
> The queue depth of an iolatency_grp is unlimited by default, so we scale
> down quickly (by half each time) in scale_cookie_change. Remove the
> "subtract 1/16th" part of the comment, which does not match the code.
>
Ok sure, but at least update the comment to indicate what we actually do when
scaling down. Thanks,
Josef
* Re: [PATCH 1/3] block: Remove redundant parent blkcg_gp check in check_scale_change
2022-09-29 7:40 ` [PATCH 1/3] block: Remove redundant parent blkcg_gp check in check_scale_change Kemeng Shi
@ 2022-10-17 18:23 ` Josef Bacik
0 siblings, 0 replies; 8+ messages in thread
From: Josef Bacik @ 2022-10-17 18:23 UTC (permalink / raw)
To: Kemeng Shi; +Cc: tj, axboe, cgroups, linux-block, linux-kernel
On Thu, Sep 29, 2022 at 03:40:53PM +0800, Kemeng Shi wrote:
> Function blkcg_iolatency_throttle ensures that blkg->parent is not NULL
> before calling check_scale_change, and check_scale_change is only called
> from blkcg_iolatency_throttle, so the parent NULL check inside
> check_scale_change is redundant.
>
> Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Thanks,
Josef
* Re: [PATCH 3/3] block: Replace struct rq_depth with unsigned int in struct iolatency_grp
2022-09-29 7:40 ` [PATCH 3/3] block: Replace struct rq_depth with unsigned int in struct iolatency_grp Kemeng Shi
@ 2022-10-17 18:27 ` Josef Bacik
0 siblings, 0 replies; 8+ messages in thread
From: Josef Bacik @ 2022-10-17 18:27 UTC (permalink / raw)
To: Kemeng Shi; +Cc: tj, axboe, cgroups, linux-block, linux-kernel
On Thu, Sep 29, 2022 at 03:40:55PM +0800, Kemeng Shi wrote:
> We only need a maximum queue depth for each iolatency_grp to limit the
> number of in-flight IOs. Replace struct rq_depth with a plain unsigned
> int to simplify struct iolatency_grp and save memory.
>
> Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Thanks,
Josef