public inbox for linux-block@vger.kernel.org
From: Tejun Heo <tj@kernel.org>
To: Yu Kuai <yukuai1@huaweicloud.com>
Cc: axboe@kernel.dk, ming.lei@redhat.com, mkoutny@suse.com,
	linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, yukuai3@huawei.com, yi.zhang@huawei.com
Subject: Re: [PATCH v8 4/4] blk-throttle: fix io hung due to configuration updates
Date: Tue, 23 Aug 2022 08:08:43 -1000	[thread overview]
Message-ID: <YwUXq9XO4TstKJ66@slm.duckdns.org> (raw)
In-Reply-To: <20220823033130.874230-5-yukuai1@huaweicloud.com>

On Tue, Aug 23, 2022 at 11:31:30AM +0800, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
> 
> If a new configuration is submitted while a bio is throttled, the
> waiting time is recalculated from scratch, ignoring the time the bio
> has already waited:
> 
> tg_conf_updated
>  throtl_start_new_slice
>   tg_update_disptime
>   throtl_schedule_next_dispatch
> 
> An io hang can then be triggered by repeatedly submitting a new
> configuration before the throttled bio is dispatched.
> 
> Fix the problem by accounting for the time the throttled bio has
> already waited. To do that, add new fields recording how many
> bytes/ios have already been waited for, and use them to calculate the
> wait time for the throttled bio under the new configuration.
> 
> Some simple tests:
> 1)
> cd /sys/fs/cgroup/blkio/
> echo $$ > cgroup.procs
> echo "8:0 2048" > blkio.throttle.write_bps_device
> {
>         sleep 2
>         echo "8:0 1024" > blkio.throttle.write_bps_device
> } &
> dd if=/dev/zero of=/dev/sda bs=8k count=1 oflag=direct
> 
> 2)
> cd /sys/fs/cgroup/blkio/
> echo $$ > cgroup.procs
> echo "8:0 1024" > blkio.throttle.write_bps_device
> {
>         sleep 4
>         echo "8:0 2048" > blkio.throttle.write_bps_device
> } &
> dd if=/dev/zero of=/dev/sda bs=8k count=1 oflag=direct
> 
> test results: io finish time
> 	before this patch	with this patch
> 1)	10s			6s
> 2)	8s			6s
> 
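For reference, the finish times above follow directly from the rate
arithmetic. Here is a toy model of the two behaviors (plain
illustration, not kernel code; the function names are made up):

```python
# Illustration only: how crediting bytes already waited for changes the
# finish time when the write_bps limit is reconfigured mid-wait.

def finish_time_old(size, bps_old, bps_new, t_change):
    """Pre-patch behavior: the config update starts a new slice and
    recomputes the wait for the full bio size at the new rate."""
    if size / bps_old <= t_change:       # bio dispatched before the update
        return size / bps_old
    return t_change + size / bps_new     # full size charged again

def finish_time_new(size, bps_old, bps_new, t_change):
    """Patched behavior: bytes already earned while waiting are
    subtracted before recomputing under the new limit."""
    if size / bps_old <= t_change:
        return size / bps_old
    waited_bytes = bps_old * t_change    # credit for time already waited
    return t_change + (size - waited_bytes) / bps_new

# Test 1: 8k write, 2048 B/s dropped to 1024 B/s after 2s
print(finish_time_old(8192, 2048, 1024, 2))  # 10.0
print(finish_time_new(8192, 2048, 1024, 2))  # 6.0
# Test 2: 8k write, 1024 B/s raised to 2048 B/s after 4s
print(finish_time_old(8192, 1024, 2048, 4))  # 8.0
print(finish_time_new(8192, 1024, 2048, 4))  # 6.0
```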
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>

For 2-4,

 Acked-by: Tejun Heo <tj@kernel.org>

Thanks.

-- 
tejun


Thread overview: 11+ messages
2022-08-23  3:31 [PATCH v8 0/4] blk-throttle bugfix Yu Kuai
2022-08-23  3:31 ` [PATCH v8 1/4] blk-throttle: fix that io throttle can only work for single bio Yu Kuai
2022-08-23 18:07   ` Tejun Heo
2022-08-24  1:15     ` Yu Kuai
2022-08-25 18:15       ` Tejun Heo
2022-08-26  1:07         ` Yu Kuai
2022-08-23  3:31 ` [PATCH v8 2/4] blk-throttle: prevent overflow while calculating wait time Yu Kuai
2022-08-23  3:31 ` [PATCH v8 3/4] blk-throttle: factor out code to calculate ios/bytes_allowed Yu Kuai
2022-08-23  3:31 ` [PATCH v8 4/4] blk-throttle: fix io hung due to configuration updates Yu Kuai
2022-08-23  9:41   ` Michal Koutný
2022-08-23 18:08   ` Tejun Heo [this message]
