Linux block layer
From: Yu Kuai <yukuai1@huaweicloud.com>
To: Yu Kuai <yukuai1@huaweicloud.com>,
	tj@kernel.org, axboe@kernel.dk, paolo.valente@linaro.org,
	jack@suse.cz
Cc: cgroups@vger.kernel.org, linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org, yi.zhang@huawei.com,
	"yukuai (C)" <yukuai3@huawei.com>
Subject: Re: [patch v11 0/6] support concurrent sync io for bfq on a special occasion
Date: Tue, 1 Nov 2022 19:32:45 +0800
Message-ID: <b2099166-5386-d60b-0b31-e5cd40ef97da@huaweicloud.com>
In-Reply-To: <20220916071942.214222-1-yukuai1@huaweicloud.com>

Hi, Jens

On 2022/09/16 15:19, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
>
> Currently, bfq can't handle sync IO concurrently as long as it is
> not issued from the root group. This is because
> 'bfqd->num_groups_with_pending_reqs > 0' is always true in
> bfq_asymmetric_scenario().
> 
> How a bfqg is counted in 'num_groups_with_pending_reqs':
> 
> Before this patchset:
>   1) The root group is never counted.
>   2) A bfqg is counted if it or any of its child bfqgs has pending
>      requests.
>   3) A bfqg stops being counted only once it and all of its child
>      bfqgs have completed all their requests.
> 
> After this patchset:
>   1) The root group is counted.
>   2) A bfqg is counted if it has pending requests of its own.
>   3) A bfqg stops being counted once it has completed all of its
>      requests.
> 
> With the above changes, concurrent sync IO can be supported as long
> as only one group is active.
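
A minimal sketch of the counting rules above may help; it is not the
actual patch, the type and helper names (sketch_bfq_group,
bfqg_add_pending(), ...) are made up for illustration, and only the
counting logic follows the cover letter:

/* Per-group state: how many of the group's own queues have pending
 * requests (cf. patch 2/6). */
struct sketch_bfq_group {
	unsigned int num_queues_with_pending_reqs;
};

struct sketch_bfq_data {
	unsigned int num_groups_with_pending_reqs;
};

/* After the patchset, a group -- the root group included -- is
 * counted exactly while it has pending requests of its own; parent
 * groups are no longer walked up and counted on behalf of their
 * children. */
static void bfqg_add_pending(struct sketch_bfq_data *bfqd,
			     struct sketch_bfq_group *bfqg)
{
	if (bfqg->num_queues_with_pending_reqs++ == 0)
		bfqd->num_groups_with_pending_reqs++;
}

static void bfqg_del_pending(struct sketch_bfq_data *bfqd,
			     struct sketch_bfq_group *bfqg)
{
	if (--bfqg->num_queues_with_pending_reqs == 0)
		bfqd->num_groups_with_pending_reqs--;
}

Because the root group is now counted too, "only one group has
pending requests" identifies the symmetric case, which is what lets
bfq serve sync IO concurrently instead of idling (cf. patch 4/6).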

Can you apply this patchset?

Thanks,
Kuai



Thread overview: 21+ messages
2022-09-16  7:19 [patch v11 0/6] support concurrent sync io for bfq on a special occasion Yu Kuai
2022-09-16  7:19 ` [patch v11 1/6] block, bfq: support to track if bfqq has pending requests Yu Kuai
2022-09-16  7:19 ` [patch v11 2/6] block, bfq: record how many queues have pending requests Yu Kuai
2022-09-16  7:19 ` [patch v11 3/6] block, bfq: refactor the counting of 'num_groups_with_pending_reqs' Yu Kuai
2022-09-27 16:32   ` Paolo Valente
2022-09-27 16:33     ` Paolo VALENTE
2022-09-16  7:19 ` [patch v11 4/6] block, bfq: do not idle if only one group is activated Yu Kuai
2022-09-16  7:19 ` [patch v11 5/6] block, bfq: cleanup bfq_weights_tree add/remove apis Yu Kuai
2022-09-19  8:46   ` Jan Kara
2022-09-27 16:19     ` Paolo Valente
2022-09-16  7:19 ` [patch v11 6/6] block, bfq: cleanup __bfq_weights_tree_remove() Yu Kuai
2022-09-27 16:38 ` [patch v11 0/6] support concurrent sync io for bfq on a special occasion Paolo Valente
2022-09-28  1:07   ` Yu Kuai
2022-10-11  8:11   ` Yu Kuai
2022-10-11  8:21     ` Paolo Valente
2022-10-11  9:36       ` Yu Kuai
2022-10-18  4:00         ` Yu Kuai
2022-10-25  6:34           ` Paolo VALENTE
2022-10-25  7:31             ` Yu Kuai
2022-11-01 11:32 ` Yu Kuai [this message]
2022-11-01 13:10 ` Jens Axboe

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=b2099166-5386-d60b-0b31-e5cd40ef97da@huaweicloud.com \
    --to=yukuai1@huaweicloud.com \
    --cc=axboe@kernel.dk \
    --cc=cgroups@vger.kernel.org \
    --cc=jack@suse.cz \
    --cc=linux-block@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=paolo.valente@linaro.org \
    --cc=tj@kernel.org \
    --cc=yi.zhang@huawei.com \
    --cc=yukuai3@huawei.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

Be sure your reply has a Subject: header at the top and a blank line before the message body.