From: Ming Lei <ming.lei@redhat.com>
To: "jianchao.wang" <jianchao.w.wang@oracle.com>
Cc: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>
Subject: Re: No protection on the hctx->dispatch_busy
Date: Mon, 27 Aug 2018 15:34:17 +0800
Message-ID: <20180827073416.GB20731@ming.t460p>
In-Reply-To: <923fdf9f-7081-4952-8778-34d01836bb2b@oracle.com>
On Mon, Aug 27, 2018 at 03:25:50PM +0800, jianchao.wang wrote:
>
>
> On 08/27/2018 03:00 PM, Ming Lei wrote:
> > On Mon, Aug 27, 2018 at 01:56:39PM +0800, jianchao.wang wrote:
> >> Hi Ming
> >>
> >> Currently, blk_mq_update_dispatch_busy is hooked into blk_mq_dispatch_rq_list
> >> and __blk_mq_issue_directly, so it can be invoked on multiple CPUs
> >> concurrently. But there is no protection on hctx->dispatch_busy, so we cannot
> >> guarantee that the update of dispatch_busy is atomic.
> >
> > The update itself is atomic given the type of this variable is 'unsigned int'.
>
> blk_mq_update_dispatch_busy() doesn't just write to an unsigned int variable;
> it reads, calculates and then writes. The whole operation is not atomic.

It won't be a big deal since the update is exponentially weighted wrt. busy, and again
hctx->dispatch_busy is just a hint for improving performance.
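
For context, the update in question is a simple lockless EWMA on hctx->dispatch_busy.
A minimal sketch of that logic is below, assuming a weight of 8 and a busy contribution
of 1 << 4; these constants and the function/macro names are assumptions inferred from
the old/ewma pairs in the trace quoted further down, not copied from the source:

	/*
	 * Sketch only (assumes struct blk_mq_hw_ctx from <linux/blk-mq.h>);
	 * the weight and shift below are assumed values that reproduce the
	 * old -> ewma transitions seen in the trace.
	 */
	#define EWMA_WEIGHT	8	/* new sample gets 1/8 weight */
	#define EWMA_FACTOR	4	/* a busy sample contributes 1 << 4 */

	static void update_dispatch_busy(struct blk_mq_hw_ctx *hctx, bool busy)
	{
		unsigned int ewma = hctx->dispatch_busy;	/* plain, unlocked read */

		if (!ewma && !busy)
			return;					/* fast path: stays idle */

		ewma *= EWMA_WEIGHT - 1;
		if (busy)
			ewma += 1 << EWMA_FACTOR;
		ewma /= EWMA_WEIGHT;

		hctx->dispatch_busy = ewma;			/* plain, unlocked write */
	}

Both the read and the write are plain accesses, which is exactly the
read-modify-write race being discussed here.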
>
> >
> >>
> >>
> >> Look at the test result after applying the debug patch below:
> >>
> >> fio-1761 [000] .... 227.246251: blk_mq_update_dispatch_busy.part.50: old 0 ewma 2 cur 2
> >> fio-1766 [004] .... 227.246252: blk_mq_update_dispatch_busy.part.50: old 2 ewma 1 cur 1
> >> fio-1755 [000] .... 227.246366: blk_mq_update_dispatch_busy.part.50: old 1 ewma 0 cur 0
> >> fio-1754 [003] .... 227.266050: blk_mq_update_dispatch_busy.part.50: old 2 ewma 3 cur 3
> >> fio-1763 [007] .... 227.266050: blk_mq_update_dispatch_busy.part.50: old 0 ewma 2 cur 2
> >> fio-1761 [000] .... 227.266051: blk_mq_update_dispatch_busy.part.50: old 3 ewma 2 cur 2
> >> fio-1766 [004] .... 227.266051: blk_mq_update_dispatch_busy.part.50: old 3 ewma 2 cur 2
> >> fio-1760 [005] .... 227.266165: blk_mq_update_dispatch_busy.part.50: old 2 ewma 1 cur 1
> >>
> ...
> >>
> >> Is this expected?
> >
> > Yes, it won't be an issue in reality given hctx->dispatch_busy is used as
> > a hint; it usually works as expected, and hctx->dispatch_busy eventually
> > converges because it is an exponentially weighted moving average.
> I am just concerned that the value of dispatch_busy will bounce up and down in a small
> range under a high workload on systems with 32 or more cores, due to caching and the non-atomic update.

If one path sees it as busy, we take it as busy; that is fine for the current
usage. If the IO dispatch finally becomes not busy, hctx->dispatch_busy will be updated
as expected sooner or later.
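
To make the convergence argument concrete with the assumed constants from the sketch
above: a busy update moves ewma from 0 to (0 * 7 + 16) / 8 = 2, and a not-busy update
moves 2 back to 2 * 7 / 8 = 1, matching the old/ewma pairs in the trace. If a racing
CPU overwrites that result with a value computed from a stale read, the hint is off by
at most a step for that sample, and the next few updates pull it back toward the real
busy state; the worst case is a briefly stale hint, not incorrect dispatch behaviour.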
In short, I don't see an actual issue with this simple approach without any protection.
Thanks,
Ming