linux-block.vger.kernel.org archive mirror
From: Ming Lei <ming.lei@redhat.com>
To: Bart Van Assche <bvanassche@acm.org>
Cc: Jens Axboe <axboe@kernel.dk>,
	Christoph Hellwig <hch@infradead.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	Dexuan Cui <decui@microsoft.com>,
	linux-scsi@vger.kernel.org, Tejun Heo <tj@kernel.org>,
	Lai Jiangshan <jiangshanlai@gmail.com>
Subject: Re: [PATCH] block: reduce kblockd_mod_delayed_work_on() CPU consumption
Date: Thu, 16 Dec 2021 15:22:09 +0800	[thread overview]
Message-ID: <YbrpIQUs4WOhyiIX@T590> (raw)
In-Reply-To: <883ad44e-8421-1cb5-f3f4-4a8d193e2d5a@acm.org>

On Wed, Dec 15, 2021 at 09:40:38AM -0800, Bart Van Assche wrote:
> On 12/14/21 7:59 AM, Jens Axboe wrote:
> > On 12/14/21 8:04 AM, Christoph Hellwig wrote:
> > > So why not do a non-delayed queue_work for that case?  Might be good
> > > to get the scsi and workqueue maintaines involved to understand the
> > > issue a bit better first.
> > 
> > We can probably get by with doing just that, and just ignore if a delayed
> > work timer is already running.
> > 
> > Dexuan, can you try this one?
> > 
> > diff --git a/block/blk-core.c b/block/blk-core.c
> > index 1378d084c770..c1833f95cb97 100644
> > --- a/block/blk-core.c
> > +++ b/block/blk-core.c
> > @@ -1484,6 +1484,8 @@ EXPORT_SYMBOL(kblockd_schedule_work);
> >   int kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork,
> >   				unsigned long delay)
> >   {
> > +	if (!delay)
> > +		return queue_work_on(cpu, kblockd_workqueue, &dwork->work);
> >   	return mod_delayed_work_on(cpu, kblockd_workqueue, dwork, delay);
> >   }
> >   EXPORT_SYMBOL(kblockd_mod_delayed_work_on);
> 
> As Christoph already mentioned, it would be great to receive feedback from the
> workqueue maintainer about this patch since I'm not aware of other kernel code
> that queues delayed_work in a similar way.
> Regarding the feedback from the view of the SCSI subsystem: I'd like to see the
> block layer core track whether or not a queue needs to be run such that the
> scsi_run_queue_async() call can be removed from scsi_end_request(). No such call

scsi_run_queue_async() only handles restarting the queue after requests
are held back by SCSI's own per-device queue depth limit, and that path
shouldn't be hot nowadays.

> was present in the original conversion of the SCSI core from the legacy block
> layer to blk-mq. See also commit d285203cf647 ("scsi: add support for a blk-mq
> based I/O path.").

That isn't true; see scsi_next_command()->scsi_run_queue().


Thanks,
Ming



Thread overview: 6+ messages
2021-12-14 14:53 [PATCH] block: reduce kblockd_mod_delayed_work_on() CPU consumption Jens Axboe
2021-12-14 15:04 ` Christoph Hellwig
2021-12-14 15:59   ` Jens Axboe
2021-12-14 20:42     ` Dexuan Cui
2021-12-15 17:40     ` Bart Van Assche
2021-12-16  7:22       ` Ming Lei [this message]
