From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753995AbdJIKkn (ORCPT );
	Mon, 9 Oct 2017 06:40:43 -0400
Received: from mx1.redhat.com ([209.132.183.28]:45992 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751550AbdJIKkl (ORCPT );
	Mon, 9 Oct 2017 06:40:41 -0400
DMARC-Filter: OpenDMARC Filter v1.3.2 mx1.redhat.com 842FD8553E
Authentication-Results: ext-mx04.extmail.prod.ext.phx2.redhat.com;
	dmarc=none (p=none dis=none) header.from=redhat.com
Authentication-Results: ext-mx04.extmail.prod.ext.phx2.redhat.com;
	spf=fail smtp.mailfrom=ming.lei@redhat.com
Date: Mon, 9 Oct 2017 18:40:20 +0800
From: Ming Lei
To: Christoph Hellwig
Cc: Jens Axboe, linux-block@vger.kernel.org, Mike Snitzer,
	dm-devel@redhat.com, Bart Van Assche, Laurence Oberman,
	Paolo Valente, Oleksandr Natalenko, Tom Nguyen,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	Omar Sandoval
Subject: Re: [PATCH V5 7/7] blk-mq-sched: don't dequeue request until all in ->dispatch are flushed
Message-ID: <20171009104019.GC22168@ming.t460p>
References: <20170930102720.30219-1-ming.lei@redhat.com>
	<20170930102720.30219-8-ming.lei@redhat.com>
	<20171003091128.GD21184@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20171003091128.GD21184@infradead.org>
User-Agent: Mutt/1.8.3 (2017-05-23)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16
	(mx1.redhat.com [10.5.110.28]);
	Mon, 09 Oct 2017 10:40:41 +0000 (UTC)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Oct 03, 2017 at 02:11:28AM -0700, Christoph Hellwig wrote:
> This looks good in general:
>
> Reviewed-by: Christoph Hellwig
>
> Minor nitpicks below:
>
> > 	const bool has_sched_dispatch = e && e->type->ops.mq.dispatch_request;
>
> This is now only tested once, so you can remove the local variable
> for it.

There are still two users of the local variable, so I suggest keeping it.

> > +	/*
> > +	 * We may clear DISPATCH_BUSY just after it
> > +	 * is set from another context, the only cost
> > +	 * is that one request is dequeued a bit early,
> > +	 * we can survive that. Given the window is
> > +	 * small enough, no need to worry about performance
> > +	 * effect.
> > +	 */
>
> Use your 80 line real estate for comments please.

OK.

> > 	if (!has_sched_dispatch)
> > +		if (!q->queue_depth) {
> > +			blk_mq_flush_busy_ctxs(hctx, &rq_list);
> > +			blk_mq_dispatch_rq_list(q, &rq_list);
> > +		} else {
> > +			blk_mq_do_dispatch_ctx(q, hctx);
> > +		}
> > +	} else {
> > 		blk_mq_do_dispatch_sched(q, e, hctx);
> > +	}
>
> Maybe flatten this out to:
>
> 	if (e && e->type->ops.mq.dispatch_request) {
> 		blk_mq_do_dispatch_sched(q, e, hctx);
> 	} else if (q->queue_depth) {
> 		blk_mq_do_dispatch_ctx(q, hctx);
> 	} else {
> 		blk_mq_flush_busy_ctxs(hctx, &rq_list);
> 		blk_mq_dispatch_rq_list(q, &rq_list);
> 	}

OK.

-- 
Ming