From: Christoph Hellwig
To: "Wang Jianchao (Kuaishou)"
Cc: Jens Axboe, Josef Bacik, Tejun Heo, Bart Van Assche, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org
Date: Thu, 17 Feb 2022 00:48:43 -0800
Subject: Re: [RFC V4 1/6] blk: prepare to make blk-rq-qos pluggable and modular
In-Reply-To: <20220217031349.98561-2-jianchao.wan9@gmail.com>

> {
> 	struct request_queue *q = rqos->q;
> -	const char *dir_name = rq_qos_id_to_name(rqos->id);
> +	const char *dir_name;
> +
> +	dir_name = rqos->ops->name ? rqos->ops->name : rq_qos_id_to_name(rqos->id);

Overly long line here.  And it would be much more readable if you used
a good old if/else.

> +static DEFINE_IDA(rq_qos_ida);
> +static int nr_rqos_blkcg_pols;
> +static DEFINE_MUTEX(rq_qos_mutex);
> +static LIST_HEAD(rq_qos_list);

Please use an allocating xarray instead of an IDA plus a list.

> +	/*
> +	 * queue must have been unregistered here, it is safe to iterate
> +	 * the list w/o lock
> +	 */

Please capitalize multi-line comments.

> + * After the pluggable blk-qos, rqos's life cycle become complicated,
> + * as we may modify the rqos list there. Except for the places where
> + * queue is not registered, there are following places may access rqos
> + * list concurrently:

Code comments are not the place to explain history.  Please explain the
current situation instead.

> +struct rq_qos *rq_qos_get(struct request_queue *q, int id)
> +{
> +	struct rq_qos *rqos;
> +
> +	spin_lock_irq(&q->queue_lock);

Please don't use the grab-all queue_lock for new code.  It badly needs
to be split up and documented, and new code is the best place to start
that.

Also, with all this new code, please add a new config option that is
selected by all rq-qos implementations, so that blk-rq-qos.c only gets
built when actually needed.
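For the config option, something like this in block/Kconfig and
block/Makefile (untested sketch; BLK_RQ_QOS is a name I just made up,
with BLK_WBT shown as one example of a selecting user):

```
config BLK_RQ_QOS
	bool

config BLK_WBT
	bool "Enable support for block device writeback throttling"
	select BLK_RQ_QOS
```

and then in the Makefile:

```
obj-$(CONFIG_BLK_RQ_QOS)	+= blk-rq-qos.o
```

That way kernels that build none of the rq-qos policies don't carry the
infrastructure at all.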
> +static inline struct rq_qos *rq_qos_by_id(struct request_queue *q, int id)
> +{
> +	struct rq_qos *rqos;
> +
> +	WARN_ON(!mutex_is_locked(&q->sysfs_lock) && !spin_is_locked(&q->queue_lock));

Another overly long line.  And if in doubt, split this into two helpers
so that you can use lockdep_assert_held instead of these incorrect
asserts.
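I.e. something like this (untested sketch; helper names made up, and
assuming the lookup just walks the q->rq_qos list):

```
/* Untested sketch: one helper per locking context, so that lockdep can
 * actually verify the caller holds the right lock, instead of the
 * WARN_ON on mutex_is_locked/spin_is_locked, which doesn't even check
 * that it is *this* caller holding the lock.
 */
static inline struct rq_qos *__rq_qos_by_id(struct request_queue *q, int id)
{
	struct rq_qos *rqos;

	for (rqos = q->rq_qos; rqos; rqos = rqos->next)
		if (rqos->id == id)
			return rqos;
	return NULL;
}

/* For callers holding q->queue_lock. */
static inline struct rq_qos *rq_qos_by_id_locked(struct request_queue *q,
		int id)
{
	lockdep_assert_held(&q->queue_lock);
	return __rq_qos_by_id(q, id);
}

/* For callers holding q->sysfs_lock. */
static inline struct rq_qos *rq_qos_by_id_sysfs(struct request_queue *q,
		int id)
{
	lockdep_assert_held(&q->sysfs_lock);
	return __rq_qos_by_id(q, id);
}
```

Then each call site picks the variant matching the lock it actually
holds, and lockdep complains the moment someone gets it wrong.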