From: Damien Le Moal <damien.lemoal@opensource.wdc.com>
To: Paolo Valente <paolo.valente@linaro.org>, Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	arie.vanderhoeven@seagate.com, rory.c.chen@seagate.com,
	glen.valante@linaro.org, Gabriele Felici <felicigb@gmail.com>,
	Carmine Zaccagnino <carmine@carminezacc.com>
Subject: Re: [PATCH V12 1/8] block, bfq: split sync bfq_queues on a per-actuator basis
Date: Tue, 27 Dec 2022 10:37:00 +0900
Message-ID: <71738b57-ecd0-f95c-9c42-b7686c3232b6@opensource.wdc.com>
In-Reply-To: <20221222152157.61789-2-paolo.valente@linaro.org>

On 12/23/22 00:21, Paolo Valente wrote:
> Single-LUN multi-actuator SCSI drives, as well as all multi-actuator
> SATA drives, appear as a single device to the I/O subsystem [1].  Yet
> they address commands to different actuators internally, as a function
> of the Logical Block Address (LBA). A given sector is reachable by
> only one of the actuators. For example, Seagate’s Serial Advanced
> Technology Attachment (SATA) version contains two actuators and maps
> the lower half of the SATA LBA space to the lower actuator and the
> upper half to the upper actuator.
> 
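
Side note, mostly for readers less familiar with these drives: resolving
which actuator a request targets is just a sector-range lookup. Below is
a standalone toy sketch of that lookup, with made-up names and types
(this is not code from this series); for the Seagate example above, the
range table would simply hold two entries covering the lower and upper
halves of the LBA space.

#include <stdint.h>

typedef uint64_t sector_t;	/* stand-in for the kernel's sector_t */

/* Hypothetical per-actuator sector range; names are illustrative only. */
struct actuator_range {
	sector_t start;
	sector_t nr_sectors;
};

/* Return the index of the actuator owning @sector, or 0 if none matches. */
static unsigned int sector_to_actuator(const struct actuator_range *ranges,
				       unsigned int nr_actuators,
				       sector_t sector)
{
	unsigned int i;

	for (i = 0; i < nr_actuators; i++)
		if (sector >= ranges[i].start &&
		    sector - ranges[i].start < ranges[i].nr_sectors)
			return i;

	return 0;
}
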
> Evidently, to fully utilize a multi-actuator drive, no actuator should
> be left idle or underutilized while there is pending I/O for it. The
> block layer must somehow control the load of each actuator
> individually. This commit lays the groundwork for allowing BFQ to
> provide such per-actuator control.
> 
> BFQ associates an I/O-request sync bfq_queue with each process doing
> synchronous I/O, or with a group of processes, in case of queue
> merging. Then BFQ serves one bfq_queue at a time. While in service, a
> bfq_queue is emptied in request-position order. Yet the same process,
> or group of processes, may generate I/O for different actuators. In
> this case, different streams of I/O (each for a different actuator)
> are all inserted into the same sync bfq_queue. So there is basically
> no individual control over when each stream is served, i.e., over
> when the I/O requests of the stream are picked from the bfq_queue and
> dispatched to the drive.
> 
> This commit enables BFQ to control the service of each actuator
> individually for synchronous I/O, by simply splitting each sync
> bfq_queue into N queues, one for each actuator. In other words, a sync
> bfq_queue is now associated with a pair (process, actuator). As a
> consequence of this split, the per-queue proportional-share policy
> implemented by BFQ will guarantee that the sync I/O generated for each
> actuator, by each process, receives its fair share of service.
> 
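
To visualize the split: as I read the series, the single per-process
sync queue pointer in bfq_io_cq effectively becomes a small array
indexed by (is_sync, actuator index), which is also what the
bic_to_bfqq(bic, op_is_sync(opf), act_idx) call in the hunk below
relies on. A compilable toy sketch follows; the field layout, the
BFQ_MAX_ACTUATORS value and the helper body here are illustrative,
not necessarily the exact ones in the patch.

#include <stdbool.h>

#define BFQ_MAX_ACTUATORS 8	/* illustrative bound, not necessarily the kernel's */

struct bfq_queue;		/* opaque here */

struct bfq_io_cq {
	/* one queue per (is_sync, actuator) pair; other fields elided */
	struct bfq_queue *bfqq[2][BFQ_MAX_ACTUATORS];
};

static struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync,
				     unsigned int actuator_idx)
{
	return bic->bfqq[is_sync][actuator_idx];
}
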
> This is just a preparatory patch. If the I/O of the same process
> happens to be sent to different queues, then each of these queues may
> undergo queue merging. To handle this event, the bfq_io_cq data
> structure must be properly extended. In addition, stable merging must
> be disabled to avoid loss of control over individual actuators.
> Finally, async queues must be split as well. These issues are
> described in detail and addressed in the next commits. As for this
> commit, although multiple
> per-process bfq_queues are provided, the I/O of each process or group
> of processes is still sent to only one queue, regardless of the
> actuator the I/O is for. The forwarding to distinct bfq_queues will be
> enabled after addressing the above issues.
> 
> [1] https://www.linaro.org/blog/budget-fair-queueing-bfq-linux-io-scheduler-optimizations-for-multi-actuator-sata-hard-drives/
> 
> Signed-off-by: Gabriele Felici <felicigb@gmail.com>
> Signed-off-by: Carmine Zaccagnino <carmine@carminezacc.com>
> Signed-off-by: Paolo Valente <paolo.valente@linaro.org>

One style nit below.

Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>

> @@ -690,14 +700,25 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
>  		limit = (limit * depth) >> bfqd->full_depth_shift;
>  	}
>  
> -	/*
> -	 * Does queue (or any parent entity) exceed number of requests that
> -	 * should be available to it? Heavily limit depth so that it cannot
> -	 * consume more available requests and thus starve other entities.
> -	 */
> -	if (bfqq && bfqq_request_over_limit(bfqq, limit))
> -		depth = 1;
> +	for (act_idx = 0; act_idx < bfqd->num_actuators; act_idx++) {
> +		struct bfq_queue *bfqq;
> +
> +		if (bic)
> +			bfqq = bic_to_bfqq(bic, op_is_sync(opf), act_idx);
> +		else
> +			break;
>  
> +		/*
> +		 * Does queue (or any parent entity) exceed number of
> +		 * requests that should be available to it? Heavily
> +		 * limit depth so that it cannot consume more
> +		 * available requests and thus starve other entities.
> +		 */
> +		if (bfqq && bfqq_request_over_limit(bfqq, limit)) {
> +			depth = 1;
> +			break;
> +		}

You could reverse the if condition to make this cleaner or, even better,
include the bic test in the for loop condition:

for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) {
	struct bfq_queue *bfqq;

	/*
	 * Does queue (or any parent entity) exceed number of
	 * requests that should be available to it? Heavily
	 * limit depth so that it cannot consume more
	 * available requests and thus starve other entities.
	 */
	bfqq = bic_to_bfqq(bic, op_is_sync(opf), act_idx);
	if (bfqq && bfqq_request_over_limit(bfqq, limit)) {
		depth = 1;
		break;
	}
}
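
That way the !bic case is handled by the loop condition itself, so the
loop body is only about the per-actuator limit check.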


-- 
Damien Le Moal
Western Digital Research



Thread overview: 11+ messages
2022-12-22 15:21 [PATCH V12 0/8] block, bfq: extend bfq to support multi-actuator drives Paolo Valente
2022-12-22 15:21 ` [PATCH V12 1/8] block, bfq: split sync bfq_queues on a per-actuator basis Paolo Valente
2022-12-27  1:37   ` Damien Le Moal [this message]
2022-12-29 20:35     ` Paolo Valente
2022-12-22 15:21 ` [PATCH V12 2/8] block, bfq: forbid stable merging of queues associated with different actuators Paolo Valente
2022-12-22 15:21 ` [PATCH V12 3/8] block, bfq: move io_cq-persistent bfqq data into a dedicated struct Paolo Valente
2022-12-22 15:21 ` [PATCH V12 4/8] block, bfq: turn bfqq_data into an array in bfq_io_cq Paolo Valente
2022-12-22 15:21 ` [PATCH V12 5/8] block, bfq: split also async bfq_queues on a per-actuator basis Paolo Valente
2022-12-22 15:21 ` [PATCH V12 6/8] block, bfq: retrieve independent access ranges from request queue Paolo Valente
2022-12-22 15:21 ` [PATCH V12 7/8] block, bfq: inject I/O to underutilized actuators Paolo Valente
2022-12-22 15:21 ` [PATCH V12 8/8] block, bfq: balance I/O injection among " Paolo Valente
