public inbox for stable@vger.kernel.org
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	patches@lists.linux.dev,
	Damien Le Moal <damien.lemoal@opensource.wdc.com>,
	Gabriele Felici <felicigb@gmail.com>,
	Carmine Zaccagnino <carmine@carminezacc.com>,
	Paolo Valente <paolo.valente@linaro.org>,
	Jens Axboe <axboe@kernel.dk>, Hagar Hemdan <hagarhem@amazon.com>
Subject: [PATCH 6.1 091/176] block, bfq: split sync bfq_queues on a per-actuator basis
Date: Wed,  5 Mar 2025 18:47:40 +0100	[thread overview]
Message-ID: <20250305174509.116478991@linuxfoundation.org> (raw)
In-Reply-To: <20250305174505.437358097@linuxfoundation.org>

6.1-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Paolo Valente <paolo.valente@linaro.org>

commit 9778369a2d6c5ed2b81a04164c4aa9da1bdb193d upstream.

Single-LUN multi-actuator SCSI drives, as well as all multi-actuator
SATA drives, appear as a single device to the I/O subsystem [1]. Yet
they address commands to different actuators internally, as a function
of the Logical Block Address (LBA). A given sector is reachable by
only one of the actuators. For example, Seagate’s Serial Advanced
Technology Attachment (SATA) version contains two actuators and maps
the lower half of the SATA LBA space to the lower actuator and the
upper half to the upper actuator.
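The LBA-range mapping described above can be sketched as follows. This is a hypothetical illustration, not kernel code: the function name, the halving rule, and the capacity parameter are assumptions modeled on the Seagate dual-actuator example, where the lower half of the LBA space belongs to actuator 0 and the upper half to actuator 1.

```c
#include <assert.h>

/*
 * Hypothetical sketch of per-sector actuator selection on a
 * dual-actuator drive: sectors in the lower half of the LBA space
 * are served by actuator 0, sectors in the upper half by actuator 1.
 * 'capacity' is the total number of addressable sectors.
 */
unsigned int actuator_for_lba(unsigned long long lba,
			      unsigned long long capacity)
{
	return lba < capacity / 2 ? 0 : 1;
}
```

A real driver would obtain the per-actuator LBA ranges from the device rather than assume an even split, but the point stands: the actuator is a pure function of the LBA, so the block layer can tell the streams apart.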

Evidently, to fully utilize the actuators, no actuator must be left
idle or underutilized while there is pending I/O for it. The block
layer must somehow control the load of each actuator individually.
This commit lays the groundwork for allowing BFQ to provide such
per-actuator control.

BFQ associates a sync bfq_queue with each process doing synchronous
I/O, or with a group of processes in the case of queue merging. BFQ
then serves one bfq_queue at a time. While in service, a bfq_queue is
emptied in request-position order. Yet the same process, or group of
processes, may generate I/O for different actuators. In this case,
different streams of I/O (one for each actuator) all get inserted
into the same sync bfq_queue. So there is essentially no individual
control over when each stream is served, i.e., over when the I/O
requests of a stream are picked from the bfq_queue and dispatched to
the drive.

This commit enables BFQ to control the service of each actuator
individually for synchronous I/O, by simply splitting each sync
bfq_queue into N queues, one for each actuator. In other words, a sync
bfq_queue is now associated with a pair (process, actuator). As a
consequence of this split, the per-queue proportional-share policy
implemented by BFQ will guarantee that the sync I/O generated for each
actuator, by each process, receives its fair share of service.
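The resulting per-(process, actuator) bookkeeping can be sketched as below. This is a simplified stand-in, not the kernel structures: the types are stripped down to the one field that matters here, and the helper names carry a `_sketch` suffix to mark them as illustrative. The shape mirrors the patch, where bfq_io_cq grows a bfqq[2][BFQ_MAX_ACTUATORS] matrix (row 0 async, row 1 sync, one column per actuator) and bic_to_bfqq()/bic_set_bfqq() gain an actuator_idx parameter.

```c
#include <assert.h>
#include <stddef.h>

#define BFQ_MAX_ACTUATORS 8

/* Stripped-down stand-in for struct bfq_queue */
struct bfq_queue {
	unsigned int actuator_idx;
};

/*
 * Stand-in for the reshaped struct bfq_io_cq: row 0 holds the async
 * queues, row 1 the sync ones, one column per actuator.
 */
struct bic_sketch {
	struct bfq_queue *bfqq[2][BFQ_MAX_ACTUATORS];
};

/* Mirrors the new bic_to_bfqq(bic, is_sync, actuator_idx) lookup */
struct bfq_queue *bic_to_bfqq_sketch(struct bic_sketch *bic, int is_sync,
				     unsigned int actuator_idx)
{
	return bic->bfqq[!!is_sync][actuator_idx];
}

/* Mirrors the new bic_set_bfqq(): fills one (sync/async, actuator) slot */
void bic_set_bfqq_sketch(struct bic_sketch *bic, struct bfq_queue *bfqq,
			 int is_sync, unsigned int actuator_idx)
{
	bic->bfqq[!!is_sync][actuator_idx] = bfqq;
}
```

With this layout, two sync streams of the same process that target different actuators land in two distinct columns of the matrix, so BFQ can schedule them independently.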

This is just a preparatory patch. If the I/O of the same process
happens to be sent to different queues, then each of these queues may
undergo queue merging. To handle this event, the bfq_io_cq data
structure must be properly extended. In addition, stable merging must
be disabled to avoid losing control over individual actuators.
Finally, async queues must be split as well. These issues are
described in detail and addressed in the next commits. As for this
commit, although multiple per-process bfq_queues are provided, the I/O
of each process or group of processes is still sent to only one queue,
regardless of the actuator the I/O is for. Forwarding to distinct
bfq_queues will be enabled after the above issues are addressed.

[1] https://www.linaro.org/blog/budget-fair-queueing-bfq-linux-io-scheduler-optimizations-for-multi-actuator-sata-hard-drives/

Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Signed-off-by: Gabriele Felici <felicigb@gmail.com>
Signed-off-by: Carmine Zaccagnino <carmine@carminezacc.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Link: https://lore.kernel.org/r/20230103145503.71712-2-paolo.valente@linaro.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Stable-dep-of: e8b8344de398 ("block, bfq: fix bfqq uaf in bfq_limit_depth()")
[Hagar: needed contextual fixes]
Signed-off-by: Hagar Hemdan <hagarhem@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 block/bfq-cgroup.c  |   97 ++++++++++++++++---------------
 block/bfq-iosched.c |  160 ++++++++++++++++++++++++++++++++++------------------
 block/bfq-iosched.h |   51 +++++++++++++---
 3 files changed, 197 insertions(+), 111 deletions(-)

--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -704,6 +704,46 @@ void bfq_bfqq_move(struct bfq_data *bfqd
 	bfq_put_queue(bfqq);
 }
 
+static void bfq_sync_bfqq_move(struct bfq_data *bfqd,
+			       struct bfq_queue *sync_bfqq,
+			       struct bfq_io_cq *bic,
+			       struct bfq_group *bfqg,
+			       unsigned int act_idx)
+{
+	struct bfq_queue *bfqq;
+
+	if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
+		/* We are the only user of this bfqq, just move it */
+		if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
+			bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
+		return;
+	}
+
+	/*
+	 * The queue was merged to a different queue. Check
+	 * that the merge chain still belongs to the same
+	 * cgroup.
+	 */
+	for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
+		if (bfqq->entity.sched_data != &bfqg->sched_data)
+			break;
+	if (bfqq) {
+		/*
+		 * Some queue changed cgroup so the merge is not valid
+		 * anymore. We cannot easily just cancel the merge (by
+		 * clearing new_bfqq) as there may be other processes
+		 * using this queue and holding refs to all queues
+		 * below sync_bfqq->new_bfqq. Similarly if the merge
+		 * already happened, we need to detach from bfqq now
+		 * so that we cannot merge bio to a request from the
+		 * old cgroup.
+		 */
+		bfq_put_cooperator(sync_bfqq);
+		bfq_release_process_ref(bfqd, sync_bfqq);
+		bic_set_bfqq(bic, NULL, true, act_idx);
+	}
+}
+
 /**
  * __bfq_bic_change_cgroup - move @bic to @bfqg.
  * @bfqd: the queue descriptor.
@@ -714,60 +754,25 @@ void bfq_bfqq_move(struct bfq_data *bfqd
  * sure that the reference to cgroup is valid across the call (see
  * comments in bfq_bic_update_cgroup on this issue)
  */
-static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+static void __bfq_bic_change_cgroup(struct bfq_data *bfqd,
 				     struct bfq_io_cq *bic,
 				     struct bfq_group *bfqg)
 {
-	struct bfq_queue *async_bfqq = bic_to_bfqq(bic, false);
-	struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, true);
-	struct bfq_entity *entity;
-
-	if (async_bfqq) {
-		entity = &async_bfqq->entity;
+	unsigned int act_idx;
 
-		if (entity->sched_data != &bfqg->sched_data) {
-			bic_set_bfqq(bic, NULL, false);
+	for (act_idx = 0; act_idx < bfqd->num_actuators; act_idx++) {
+		struct bfq_queue *async_bfqq = bic_to_bfqq(bic, false, act_idx);
+		struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, true, act_idx);
+
+		if (async_bfqq &&
+		    async_bfqq->entity.sched_data != &bfqg->sched_data) {
+			bic_set_bfqq(bic, NULL, false, act_idx);
 			bfq_release_process_ref(bfqd, async_bfqq);
 		}
-	}
 
-	if (sync_bfqq) {
-		if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) {
-			/* We are the only user of this bfqq, just move it */
-			if (sync_bfqq->entity.sched_data != &bfqg->sched_data)
-				bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
-		} else {
-			struct bfq_queue *bfqq;
-
-			/*
-			 * The queue was merged to a different queue. Check
-			 * that the merge chain still belongs to the same
-			 * cgroup.
-			 */
-			for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq)
-				if (bfqq->entity.sched_data !=
-				    &bfqg->sched_data)
-					break;
-			if (bfqq) {
-				/*
-				 * Some queue changed cgroup so the merge is
-				 * not valid anymore. We cannot easily just
-				 * cancel the merge (by clearing new_bfqq) as
-				 * there may be other processes using this
-				 * queue and holding refs to all queues below
-				 * sync_bfqq->new_bfqq. Similarly if the merge
-				 * already happened, we need to detach from
-				 * bfqq now so that we cannot merge bio to a
-				 * request from the old cgroup.
-				 */
-				bfq_put_cooperator(sync_bfqq);
-				bic_set_bfqq(bic, NULL, true);
-				bfq_release_process_ref(bfqd, sync_bfqq);
-			}
-		}
+		if (sync_bfqq)
+			bfq_sync_bfqq_move(bfqd, sync_bfqq, bic, bfqg, act_idx);
 	}
-
-	return bfqg;
 }
 
 void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -377,16 +377,23 @@ static const unsigned long bfq_late_stab
 #define RQ_BIC(rq)		((struct bfq_io_cq *)((rq)->elv.priv[0]))
 #define RQ_BFQQ(rq)		((rq)->elv.priv[1])
 
-struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync)
+struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync,
+			      unsigned int actuator_idx)
 {
-	return bic->bfqq[is_sync];
+	if (is_sync)
+		return bic->bfqq[1][actuator_idx];
+
+	return bic->bfqq[0][actuator_idx];
 }
 
 static void bfq_put_stable_ref(struct bfq_queue *bfqq);
 
-void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync)
+void bic_set_bfqq(struct bfq_io_cq *bic,
+		  struct bfq_queue *bfqq,
+		  bool is_sync,
+		  unsigned int actuator_idx)
 {
-	struct bfq_queue *old_bfqq = bic->bfqq[is_sync];
+	struct bfq_queue *old_bfqq = bic->bfqq[is_sync][actuator_idx];
 
 	/* Clear bic pointer if bfqq is detached from this bic */
 	if (old_bfqq && old_bfqq->bic == bic)
@@ -405,7 +412,10 @@ void bic_set_bfqq(struct bfq_io_cq *bic,
 	 * we cancel the stable merge if
 	 * bic->stable_merge_bfqq == bfqq.
 	 */
-	bic->bfqq[is_sync] = bfqq;
+	if (is_sync)
+		bic->bfqq[1][actuator_idx] = bfqq;
+	else
+		bic->bfqq[0][actuator_idx] = bfqq;
 
 	if (bfqq && bic->stable_merge_bfqq == bfqq) {
 		/*
@@ -680,9 +690,9 @@ static void bfq_limit_depth(blk_opf_t op
 {
 	struct bfq_data *bfqd = data->q->elevator->elevator_data;
 	struct bfq_io_cq *bic = bfq_bic_lookup(data->q);
-	struct bfq_queue *bfqq = bic ? bic_to_bfqq(bic, op_is_sync(opf)) : NULL;
 	int depth;
 	unsigned limit = data->q->nr_requests;
+	unsigned int act_idx;
 
 	/* Sync reads have full depth available */
 	if (op_is_sync(opf) && !op_is_write(opf)) {
@@ -692,14 +702,21 @@ static void bfq_limit_depth(blk_opf_t op
 		limit = (limit * depth) >> bfqd->full_depth_shift;
 	}
 
-	/*
-	 * Does queue (or any parent entity) exceed number of requests that
-	 * should be available to it? Heavily limit depth so that it cannot
-	 * consume more available requests and thus starve other entities.
-	 */
-	if (bfqq && bfqq_request_over_limit(bfqq, limit))
-		depth = 1;
+	for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) {
+		struct bfq_queue *bfqq =
+			bic_to_bfqq(bic, op_is_sync(opf), act_idx);
 
+		/*
+		 * Does queue (or any parent entity) exceed number of
+		 * requests that should be available to it? Heavily
+		 * limit depth so that it cannot consume more
+		 * available requests and thus starve other entities.
+		 */
+		if (bfqq && bfqq_request_over_limit(bfqq, limit)) {
+			depth = 1;
+			break;
+		}
+	}
 	bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
 		__func__, bfqd->wr_busy_queues, op_is_sync(opf), depth);
 	if (depth)
@@ -1820,6 +1837,18 @@ static bool bfq_bfqq_higher_class_or_wei
 	return bfqq_weight > in_serv_weight;
 }
 
+/*
+ * Get the index of the actuator that will serve bio.
+ */
+static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio)
+{
+	/*
+	 * Multi-actuator support not complete yet, so always return 0
+	 * for the moment (to keep incomplete mechanisms off).
+	 */
+	return 0;
+}
+
 static bool bfq_better_to_idle(struct bfq_queue *bfqq);
 
 static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
@@ -2150,7 +2179,7 @@ static void bfq_check_waker(struct bfq_d
 	 * We reset waker detection logic also if too much time has passed
  	 * since the first detection. If wakeups are rare, pointless idling
 	 * doesn't hurt throughput that much. The condition below makes sure
-	 * we do not uselessly idle blocking waker in more than 1/64 cases. 
+	 * we do not uselessly idle blocking waker in more than 1/64 cases.
 	 */
 	if (bfqd->last_completed_rq_bfqq !=
 	    bfqq->tentative_waker_bfqq ||
@@ -2486,7 +2515,8 @@ static bool bfq_bio_merge(struct request
 		 */
 		bfq_bic_update_cgroup(bic, bio);
 
-		bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf));
+		bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf),
+					     bfq_actuator_index(bfqd, bio));
 	} else {
 		bfqd->bio_bfqq = NULL;
 	}
@@ -3188,7 +3218,7 @@ static struct bfq_queue *bfq_merge_bfqqs
 	/*
 	 * Merge queues (that is, let bic redirect its requests to new_bfqq)
 	 */
-	bic_set_bfqq(bic, new_bfqq, true);
+	bic_set_bfqq(bic, new_bfqq, true, bfqq->actuator_idx);
 	bfq_mark_bfqq_coop(new_bfqq);
 	/*
 	 * new_bfqq now belongs to at least two bics (it is a shared queue):
@@ -4818,11 +4848,8 @@ check_queue:
 	 */
 	if (bfq_bfqq_wait_request(bfqq) ||
 	    (bfqq->dispatched != 0 && bfq_better_to_idle(bfqq))) {
-		struct bfq_queue *async_bfqq =
-			bfqq->bic && bfqq->bic->bfqq[0] &&
-			bfq_bfqq_busy(bfqq->bic->bfqq[0]) &&
-			bfqq->bic->bfqq[0]->next_rq ?
-			bfqq->bic->bfqq[0] : NULL;
+		unsigned int act_idx = bfqq->actuator_idx;
+		struct bfq_queue *async_bfqq = NULL;
 		struct bfq_queue *blocked_bfqq =
 			!hlist_empty(&bfqq->woken_list) ?
 			container_of(bfqq->woken_list.first,
@@ -4830,6 +4857,10 @@ check_queue:
 				     woken_list_node)
 			: NULL;
 
+		if (bfqq->bic && bfqq->bic->bfqq[0][act_idx] &&
+		    bfq_bfqq_busy(bfqq->bic->bfqq[0][act_idx]) &&
+		    bfqq->bic->bfqq[0][act_idx]->next_rq)
+			async_bfqq = bfqq->bic->bfqq[0][act_idx];
 		/*
 		 * The next four mutually-exclusive ifs decide
 		 * whether to try injection, and choose the queue to
@@ -4914,7 +4945,7 @@ check_queue:
 		    icq_to_bic(async_bfqq->next_rq->elv.icq) == bfqq->bic &&
 		    bfq_serv_to_charge(async_bfqq->next_rq, async_bfqq) <=
 		    bfq_bfqq_budget_left(async_bfqq))
-			bfqq = bfqq->bic->bfqq[0];
+			bfqq = bfqq->bic->bfqq[0][act_idx];
 		else if (bfqq->waker_bfqq &&
 			   bfq_bfqq_busy(bfqq->waker_bfqq) &&
 			   bfqq->waker_bfqq->next_rq &&
@@ -5375,48 +5406,54 @@ static void bfq_exit_bfqq(struct bfq_dat
 	bfq_release_process_ref(bfqd, bfqq);
 }
 
-static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync)
+static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync,
+			      unsigned int actuator_idx)
 {
-	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync);
+	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, actuator_idx);
 	struct bfq_data *bfqd;
 
 	if (bfqq)
 		bfqd = bfqq->bfqd; /* NULL if scheduler already exited */
 
 	if (bfqq && bfqd) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&bfqd->lock, flags);
-		bic_set_bfqq(bic, NULL, is_sync);
+		bic_set_bfqq(bic, NULL, is_sync, actuator_idx);
 		bfq_exit_bfqq(bfqd, bfqq);
-		spin_unlock_irqrestore(&bfqd->lock, flags);
 	}
 }
 
 static void bfq_exit_icq(struct io_cq *icq)
 {
 	struct bfq_io_cq *bic = icq_to_bic(icq);
+	struct bfq_data *bfqd = bic_to_bfqd(bic);
+	unsigned long flags;
+	unsigned int act_idx;
+	/*
+	 * If bfqd and thus bfqd->num_actuators is not available any
+	 * longer, then cycle over all possible per-actuator bfqqs in
+	 * next loop. We rely on bic being zeroed on creation, and
+	 * therefore on its unused per-actuator fields being NULL.
+	 */
+	unsigned int num_actuators = BFQ_MAX_ACTUATORS;
 
-	if (bic->stable_merge_bfqq) {
-		struct bfq_data *bfqd = bic->stable_merge_bfqq->bfqd;
+	/*
+	 * bfqd is NULL if scheduler already exited, and in that case
+	 * this is the last time these queues are accessed.
+	 */
+	if (bfqd) {
+		spin_lock_irqsave(&bfqd->lock, flags);
+		num_actuators = bfqd->num_actuators;
+	}
 
-		/*
-		 * bfqd is NULL if scheduler already exited, and in
-		 * that case this is the last time bfqq is accessed.
-		 */
-		if (bfqd) {
-			unsigned long flags;
+	if (bic->stable_merge_bfqq)
+		bfq_put_stable_ref(bic->stable_merge_bfqq);
 
-			spin_lock_irqsave(&bfqd->lock, flags);
-			bfq_put_stable_ref(bic->stable_merge_bfqq);
-			spin_unlock_irqrestore(&bfqd->lock, flags);
-		} else {
-			bfq_put_stable_ref(bic->stable_merge_bfqq);
-		}
+	for (act_idx = 0; act_idx < num_actuators; act_idx++) {
+		bfq_exit_icq_bfqq(bic, true, act_idx);
+		bfq_exit_icq_bfqq(bic, false, act_idx);
 	}
 
-	bfq_exit_icq_bfqq(bic, true);
-	bfq_exit_icq_bfqq(bic, false);
+	if (bfqd)
+		spin_unlock_irqrestore(&bfqd->lock, flags);
 }
 
 /*
@@ -5493,25 +5530,27 @@ static void bfq_check_ioprio_change(stru
 
 	bic->ioprio = ioprio;
 
-	bfqq = bic_to_bfqq(bic, false);
+	bfqq = bic_to_bfqq(bic, false, bfq_actuator_index(bfqd, bio));
 	if (bfqq) {
 		struct bfq_queue *old_bfqq = bfqq;
 
 		bfqq = bfq_get_queue(bfqd, bio, false, bic, true);
-		bic_set_bfqq(bic, bfqq, false);
+		bic_set_bfqq(bic, bfqq, false, bfq_actuator_index(bfqd, bio));
 		bfq_release_process_ref(bfqd, old_bfqq);
 	}
 
-	bfqq = bic_to_bfqq(bic, true);
+	bfqq = bic_to_bfqq(bic, true, bfq_actuator_index(bfqd, bio));
 	if (bfqq)
 		bfq_set_next_ioprio_data(bfqq, bic);
 }
 
 static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
-			  struct bfq_io_cq *bic, pid_t pid, int is_sync)
+			  struct bfq_io_cq *bic, pid_t pid, int is_sync,
+			  unsigned int act_idx)
 {
 	u64 now_ns = ktime_get_ns();
 
+	bfqq->actuator_idx = act_idx;
 	RB_CLEAR_NODE(&bfqq->entity.rb_node);
 	INIT_LIST_HEAD(&bfqq->fifo);
 	INIT_HLIST_NODE(&bfqq->burst_list_node);
@@ -5762,7 +5801,7 @@ static struct bfq_queue *bfq_get_queue(s
 
 	if (bfqq) {
 		bfq_init_bfqq(bfqd, bfqq, bic, current->pid,
-			      is_sync);
+			      is_sync, bfq_actuator_index(bfqd, bio));
 		bfq_init_entity(&bfqq->entity, bfqg);
 		bfq_log_bfqq(bfqd, bfqq, "allocated");
 	} else {
@@ -6078,7 +6117,8 @@ static bool __bfq_insert_request(struct
 		 * then complete the merge and redirect it to
 		 * new_bfqq.
 		 */
-		if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq) {
+		if (bic_to_bfqq(RQ_BIC(rq), true,
+				bfq_actuator_index(bfqd, rq->bio)) == bfqq) {
 			while (bfqq != new_bfqq)
 				bfqq = bfq_merge_bfqqs(bfqd, RQ_BIC(rq), bfqq);
 		}
@@ -6632,7 +6672,7 @@ bfq_split_bfqq(struct bfq_io_cq *bic, st
 		return bfqq;
 	}
 
-	bic_set_bfqq(bic, NULL, true);
+	bic_set_bfqq(bic, NULL, true, bfqq->actuator_idx);
 
 	bfq_put_cooperator(bfqq);
 
@@ -6646,7 +6686,8 @@ static struct bfq_queue *bfq_get_bfqq_ha
 						   bool split, bool is_sync,
 						   bool *new_queue)
 {
-	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync);
+	unsigned int act_idx = bfq_actuator_index(bfqd, bio);
+	struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync, act_idx);
 
 	if (likely(bfqq && bfqq != &bfqd->oom_bfqq))
 		return bfqq;
@@ -6658,7 +6699,7 @@ static struct bfq_queue *bfq_get_bfqq_ha
 		bfq_put_queue(bfqq);
 	bfqq = bfq_get_queue(bfqd, bio, is_sync, bic, split);
 
-	bic_set_bfqq(bic, bfqq, is_sync);
+	bic_set_bfqq(bic, bfqq, is_sync, act_idx);
 	if (split && is_sync) {
 		if ((bic->was_in_burst_list && bfqd->large_burst) ||
 		    bic->saved_in_large_burst)
@@ -7139,8 +7180,10 @@ static int bfq_init_queue(struct request
 	 * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues.
 	 * Grab a permanent reference to it, so that the normal code flow
 	 * will not attempt to free it.
+	 * Set zero as actuator index: we will pretend that
+	 * all I/O requests are for the same actuator.
 	 */
-	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0);
+	bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0, 0);
 	bfqd->oom_bfqq.ref++;
 	bfqd->oom_bfqq.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO;
 	bfqd->oom_bfqq.new_ioprio_class = IOPRIO_CLASS_BE;
@@ -7159,6 +7202,13 @@ static int bfq_init_queue(struct request
 
 	bfqd->queue = q;
 
+	/*
+	 * Multi-actuator support not complete yet, unconditionally
+	 * set to only one actuator for the moment (to keep incomplete
+	 * mechanisms off).
+	 */
+	bfqd->num_actuators = 1;
+
 	INIT_LIST_HEAD(&bfqd->dispatch);
 
 	hrtimer_init(&bfqd->idle_slice_timer, CLOCK_MONOTONIC,
--- a/block/bfq-iosched.h
+++ b/block/bfq-iosched.h
@@ -33,6 +33,14 @@
  */
 #define BFQ_SOFTRT_WEIGHT_FACTOR	100
 
+/*
+ * Maximum number of actuators supported. This constant is used simply
+ * to define the size of the static array that will contain
+ * per-actuator data. The current value is hopefully a good upper
+ * bound to the possible number of actuators of any actual drive.
+ */
+#define BFQ_MAX_ACTUATORS 8
+
 struct bfq_entity;
 
 /**
@@ -225,12 +233,14 @@ struct bfq_ttime {
  * struct bfq_queue - leaf schedulable entity.
  *
  * A bfq_queue is a leaf request queue; it can be associated with an
- * io_context or more, if it  is  async or shared  between  cooperating
- * processes. @cgroup holds a reference to the cgroup, to be sure that it
- * does not disappear while a bfqq still references it (mostly to avoid
- * races between request issuing and task migration followed by cgroup
- * destruction).
- * All the fields are protected by the queue lock of the containing bfqd.
+ * io_context or more, if it is async or shared between cooperating
+ * processes. Besides, it contains I/O requests for only one actuator
+ * (an io_context is associated with a different bfq_queue for each
+ * actuator it generates I/O for). @cgroup holds a reference to the
+ * cgroup, to be sure that it does not disappear while a bfqq still
+ * references it (mostly to avoid races between request issuing and
+ * task migration followed by cgroup destruction).  All the fields are
+ * protected by the queue lock of the containing bfqd.
  */
 struct bfq_queue {
 	/* reference counter */
@@ -395,6 +405,9 @@ struct bfq_queue {
 	 * the woken queues when this queue exits.
 	 */
 	struct hlist_head woken_list;
+
+	/* index of the actuator this queue is associated with */
+	unsigned int actuator_idx;
 };
 
 /**
@@ -403,8 +416,17 @@ struct bfq_queue {
 struct bfq_io_cq {
 	/* associated io_cq structure */
 	struct io_cq icq; /* must be the first member */
-	/* array of two process queues, the sync and the async */
-	struct bfq_queue *bfqq[2];
+	/*
+	 * Matrix of associated process queues: first row for async
+	 * queues, second row sync queues. Each row contains one
+	 * column for each actuator. An I/O request generated by the
+	 * process is inserted into the queue pointed by bfqq[i][j] if
+	 * the request is to be served by the j-th actuator of the
+	 * drive, where i==0 or i==1, depending on whether the request
+	 * is async or sync. So there is a distinct queue for each
+	 * actuator.
+	 */
+	struct bfq_queue *bfqq[2][BFQ_MAX_ACTUATORS];
 	/* per (request_queue, blkcg) ioprio */
 	int ioprio;
 #ifdef CONFIG_BFQ_GROUP_IOSCHED
@@ -768,6 +790,13 @@ struct bfq_data {
 	 */
 	unsigned int word_depths[2][2];
 	unsigned int full_depth_shift;
+
+	/*
+	 * Number of independent actuators. This is equal to 1 in
+	 * case of single-actuator drives.
+	 */
+	unsigned int num_actuators;
+
 };
 
 enum bfqq_state_flags {
@@ -964,8 +993,10 @@ struct bfq_group {
 
 extern const int bfq_timeout;
 
-struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync);
-void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync);
+struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync,
+				unsigned int actuator_idx);
+void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, bool is_sync,
+				unsigned int actuator_idx);
 struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic);
 void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq);
 void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,



2025-03-05 17:47 ` [PATCH 6.1 092/176] block, bfq: fix bfqq uaf in bfq_limit_depth() Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 093/176] media: mediatek: vcodec: Fix H264 multi stateless decoder smatch warning Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 094/176] spi: atmel-quadspi: Avoid overwriting delay register settings Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 095/176] spi: atmel-quadspi: Fix wrong register value written to MR Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 096/176] netfilter: allow exp not to be removed in nf_ct_find_expectation Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 097/176] RDMA/mlx5: Don't keep umrable page_shift in cache entries Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 098/176] RDMA/mlx5: Remove implicit ODP cache entry Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 099/176] RDMA/mlx5: Change the cache structure to an RB-tree Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 100/176] RDMA/mlx5: Introduce mlx5r_cache_rb_key Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 101/176] RDMA/mlx5: Cache all user cacheable mkeys on dereg MR flow Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 102/176] RDMA/mlx5: Add work to remove temporary entries from the cache Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 103/176] RDMA/mlx5: Implement mkeys management via LIFO queue Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 104/176] RDMA/mlx5: Fix the recovery flow of the UMR QP Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 105/176] IB/mlx5: Set and get correct qp_num for a DCT QP Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 106/176] ovl: fix UAF in ovl_dentry_update_reval by moving dput() in ovl_link_up Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 107/176] SUNRPC: convert RPC_TASK_* constants to enum Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 108/176] SUNRPC: Prevent looping due to rpc_signal_task() races Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 109/176] RDMA/mlx: Calling qp event handler in workqueue context Greg Kroah-Hartman
2025-03-05 17:47 ` [PATCH 6.1 110/176] RDMA/mlx5: Reduce QP table exposure Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 111/176] IB/core: Add support for XDR link speed Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 112/176] RDMA/mlx5: Fix AH static rate parsing Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 113/176] scsi: core: Clear driver private data when retrying request Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 114/176] RDMA/mlx5: Fix bind QP error cleanup flow Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 115/176] sunrpc: suppress warnings for unused procfs functions Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 116/176] ALSA: usb-audio: Avoid dropping MIDI events at closing multiple ports Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 117/176] Bluetooth: L2CAP: Fix L2CAP_ECRED_CONN_RSP response Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 118/176] afs: remove variable nr_servers Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 119/176] afs: Make it possible to find the volumes that are using a server Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 120/176] afs: Fix the server_list to unuse a displaced server rather than putting it Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 121/176] net: loopback: Avoid sending IP packets without an Ethernet header Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 122/176] net: set the minimum for net_hotdata.netdev_budget_usecs Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 123/176] net/ipv4: add tracepoint for icmp_send Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 124/176] ipv4: icmp: Pass full DS field to ip_route_input() Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 125/176] ipv4: icmp: Unmask upper DSCP bits in icmp_route_lookup() Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 126/176] ipvlan: Unmask upper DSCP bits in ipvlan_process_v4_outbound() Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 127/176] ipv4: Convert icmp_route_lookup() to dscp_t Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 128/176] ipv4: Convert ip_route_input() " Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 129/176] ipvlan: Prepare ipvlan_process_v4_outbound() to future .flowi4_tos conversion Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 130/176] ipvlan: ensure network headers are in skb linear part Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 131/176] net: cadence: macb: Synchronize stats calculations Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 132/176] ASoC: es8328: fix route from DAC to output Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 133/176] ipvs: Always clear ipvs_property flag in skb_scrub_packet() Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 134/176] tcp: Defer ts_recent changes until req is owned Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 135/176] net: Clear old fragment checksum value in napi_reuse_skb Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 136/176] net: mvpp2: cls: Fixed Non IP flow, with vlan tag flow defination Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 137/176] net/mlx5: IRQ, Fix null string in debug print Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 138/176] include: net: add static inline dst_dev_overhead() to dst.h Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 139/176] net: ipv6: seg6_iptunnel: mitigate 2-realloc issue Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 140/176] net: ipv6: fix dst ref loop on input in seg6 lwt Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 141/176] net: ipv6: rpl_iptunnel: mitigate 2-realloc issue Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 142/176] net: ipv6: fix dst ref loop on input in rpl lwt Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 143/176] mm: Don't pin ZERO_PAGE in pin_user_pages() Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 144/176] uprobes: Reject the shared zeropage in uprobe_write_opcode() Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 145/176] io_uring/net: save msg_control for compat Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 146/176] x86/CPU: Fix warm boot hang regression on AMD SC1100 SoC systems Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 147/176] phy: rockchip: naneng-combphy: compatible reset with old DT Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 148/176] tracing: Fix bad hist from corrupting named_triggers list Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 149/176] ftrace: Avoid potential division by zero in function_stat_show() Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 150/176] ALSA: usb-audio: Re-add sample rate quirk for Pioneer DJM-900NXS2 Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 151/176] perf/x86: Fix low freqency setting issue Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 152/176] perf/core: Fix low freq setting via IOC_PERIOD Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 153/176] drm/amd/display: Disable PSR-SU on eDP panels Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 154/176] drm/amd/display: Fix HPD after gpu reset Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 155/176] i2c: npcm: disable interrupt enable bit before devm_request_irq Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 156/176] usbnet: gl620a: fix endpoint checking in genelink_bind() Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 157/176] net: enetc: fix the off-by-one issue in enetc_map_tx_buffs() Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 158/176] net: enetc: keep track of correct Tx BD count in enetc_map_tx_tso_buffs() Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 159/176] net: enetc: update UDP checksum when updating originTimestamp field Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 160/176] net: enetc: correct the xdp_tx statistics Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 161/176] net: enetc: fix the off-by-one issue in enetc_map_tx_tso_buffs() Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 162/176] phy: tegra: xusb: reset VBUS & ID OVERRIDE Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 163/176] phy: exynos5-usbdrd: fix MPLL_MULTIPLIER and SSC_REFCLKSEL masks in refclk Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 164/176] mptcp: always handle address removal under msk socket lock Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 165/176] mptcp: reset when MPTCP opts are dropped after join Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 166/176] vmlinux.lds: Ensure that const vars with relocations are mapped R/O Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 167/176] sched/core: Prevent rescheduling when interrupts are disabled Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 168/176] riscv/futex: sign extend compare value in atomic cmpxchg Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 169/176] drm/amd/display: fixed integer types and null check locations Greg Kroah-Hartman
2025-03-05 17:48 ` [PATCH 6.1 170/176] amdgpu/pm/legacy: fix suspend/resume issues Greg Kroah-Hartman
2025-03-05 17:49 ` [PATCH 6.1 171/176] intel_idle: Handle older CPUs, which stop the TSC in deeper C states, correctly Greg Kroah-Hartman
2025-03-05 17:49 ` [PATCH 6.1 172/176] ptrace: Introduce exception_ip arch hook Greg Kroah-Hartman
2025-03-05 17:49 ` [PATCH 6.1 173/176] mm/memory: Use exception ip to search exception tables Greg Kroah-Hartman
2025-03-05 17:49 ` [PATCH 6.1 174/176] Squashfs: check the inode number is not the invalid value of zero Greg Kroah-Hartman
2025-03-10  1:56   ` Xiangyu Chen
2025-03-05 17:49 ` [PATCH 6.1 175/176] pfifo_tail_enqueue: Drop new packet when sch->limit == 0 Greg Kroah-Hartman
2025-03-05 17:49 ` [PATCH 6.1 176/176] media: mtk-vcodec: potential null pointer deference in SCP Greg Kroah-Hartman
2025-03-05 19:37 ` [PATCH 6.1 000/176] 6.1.130-rc1 review Pavel Machek
2025-03-06  1:09 ` SeongJae Park
2025-03-06  1:19 ` Peter Schneider
2025-03-06  8:23 ` Ron Economos
2025-03-06 13:15 ` Mark Brown
2025-03-06 14:52 ` Naresh Kamboju
2025-03-06 16:03 ` Shuah Khan

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
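
  As a concrete sketch of that first method (the file name
  "thread.mbox" and the mutt client are illustrative choices,
  not something the archive mandates):

```shell
# Sanity-check that the saved file really is an mbox before importing it:
# a valid mbox starts each message with a "From " separator line.
# (The sample message below just stands in for the real download.)
printf 'From gregkh@linuxfoundation.org Wed Mar  5 17:47:40 2025\n' >  thread.mbox
printf 'Subject: [PATCH 6.1 091/176] block, bfq: ...\n\nbody\n'     >> thread.mbox
head -n 1 thread.mbox | grep -q '^From ' && echo "looks like an mbox"
# A client such as mutt can then open the file directly and
# reply-to-all from inside it:
#   mutt -f thread.mbox
```

  Any mbox-capable client works the same way; mutt is only one example.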

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20250305174509.116478991@linuxfoundation.org \
    --to=gregkh@linuxfoundation.org \
    --cc=axboe@kernel.dk \
    --cc=carmine@carminezacc.com \
    --cc=damien.lemoal@opensource.wdc.com \
    --cc=felicigb@gmail.com \
    --cc=hagarhem@amazon.com \
    --cc=paolo.valente@linaro.org \
    --cc=patches@lists.linux.dev \
    --cc=stable@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank
line before the message body.
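
Those header requirements can also be checked mechanically. A minimal
sketch (the file name "reply.txt" and the body line are illustrative;
the Message-ID is the one from this mail):

```shell
# Build a reply skeleton with the pieces the instructions above require:
# a Subject: header, an In-Reply-To: header, and a blank line before the body.
msgid="20250305174509.116478991@linuxfoundation.org"
{
  printf 'Subject: Re: [PATCH 6.1 091/176] block, bfq: split sync bfq_queues on a per-actuator basis\n'
  printf 'In-Reply-To: <%s>\n' "$msgid"
  printf '\n'                  # blank line separating headers from body
  printf 'Reviewed-by: Your Name <you@example.org>\n'
} > reply.txt
grep -q "^In-Reply-To: <$msgid>" reply.txt && echo "headers look sane"
```

git send-email builds these headers for you; the sketch only shows what
the archive expects a hand-written reply to contain.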
This is a public inbox; see mirroring instructions for how to clone
and mirror all data and code used for this inbox.