* [PATCH v2 0/4] ublk: support device recovery without I/O queueing
@ 2024-09-17 0:21 Uday Shankar
2024-09-17 0:21 ` [PATCH v2 1/4] ublk: check recovery flags for validity Uday Shankar
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: Uday Shankar @ 2024-09-17 0:21 UTC
To: Ming Lei, Jens Axboe; +Cc: Uday Shankar, linux-block
ublk currently supports the following behaviors on ublk server exit:
A: outstanding I/Os get errors, subsequently issued I/Os get errors
B: outstanding I/Os get errors, subsequently issued I/Os queue
C: outstanding I/Os get reissued, subsequently issued I/Os queue
and the following behaviors for recovery of preexisting block devices by
a future incarnation of the ublk server:
1: ublk devices stopped on ublk server exit (no recovery possible)
2: ublk devices are recoverable using start/end_recovery commands
The userspace interface allows selection of combinations of these
behaviors using flags specified at device creation time, namely:
default behavior: A + 1
UBLK_F_USER_RECOVERY: B + 2
UBLK_F_USER_RECOVERY|UBLK_F_USER_RECOVERY_REISSUE: C + 2
A + 2 is a currently unsupported behavior. This patch series aims to add
support for it.
Uday Shankar (4):
ublk: check recovery flags for validity
ublk: refactor recovery configuration flag helpers
ublk: merge stop_work and quiesce_work
ublk: support device recovery without I/O queueing
drivers/block/ublk_drv.c | 187 +++++++++++++++++++++++-----------
include/uapi/linux/ublk_cmd.h | 18 ++++
2 files changed, 145 insertions(+), 60 deletions(-)
base-commit: a46c4336b17af3badf37b3002c8421a21f8db6c7
--
2.34.1
* [PATCH v2 1/4] ublk: check recovery flags for validity
From: Uday Shankar @ 2024-09-17 0:21 UTC
To: Ming Lei, Jens Axboe; +Cc: Uday Shankar, linux-block
Setting UBLK_F_USER_RECOVERY_REISSUE without also setting
UBLK_F_USER_RECOVERY is currently silently equivalent to not setting any
recovery flags at all, even though that's obviously not intended. Check
for this case and fail add_dev (with a paranoid warning to aid debugging
any program which might rely on the old behavior) with EINVAL if it is
detected.
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
---
Changes since v1 (https://lore.kernel.org/linux-block/20240617194451.435445-2-ushankar@purestorage.com/):
- Replace switch statement with if statement
drivers/block/ublk_drv.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index bca06bfb4bc3..5e04a0fcd0b7 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -62,6 +62,9 @@
| UBLK_F_USER_COPY \
| UBLK_F_ZONED)
+#define UBLK_F_ALL_RECOVERY_FLAGS (UBLK_F_USER_RECOVERY \
+ | UBLK_F_USER_RECOVERY_REISSUE)
+
/* All UBLK_PARAM_TYPE_* should be included here */
#define UBLK_PARAM_TYPE_ALL \
(UBLK_PARAM_TYPE_BASIC | UBLK_PARAM_TYPE_DISCARD | \
@@ -2373,6 +2376,14 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
else if (!(info.flags & UBLK_F_UNPRIVILEGED_DEV))
return -EPERM;
+ /* forbid nonsense combinations of recovery flags */
+ if ((info.flags & UBLK_F_USER_RECOVERY_REISSUE) &&
+ !(info.flags & UBLK_F_USER_RECOVERY)) {
+ pr_warn("%s: invalid recovery flags %llx\n", __func__,
+ info.flags & UBLK_F_ALL_RECOVERY_FLAGS);
+ return -EINVAL;
+ }
+
/*
* unprivileged device can't be trusted, but RECOVERY and
* RECOVERY_REISSUE still may hang error handling, so can't
--
2.34.1
* [PATCH v2 2/4] ublk: refactor recovery configuration flag helpers
From: Uday Shankar @ 2024-09-17 0:21 UTC
To: Ming Lei, Jens Axboe; +Cc: Uday Shankar, linux-block
ublk currently supports the following behaviors on ublk server exit:
A: outstanding I/Os get errors, subsequently issued I/Os get errors
B: outstanding I/Os get errors, subsequently issued I/Os queue
C: outstanding I/Os get reissued, subsequently issued I/Os queue
and the following behaviors for recovery of preexisting block devices by
a future incarnation of the ublk server:
1: ublk devices stopped on ublk server exit (no recovery possible)
2: ublk devices are recoverable using start/end_recovery commands
The userspace interface allows selection of combinations of these
behaviors using flags specified at device creation time, namely:
default behavior: A + 1
UBLK_F_USER_RECOVERY: B + 2
UBLK_F_USER_RECOVERY|UBLK_F_USER_RECOVERY_REISSUE: C + 2
We can't easily change the userspace interface to allow independent
selection of one of {A, B, C} and one of {1, 2}, but we can refactor the
internal helpers which test for the flags. Replace the existing helpers
with the following set:
ublk_nosrv_should_reissue_outstanding: tests for behavior C
ublk_nosrv_[dev_]should_queue_io: tests for behavior B
ublk_nosrv_should_stop_dev: tests for behavior 1
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
---
Changes since v1 (https://lore.kernel.org/linux-block/20240617194451.435445-3-ushankar@purestorage.com/):
- Make the fast-path test in ublk_queue_rq access the queue-local copy
of the device flags.
drivers/block/ublk_drv.c | 63 +++++++++++++++++++++++++++-------------
1 file changed, 43 insertions(+), 20 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 5e04a0fcd0b7..b069f4d2b9d2 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -675,22 +675,45 @@ static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub, int q_id)
PAGE_SIZE);
}
-static inline bool ublk_queue_can_use_recovery_reissue(
- struct ublk_queue *ubq)
+/*
+ * Should I/O outstanding to the ublk server when it exits be reissued?
+ * If not, outstanding I/O will get errors.
+ */
+static inline bool ublk_nosrv_should_reissue_outstanding(struct ublk_device *ub)
{
- return (ubq->flags & UBLK_F_USER_RECOVERY) &&
- (ubq->flags & UBLK_F_USER_RECOVERY_REISSUE);
+ return (ub->dev_info.flags & UBLK_F_USER_RECOVERY) &&
+ (ub->dev_info.flags & UBLK_F_USER_RECOVERY_REISSUE);
}
-static inline bool ublk_queue_can_use_recovery(
- struct ublk_queue *ubq)
+/*
+ * Should I/O issued while there is no ublk server be queued? If not,
+ * I/O issued while there is no ublk server will get errors.
+ */
+static inline bool ublk_nosrv_dev_should_queue_io(struct ublk_device *ub)
+{
+ return ub->dev_info.flags & UBLK_F_USER_RECOVERY;
+}
+
+/*
+ * Same as ublk_nosrv_dev_should_queue_io, but uses a queue-local copy
+ * of the device flags for smaller cache footprint - better for fast
+ * paths.
+ */
+static inline bool ublk_nosrv_should_queue_io(struct ublk_queue *ubq)
{
return ubq->flags & UBLK_F_USER_RECOVERY;
}
-static inline bool ublk_can_use_recovery(struct ublk_device *ub)
+/*
+ * Should ublk devices be stopped (i.e. no recovery possible) when the
+ * ublk server exits? If not, devices can be used again by a future
+ * incarnation of a ublk server via the start_recovery/end_recovery
+ * commands.
+ */
+static inline bool ublk_nosrv_should_stop_dev(struct ublk_device *ub)
{
- return ub->dev_info.flags & UBLK_F_USER_RECOVERY;
+ return (!(ub->dev_info.flags & UBLK_F_USER_RECOVERY)) &&
+ (!(ub->dev_info.flags & UBLK_F_USER_RECOVERY_REISSUE));
}
static void ublk_free_disk(struct gendisk *disk)
@@ -1066,7 +1089,7 @@ static void __ublk_fail_req(struct ublk_queue *ubq, struct ublk_io *io,
{
WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_ACTIVE);
- if (ublk_queue_can_use_recovery_reissue(ubq))
+ if (ublk_nosrv_should_reissue_outstanding(ubq->dev))
blk_mq_requeue_request(req, false);
else
ublk_put_req_ref(ubq, req);
@@ -1094,7 +1117,7 @@ static inline void __ublk_abort_rq(struct ublk_queue *ubq,
struct request *rq)
{
/* We cannot process this rq so just requeue it. */
- if (ublk_queue_can_use_recovery(ubq))
+ if (ublk_nosrv_dev_should_queue_io(ubq->dev))
blk_mq_requeue_request(rq, false);
else
blk_mq_end_request(rq, BLK_STS_IOERR);
@@ -1239,10 +1262,10 @@ static enum blk_eh_timer_return ublk_timeout(struct request *rq)
struct ublk_device *ub = ubq->dev;
if (ublk_abort_requests(ub, ubq)) {
- if (ublk_can_use_recovery(ub))
- schedule_work(&ub->quiesce_work);
- else
+ if (ublk_nosrv_should_stop_dev(ub))
schedule_work(&ub->stop_work);
+ else
+ schedule_work(&ub->quiesce_work);
}
return BLK_EH_DONE;
}
@@ -1271,7 +1294,7 @@ static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
* Note: force_abort is guaranteed to be seen because it is set
* before request queue is unquiesced.
*/
- if (ublk_queue_can_use_recovery(ubq) && unlikely(ubq->force_abort))
+ if (ublk_nosrv_should_queue_io(ubq) && unlikely(ubq->force_abort))
return BLK_STS_IOERR;
if (unlikely(ubq->canceling)) {
@@ -1492,10 +1515,10 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
ublk_cancel_cmd(ubq, io, issue_flags);
if (need_schedule) {
- if (ublk_can_use_recovery(ub))
- schedule_work(&ub->quiesce_work);
- else
+ if (ublk_nosrv_should_stop_dev(ub))
schedule_work(&ub->stop_work);
+ else
+ schedule_work(&ub->quiesce_work);
}
}
@@ -1600,7 +1623,7 @@ static void ublk_stop_dev(struct ublk_device *ub)
mutex_lock(&ub->mutex);
if (ub->dev_info.state == UBLK_S_DEV_DEAD)
goto unlock;
- if (ublk_can_use_recovery(ub)) {
+ if (ublk_nosrv_dev_should_queue_io(ub)) {
if (ub->dev_info.state == UBLK_S_DEV_LIVE)
__ublk_quiesce_dev(ub);
ublk_unquiesce_dev(ub);
@@ -2702,7 +2725,7 @@ static int ublk_ctrl_start_recovery(struct ublk_device *ub,
int i;
mutex_lock(&ub->mutex);
- if (!ublk_can_use_recovery(ub))
+ if (ublk_nosrv_should_stop_dev(ub))
goto out_unlock;
if (!ub->nr_queues_ready)
goto out_unlock;
@@ -2755,7 +2778,7 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
__func__, ub->dev_info.nr_hw_queues, header->dev_id);
mutex_lock(&ub->mutex);
- if (!ublk_can_use_recovery(ub))
+ if (ublk_nosrv_should_stop_dev(ub))
goto out_unlock;
if (ub->dev_info.state != UBLK_S_DEV_QUIESCED) {
--
2.34.1
* [PATCH v2 3/4] ublk: merge stop_work and quiesce_work
From: Uday Shankar @ 2024-09-17 0:21 UTC
To: Ming Lei, Jens Axboe; +Cc: Uday Shankar, linux-block
Save some lines by merging stop_work and quiesce_work into nosrv_work,
which looks at the recovery flags and does the right thing when the "no
ublk server" condition is detected.
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 64 ++++++++++++++++------------------------
1 file changed, 25 insertions(+), 39 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index b069f4d2b9d2..c7a0493b3545 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -182,8 +182,7 @@ struct ublk_device {
unsigned int nr_queues_ready;
unsigned int nr_privileged_daemon;
- struct work_struct quiesce_work;
- struct work_struct stop_work;
+ struct work_struct nosrv_work;
};
/* header of ublk_params */
@@ -1262,10 +1261,7 @@ static enum blk_eh_timer_return ublk_timeout(struct request *rq)
struct ublk_device *ub = ubq->dev;
if (ublk_abort_requests(ub, ubq)) {
- if (ublk_nosrv_should_stop_dev(ub))
- schedule_work(&ub->stop_work);
- else
- schedule_work(&ub->quiesce_work);
+ schedule_work(&ub->nosrv_work);
}
return BLK_EH_DONE;
}
@@ -1515,10 +1511,7 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
ublk_cancel_cmd(ubq, io, issue_flags);
if (need_schedule) {
- if (ublk_nosrv_should_stop_dev(ub))
- schedule_work(&ub->stop_work);
- else
- schedule_work(&ub->quiesce_work);
+ schedule_work(&ub->nosrv_work);
}
}
@@ -1581,20 +1574,6 @@ static void __ublk_quiesce_dev(struct ublk_device *ub)
ub->dev_info.state = UBLK_S_DEV_QUIESCED;
}
-static void ublk_quiesce_work_fn(struct work_struct *work)
-{
- struct ublk_device *ub =
- container_of(work, struct ublk_device, quiesce_work);
-
- mutex_lock(&ub->mutex);
- if (ub->dev_info.state != UBLK_S_DEV_LIVE)
- goto unlock;
- __ublk_quiesce_dev(ub);
- unlock:
- mutex_unlock(&ub->mutex);
- ublk_cancel_dev(ub);
-}
-
static void ublk_unquiesce_dev(struct ublk_device *ub)
{
int i;
@@ -1643,6 +1622,25 @@ static void ublk_stop_dev(struct ublk_device *ub)
ublk_cancel_dev(ub);
}
+static void ublk_nosrv_work(struct work_struct *work)
+{
+ struct ublk_device *ub =
+ container_of(work, struct ublk_device, nosrv_work);
+
+ if (ublk_nosrv_should_stop_dev(ub)) {
+ ublk_stop_dev(ub);
+ return;
+ }
+
+ mutex_lock(&ub->mutex);
+ if (ub->dev_info.state != UBLK_S_DEV_LIVE)
+ goto unlock;
+ __ublk_quiesce_dev(ub);
+ unlock:
+ mutex_unlock(&ub->mutex);
+ ublk_cancel_dev(ub);
+}
+
/* device can only be started after all IOs are ready */
static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq)
{
@@ -2157,14 +2155,6 @@ static int ublk_add_chdev(struct ublk_device *ub)
return ret;
}
-static void ublk_stop_work_fn(struct work_struct *work)
-{
- struct ublk_device *ub =
- container_of(work, struct ublk_device, stop_work);
-
- ublk_stop_dev(ub);
-}
-
/* align max io buffer size with PAGE_SIZE */
static void ublk_align_max_io_size(struct ublk_device *ub)
{
@@ -2189,8 +2179,7 @@ static int ublk_add_tag_set(struct ublk_device *ub)
static void ublk_remove(struct ublk_device *ub)
{
ublk_stop_dev(ub);
- cancel_work_sync(&ub->stop_work);
- cancel_work_sync(&ub->quiesce_work);
+ cancel_work_sync(&ub->nosrv_work);
cdev_device_del(&ub->cdev, &ub->cdev_dev);
ublk_put_device(ub);
ublks_added--;
@@ -2450,8 +2439,7 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
goto out_unlock;
mutex_init(&ub->mutex);
spin_lock_init(&ub->lock);
- INIT_WORK(&ub->quiesce_work, ublk_quiesce_work_fn);
- INIT_WORK(&ub->stop_work, ublk_stop_work_fn);
+ INIT_WORK(&ub->nosrv_work, ublk_nosrv_work);
ret = ublk_alloc_dev_number(ub, header->dev_id);
if (ret < 0)
@@ -2586,9 +2574,7 @@ static inline void ublk_ctrl_cmd_dump(struct io_uring_cmd *cmd)
static int ublk_ctrl_stop_dev(struct ublk_device *ub)
{
ublk_stop_dev(ub);
- cancel_work_sync(&ub->stop_work);
- cancel_work_sync(&ub->quiesce_work);
-
+ cancel_work_sync(&ub->nosrv_work);
return 0;
}
--
2.34.1
* [PATCH v2 4/4] ublk: support device recovery without I/O queueing
From: Uday Shankar @ 2024-09-17 0:21 UTC
To: Ming Lei, Jens Axboe; +Cc: Uday Shankar, linux-block
ublk currently supports the following behaviors on ublk server exit:
A: outstanding I/Os get errors, subsequently issued I/Os get errors
B: outstanding I/Os get errors, subsequently issued I/Os queue
C: outstanding I/Os get reissued, subsequently issued I/Os queue
and the following behaviors for recovery of preexisting block devices by
a future incarnation of the ublk server:
1: ublk devices stopped on ublk server exit (no recovery possible)
2: ublk devices are recoverable using start/end_recovery commands
The userspace interface allows selection of combinations of these
behaviors using flags specified at device creation time, namely:
default behavior: A + 1
UBLK_F_USER_RECOVERY: B + 2
UBLK_F_USER_RECOVERY|UBLK_F_USER_RECOVERY_REISSUE: C + 2
The behavior A + 2 is currently unsupported. Add support for this
behavior under the new flag combination
UBLK_F_USER_RECOVERY|UBLK_F_USER_RECOVERY_FAIL_IO.
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
---
Changes since v1 (https://lore.kernel.org/linux-block/20240617194451.435445-5-ushankar@purestorage.com/):
- Change flag name from UBLK_F_USER_RECOVERY_NOQUEUE to
UBLK_F_USER_RECOVERY_FAIL_IO
- Require UBLK_F_USER_RECOVERY to be set along with the new flag for it
to be effective. This makes more sense, as UBLK_F_USER_RECOVERY
essentially selects behavior 2 above (and not setting
UBLK_F_USER_RECOVERY selects behavior 1).
- Add per-ublk-queue flag which is true iff device state is
UBLK_S_DEV_FAIL_IO. This lets us avoid fetching the device in the fast
path.
drivers/block/ublk_drv.c | 75 ++++++++++++++++++++++++++++-------
include/uapi/linux/ublk_cmd.h | 18 +++++++++
2 files changed, 79 insertions(+), 14 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index c7a0493b3545..548043eeefb9 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -60,10 +60,12 @@
| UBLK_F_UNPRIVILEGED_DEV \
| UBLK_F_CMD_IOCTL_ENCODE \
| UBLK_F_USER_COPY \
- | UBLK_F_ZONED)
+ | UBLK_F_ZONED \
+ | UBLK_F_USER_RECOVERY_FAIL_IO)
#define UBLK_F_ALL_RECOVERY_FLAGS (UBLK_F_USER_RECOVERY \
- | UBLK_F_USER_RECOVERY_REISSUE)
+ | UBLK_F_USER_RECOVERY_REISSUE \
+ | UBLK_F_USER_RECOVERY_FAIL_IO)
/* All UBLK_PARAM_TYPE_* should be included here */
#define UBLK_PARAM_TYPE_ALL \
@@ -146,6 +148,7 @@ struct ublk_queue {
bool force_abort;
bool timeout;
bool canceling;
+ bool fail_io; /* copy of dev->state == UBLK_S_DEV_FAIL_IO */
unsigned short nr_io_ready; /* how many ios setup */
spinlock_t cancel_lock;
struct ublk_device *dev;
@@ -690,7 +693,8 @@ static inline bool ublk_nosrv_should_reissue_outstanding(struct ublk_device *ub)
*/
static inline bool ublk_nosrv_dev_should_queue_io(struct ublk_device *ub)
{
- return ub->dev_info.flags & UBLK_F_USER_RECOVERY;
+ return (ub->dev_info.flags & UBLK_F_USER_RECOVERY) &&
+ !(ub->dev_info.flags & UBLK_F_USER_RECOVERY_FAIL_IO);
}
/*
@@ -700,7 +704,8 @@ static inline bool ublk_nosrv_dev_should_queue_io(struct ublk_device *ub)
*/
static inline bool ublk_nosrv_should_queue_io(struct ublk_queue *ubq)
{
- return ubq->flags & UBLK_F_USER_RECOVERY;
+ return (ubq->flags & UBLK_F_USER_RECOVERY) &&
+ !(ubq->flags & UBLK_F_USER_RECOVERY_FAIL_IO);
}
/*
@@ -712,7 +717,14 @@ static inline bool ublk_nosrv_should_queue_io(struct ublk_queue *ubq)
static inline bool ublk_nosrv_should_stop_dev(struct ublk_device *ub)
{
return (!(ub->dev_info.flags & UBLK_F_USER_RECOVERY)) &&
- (!(ub->dev_info.flags & UBLK_F_USER_RECOVERY_REISSUE));
+ (!(ub->dev_info.flags & UBLK_F_USER_RECOVERY_REISSUE)) &&
+ (!(ub->dev_info.flags & UBLK_F_USER_RECOVERY_FAIL_IO));
+}
+
+static inline bool ublk_dev_in_recoverable_state(struct ublk_device *ub)
+{
+ return ub->dev_info.state == UBLK_S_DEV_QUIESCED ||
+ ub->dev_info.state == UBLK_S_DEV_FAIL_IO;
}
static void ublk_free_disk(struct gendisk *disk)
@@ -1276,6 +1288,10 @@ static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
struct request *rq = bd->rq;
blk_status_t res;
+ if (unlikely(ubq->fail_io)) {
+ return BLK_STS_TARGET;
+ }
+
/* fill iod to slot in io cmd buffer */
res = ublk_setup_iod(ubq, rq);
if (unlikely(res != BLK_STS_OK))
@@ -1626,6 +1642,7 @@ static void ublk_nosrv_work(struct work_struct *work)
{
struct ublk_device *ub =
container_of(work, struct ublk_device, nosrv_work);
+ int i;
if (ublk_nosrv_should_stop_dev(ub)) {
ublk_stop_dev(ub);
@@ -1635,7 +1652,18 @@ static void ublk_nosrv_work(struct work_struct *work)
mutex_lock(&ub->mutex);
if (ub->dev_info.state != UBLK_S_DEV_LIVE)
goto unlock;
- __ublk_quiesce_dev(ub);
+
+ if (ublk_nosrv_dev_should_queue_io(ub)) {
+ __ublk_quiesce_dev(ub);
+ } else {
+ blk_mq_quiesce_queue(ub->ub_disk->queue);
+ for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
+ ublk_get_queue(ub, i)->fail_io = true;
+ }
+ blk_mq_unquiesce_queue(ub->ub_disk->queue);
+ ub->dev_info.state = UBLK_S_DEV_FAIL_IO;
+ }
+
unlock:
mutex_unlock(&ub->mutex);
ublk_cancel_dev(ub);
@@ -2389,8 +2417,13 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
return -EPERM;
/* forbid nonsense combinations of recovery flags */
- if ((info.flags & UBLK_F_USER_RECOVERY_REISSUE) &&
- !(info.flags & UBLK_F_USER_RECOVERY)) {
+ switch (info.flags & UBLK_F_ALL_RECOVERY_FLAGS) {
+ case 0:
+ case UBLK_F_USER_RECOVERY:
+ case (UBLK_F_USER_RECOVERY | UBLK_F_USER_RECOVERY_REISSUE):
+ case (UBLK_F_USER_RECOVERY | UBLK_F_USER_RECOVERY_FAIL_IO):
+ break;
+ default:
pr_warn("%s: invalid recovery flags %llx\n", __func__,
info.flags & UBLK_F_ALL_RECOVERY_FLAGS);
return -EINVAL;
@@ -2722,14 +2755,18 @@ static int ublk_ctrl_start_recovery(struct ublk_device *ub,
* and related io_uring ctx is freed so file struct of /dev/ublkcX is
* released.
*
+ * and one of the following holds
+ *
* (2) UBLK_S_DEV_QUIESCED is set, which means the quiesce_work:
* (a)has quiesced request queue
* (b)has requeued every inflight rqs whose io_flags is ACTIVE
* (c)has requeued/aborted every inflight rqs whose io_flags is NOT ACTIVE
(d)has completed/canceled all ioucmds owned by the dying process
+ *
+ * (3) UBLK_S_DEV_FAIL_IO is set, which means the queue is not
+ * quiesced, but all I/O is being immediately errored
*/
- if (test_bit(UB_STATE_OPEN, &ub->state) ||
- ub->dev_info.state != UBLK_S_DEV_QUIESCED) {
+ if (test_bit(UB_STATE_OPEN, &ub->state) || !ublk_dev_in_recoverable_state(ub)) {
ret = -EBUSY;
goto out_unlock;
}
@@ -2753,6 +2790,7 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
int ublksrv_pid = (int)header->data[0];
int ret = -EINVAL;
+ int i;
pr_devel("%s: Waiting for new ubq_daemons(nr: %d) are ready, dev id %d...\n",
__func__, ub->dev_info.nr_hw_queues, header->dev_id);
@@ -2767,18 +2805,27 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
if (ublk_nosrv_should_stop_dev(ub))
goto out_unlock;
- if (ub->dev_info.state != UBLK_S_DEV_QUIESCED) {
+ if (!ublk_dev_in_recoverable_state(ub)) {
ret = -EBUSY;
goto out_unlock;
}
ub->dev_info.ublksrv_pid = ublksrv_pid;
pr_devel("%s: new ublksrv_pid %d, dev id %d\n",
__func__, ublksrv_pid, header->dev_id);
+
+ blk_mq_quiesce_queue(ub->ub_disk->queue);
+ for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
+ ublk_get_queue(ub, i)->fail_io = false;
+ }
blk_mq_unquiesce_queue(ub->ub_disk->queue);
- pr_devel("%s: queue unquiesced, dev id %d.\n",
- __func__, header->dev_id);
- blk_mq_kick_requeue_list(ub->ub_disk->queue);
ub->dev_info.state = UBLK_S_DEV_LIVE;
+ if (ublk_nosrv_dev_should_queue_io(ub)) {
+ blk_mq_unquiesce_queue(ub->ub_disk->queue);
+ pr_devel("%s: queue unquiesced, dev id %d.\n",
+ __func__, header->dev_id);
+ blk_mq_kick_requeue_list(ub->ub_disk->queue);
+ }
+
ret = 0;
out_unlock:
mutex_unlock(&ub->mutex);
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index c8dc5f8ea699..a2b3ea344639 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -147,8 +147,18 @@
*/
#define UBLK_F_NEED_GET_DATA (1UL << 2)
+/*
+ * - Block devices are recoverable if ublk server exits and restarts
+ * - Outstanding I/O when ublk server exits is met with errors
+ * - I/O issued while there is no ublk server is queued
+ */
#define UBLK_F_USER_RECOVERY (1UL << 3)
+/*
+ * - Block devices are recoverable if ublk server exits and restarts
+ * - Outstanding I/O when ublk server exits is reissued
+ * - I/O issued while there is no ublk server is queued
+ */
#define UBLK_F_USER_RECOVERY_REISSUE (1UL << 4)
/*
@@ -184,10 +194,18 @@
*/
#define UBLK_F_ZONED (1ULL << 8)
+/*
+ * - Block devices are recoverable if ublk server exits and restarts
+ * - Outstanding I/O when ublk server exits is met with errors
+ * - I/O issued while there is no ublk server is met with errors
+ */
+#define UBLK_F_USER_RECOVERY_FAIL_IO (1ULL << 9)
+
/* device state */
#define UBLK_S_DEV_DEAD 0
#define UBLK_S_DEV_LIVE 1
#define UBLK_S_DEV_QUIESCED 2
+#define UBLK_S_DEV_FAIL_IO 3
/* shipped via sqe->cmd of io_uring command */
struct ublksrv_ctrl_cmd {
--
2.34.1
* Re: [PATCH v2 1/4] ublk: check recovery flags for validity
From: Ming Lei @ 2024-09-25 3:43 UTC
To: Uday Shankar; +Cc: Jens Axboe, linux-block
On Mon, Sep 16, 2024 at 06:21:52PM -0600, Uday Shankar wrote:
> Setting UBLK_F_USER_RECOVERY_REISSUE without also setting
> UBLK_F_USER_RECOVERY is currently silently equivalent to not setting any
> recovery flags at all, even though that's obviously not intended. Check
> for this case and fail add_dev (with a paranoid warning to aid debugging
> any program which might rely on the old behavior) with EINVAL if it is
> detected.
>
> Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Thanks,
Ming
* Re: [PATCH v2 2/4] ublk: refactor recovery configuration flag helpers
From: Ming Lei @ 2024-09-25 3:54 UTC
To: Uday Shankar; +Cc: Jens Axboe, linux-block
On Mon, Sep 16, 2024 at 06:21:53PM -0600, Uday Shankar wrote:
> ublk currently supports the following behaviors on ublk server exit:
>
> A: outstanding I/Os get errors, subsequently issued I/Os get errors
> B: outstanding I/Os get errors, subsequently issued I/Os queue
> C: outstanding I/Os get reissued, subsequently issued I/Os queue
>
> and the following behaviors for recovery of preexisting block devices by
> a future incarnation of the ublk server:
>
> 1: ublk devices stopped on ublk server exit (no recovery possible)
> 2: ublk devices are recoverable using start/end_recovery commands
>
> The userspace interface allows selection of combinations of these
> behaviors using flags specified at device creation time, namely:
>
> default behavior: A + 1
> UBLK_F_USER_RECOVERY: B + 2
> UBLK_F_USER_RECOVERY|UBLK_F_USER_RECOVERY_REISSUE: C + 2
>
> We can't easily change the userspace interface to allow independent
> selection of one of {A, B, C} and one of {1, 2}, but we can refactor the
> internal helpers which test for the flags. Replace the existing helpers
> with the following set:
>
> ublk_nosrv_should_reissue_outstanding: tests for behavior C
> ublk_nosrv_[dev_]should_queue_io: tests for behavior B
> ublk_nosrv_should_stop_dev: tests for behavior 1
>
> Signed-off-by: Uday Shankar <ushankar@purestorage.com>
> ---
> Changes since v1 (https://lore.kernel.org/linux-block/20240617194451.435445-3-ushankar@purestorage.com/):
> - Make the fast-path test in ublk_queue_rq access the queue-local copy
> of the device flags.
>
> drivers/block/ublk_drv.c | 63 +++++++++++++++++++++++++++-------------
> 1 file changed, 43 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index 5e04a0fcd0b7..b069f4d2b9d2 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -675,22 +675,45 @@ static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub, int q_id)
> PAGE_SIZE);
> }
>
> -static inline bool ublk_queue_can_use_recovery_reissue(
> - struct ublk_queue *ubq)
> +/*
> + * Should I/O outstanding to the ublk server when it exits be reissued?
> + * If not, outstanding I/O will get errors.
> + */
> +static inline bool ublk_nosrv_should_reissue_outstanding(struct ublk_device *ub)
> {
> - return (ubq->flags & UBLK_F_USER_RECOVERY) &&
> - (ubq->flags & UBLK_F_USER_RECOVERY_REISSUE);
> + return (ub->dev_info.flags & UBLK_F_USER_RECOVERY) &&
> + (ub->dev_info.flags & UBLK_F_USER_RECOVERY_REISSUE);
> }
>
> -static inline bool ublk_queue_can_use_recovery(
> - struct ublk_queue *ubq)
> +/*
> + * Should I/O issued while there is no ublk server be queued? If not,
> + * I/O issued while there is no ublk server will get errors.
> + */
> +static inline bool ublk_nosrv_dev_should_queue_io(struct ublk_device *ub)
> +{
> + return ub->dev_info.flags & UBLK_F_USER_RECOVERY;
> +}
> +
> +/*
> + * Same as ublk_nosrv_dev_should_queue_io, but uses a queue-local copy
> + * of the device flags for smaller cache footprint - better for fast
> + * paths.
> + */
> +static inline bool ublk_nosrv_should_queue_io(struct ublk_queue *ubq)
> {
> return ubq->flags & UBLK_F_USER_RECOVERY;
> }
>
> -static inline bool ublk_can_use_recovery(struct ublk_device *ub)
> +/*
> + * Should ublk devices be stopped (i.e. no recovery possible) when the
> + * ublk server exits? If not, devices can be used again by a future
> + * incarnation of a ublk server via the start_recovery/end_recovery
> + * commands.
> + */
> +static inline bool ublk_nosrv_should_stop_dev(struct ublk_device *ub)
> {
> - return ub->dev_info.flags & UBLK_F_USER_RECOVERY;
> + return (!(ub->dev_info.flags & UBLK_F_USER_RECOVERY)) &&
> + (!(ub->dev_info.flags & UBLK_F_USER_RECOVERY_REISSUE));
> }
It should be enough to check UBLK_F_USER_RECOVERY only since
UBLK_F_USER_RECOVERY_REISSUE implies UBLK_F_USER_RECOVERY.
Otherwise, this patch looks fine.
Thanks,
Ming
* Re: [PATCH v2 4/4] ublk: support device recovery without I/O queueing
From: Ming Lei @ 2024-09-25 10:58 UTC
To: Uday Shankar; +Cc: Jens Axboe, linux-block
On Mon, Sep 16, 2024 at 06:21:55PM -0600, Uday Shankar wrote:
> ublk currently supports the following behaviors on ublk server exit:
>
> A: outstanding I/Os get errors, subsequently issued I/Os get errors
> B: outstanding I/Os get errors, subsequently issued I/Os queue
> C: outstanding I/Os get reissued, subsequently issued I/Os queue
>
> and the following behaviors for recovery of preexisting block devices by
> a future incarnation of the ublk server:
>
> 1: ublk devices stopped on ublk server exit (no recovery possible)
> 2: ublk devices are recoverable using start/end_recovery commands
>
> The userspace interface allows selection of combinations of these
> behaviors using flags specified at device creation time, namely:
>
> default behavior: A + 1
> UBLK_F_USER_RECOVERY: B + 2
> UBLK_F_USER_RECOVERY|UBLK_F_USER_RECOVERY_REISSUE: C + 2
>
> The behavior A + 2 is currently unsupported. Add support for this
> behavior under the new flag combination
> UBLK_F_USER_RECOVERY|UBLK_F_USER_RECOVERY_FAIL_IO.
>
> Signed-off-by: Uday Shankar <ushankar@purestorage.com>
> ---
> Changes since v1 (https://lore.kernel.org/linux-block/20240617194451.435445-5-ushankar@purestorage.com/):
> - Change flag name from UBLK_F_USER_RECOVERY_NOQUEUE to
> UBLK_F_USER_RECOVERY_FAIL_IO
> - Require UBLK_F_USER_RECOVERY to be set along with the new flag for it
> to be effective. This makes more sense, as UBLK_F_USER_RECOVERY
> essentially selects behavior 2 above (and not setting
> UBLK_F_USER_RECOVERY selects behavior 1).
> - Add per-ublk-queue flag which is true iff device state is
> UBLK_S_DEV_FAIL_IO. This lets us avoid fetching the device in the fast
> path.
>
> drivers/block/ublk_drv.c | 75 ++++++++++++++++++++++++++++-------
> include/uapi/linux/ublk_cmd.h | 18 +++++++++
> 2 files changed, 79 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index c7a0493b3545..548043eeefb9 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -60,10 +60,12 @@
> | UBLK_F_UNPRIVILEGED_DEV \
> | UBLK_F_CMD_IOCTL_ENCODE \
> | UBLK_F_USER_COPY \
> - | UBLK_F_ZONED)
> + | UBLK_F_ZONED \
> + | UBLK_F_USER_RECOVERY_FAIL_IO)
>
> #define UBLK_F_ALL_RECOVERY_FLAGS (UBLK_F_USER_RECOVERY \
> - | UBLK_F_USER_RECOVERY_REISSUE)
> + | UBLK_F_USER_RECOVERY_REISSUE \
> + | UBLK_F_USER_RECOVERY_FAIL_IO)
>
> /* All UBLK_PARAM_TYPE_* should be included here */
> #define UBLK_PARAM_TYPE_ALL \
> @@ -146,6 +148,7 @@ struct ublk_queue {
> bool force_abort;
> bool timeout;
> bool canceling;
> + bool fail_io; /* copy of dev->state == UBLK_S_DEV_FAIL_IO */
> unsigned short nr_io_ready; /* how many ios setup */
> spinlock_t cancel_lock;
> struct ublk_device *dev;
> @@ -690,7 +693,8 @@ static inline bool ublk_nosrv_should_reissue_outstanding(struct ublk_device *ub)
> */
> static inline bool ublk_nosrv_dev_should_queue_io(struct ublk_device *ub)
> {
> - return ub->dev_info.flags & UBLK_F_USER_RECOVERY;
> + return (ub->dev_info.flags & UBLK_F_USER_RECOVERY) &&
> + !(ub->dev_info.flags & UBLK_F_USER_RECOVERY_FAIL_IO);
> }
>
> /*
> @@ -700,7 +704,8 @@ static inline bool ublk_nosrv_dev_should_queue_io(struct ublk_device *ub)
> */
> static inline bool ublk_nosrv_should_queue_io(struct ublk_queue *ubq)
> {
> - return ubq->flags & UBLK_F_USER_RECOVERY;
> + return (ubq->flags & UBLK_F_USER_RECOVERY) &&
> + !(ubq->flags & UBLK_F_USER_RECOVERY_FAIL_IO);
> }
>
> /*
> @@ -712,7 +717,14 @@ static inline bool ublk_nosrv_should_queue_io(struct ublk_queue *ubq)
> static inline bool ublk_nosrv_should_stop_dev(struct ublk_device *ub)
> {
> return (!(ub->dev_info.flags & UBLK_F_USER_RECOVERY)) &&
> - (!(ub->dev_info.flags & UBLK_F_USER_RECOVERY_REISSUE));
> + (!(ub->dev_info.flags & UBLK_F_USER_RECOVERY_REISSUE)) &&
> + (!(ub->dev_info.flags & UBLK_F_USER_RECOVERY_FAIL_IO));
> +}
> +
> +static inline bool ublk_dev_in_recoverable_state(struct ublk_device *ub)
> +{
> + return ub->dev_info.state == UBLK_S_DEV_QUIESCED ||
> + ub->dev_info.state == UBLK_S_DEV_FAIL_IO;
> }
>
> static void ublk_free_disk(struct gendisk *disk)
> @@ -1276,6 +1288,10 @@ static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
> struct request *rq = bd->rq;
> blk_status_t res;
>
> + if (unlikely(ubq->fail_io)) {
> + return BLK_STS_TARGET;
> + }
> +
> /* fill iod to slot in io cmd buffer */
> res = ublk_setup_iod(ubq, rq);
> if (unlikely(res != BLK_STS_OK))
> @@ -1626,6 +1642,7 @@ static void ublk_nosrv_work(struct work_struct *work)
> {
> struct ublk_device *ub =
> container_of(work, struct ublk_device, nosrv_work);
> + int i;
>
> if (ublk_nosrv_should_stop_dev(ub)) {
> ublk_stop_dev(ub);
> @@ -1635,7 +1652,18 @@ static void ublk_nosrv_work(struct work_struct *work)
> mutex_lock(&ub->mutex);
> if (ub->dev_info.state != UBLK_S_DEV_LIVE)
> goto unlock;
> - __ublk_quiesce_dev(ub);
> +
> + if (ublk_nosrv_dev_should_queue_io(ub)) {
> + __ublk_quiesce_dev(ub);
> + } else {
> + blk_mq_quiesce_queue(ub->ub_disk->queue);
> + for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
> + ublk_get_queue(ub, i)->fail_io = true;
> + }
> + blk_mq_unquiesce_queue(ub->ub_disk->queue);
> + ub->dev_info.state = UBLK_S_DEV_FAIL_IO;
> + }
> +
> unlock:
> mutex_unlock(&ub->mutex);
> ublk_cancel_dev(ub);
> @@ -2389,8 +2417,13 @@ static int ublk_ctrl_add_dev(struct io_uring_cmd *cmd)
> return -EPERM;
>
> /* forbid nonsense combinations of recovery flags */
> - if ((info.flags & UBLK_F_USER_RECOVERY_REISSUE) &&
> - !(info.flags & UBLK_F_USER_RECOVERY)) {
> + switch (info.flags & UBLK_F_ALL_RECOVERY_FLAGS) {
> + case 0:
> + case UBLK_F_USER_RECOVERY:
> + case (UBLK_F_USER_RECOVERY | UBLK_F_USER_RECOVERY_REISSUE):
> + case (UBLK_F_USER_RECOVERY | UBLK_F_USER_RECOVERY_FAIL_IO):
> + break;
> + default:
> pr_warn("%s: invalid recovery flags %llx\n", __func__,
> info.flags & UBLK_F_ALL_RECOVERY_FLAGS);
> return -EINVAL;
> @@ -2722,14 +2755,18 @@ static int ublk_ctrl_start_recovery(struct ublk_device *ub,
> * and related io_uring ctx is freed so file struct of /dev/ublkcX is
> * released.
> *
> + * and one of the following holds
> + *
> * (2) UBLK_S_DEV_QUIESCED is set, which means the quiesce_work:
> * (a)has quiesced request queue
> * (b)has requeued every inflight rqs whose io_flags is ACTIVE
> * (c)has requeued/aborted every inflight rqs whose io_flags is NOT ACTIVE
> * (d)has completed/canceled all ioucmds owned by the dying process
> + *
> + * (3) UBLK_S_DEV_FAIL_IO is set, which means the queue is not
> + * quiesced, but all I/O is being immediately errored
> */
> - if (test_bit(UB_STATE_OPEN, &ub->state) ||
> - ub->dev_info.state != UBLK_S_DEV_QUIESCED) {
> + if (test_bit(UB_STATE_OPEN, &ub->state) || !ublk_dev_in_recoverable_state(ub)) {
> ret = -EBUSY;
> goto out_unlock;
> }
> @@ -2753,6 +2790,7 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
> const struct ublksrv_ctrl_cmd *header = io_uring_sqe_cmd(cmd->sqe);
> int ublksrv_pid = (int)header->data[0];
> int ret = -EINVAL;
> + int i;
>
> pr_devel("%s: Waiting for new ubq_daemons(nr: %d) are ready, dev id %d...\n",
> __func__, ub->dev_info.nr_hw_queues, header->dev_id);
> @@ -2767,18 +2805,27 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
> if (ublk_nosrv_should_stop_dev(ub))
> goto out_unlock;
>
> - if (ub->dev_info.state != UBLK_S_DEV_QUIESCED) {
> + if (!ublk_dev_in_recoverable_state(ub)) {
> ret = -EBUSY;
> goto out_unlock;
> }
> ub->dev_info.ublksrv_pid = ublksrv_pid;
> pr_devel("%s: new ublksrv_pid %d, dev id %d\n",
> __func__, ublksrv_pid, header->dev_id);
> +
> + blk_mq_quiesce_queue(ub->ub_disk->queue);
> + for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
> + ublk_get_queue(ub, i)->fail_io = false;
> + }
> blk_mq_unquiesce_queue(ub->ub_disk->queue);
> - pr_devel("%s: queue unquiesced, dev id %d.\n",
> - __func__, header->dev_id);
> - blk_mq_kick_requeue_list(ub->ub_disk->queue);
> ub->dev_info.state = UBLK_S_DEV_LIVE;
> + if (ublk_nosrv_dev_should_queue_io(ub)) {
> + blk_mq_unquiesce_queue(ub->ub_disk->queue);
> + pr_devel("%s: queue unquiesced, dev id %d.\n",
> + __func__, header->dev_id);
> + blk_mq_kick_requeue_list(ub->ub_disk->queue);
> + }
I'd suggest changing the above into the following:
if (ublk_nosrv_dev_should_queue_io(ub)) {
ub->dev_info.state = UBLK_S_DEV_LIVE;
blk_mq_unquiesce_queue(ub->ub_disk->queue);
pr_devel("%s: queue unquiesced, dev id %d.\n",
__func__, header->dev_id);
blk_mq_kick_requeue_list(ub->ub_disk->queue);
} else {
blk_mq_quiesce_queue(ub->ub_disk->queue);
ub->dev_info.state = UBLK_S_DEV_LIVE;
for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
ublk_get_queue(ub, i)->fail_io = false;
blk_mq_unquiesce_queue(ub->ub_disk->queue);
}
- one extra quiesce/unquiesce cycle is avoided
- ub->dev_info.state is only updated while the request queue is quiesced.
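
[Editor's note: the ordering constraint behind those two points can be
illustrated with a toy user-space model. All names below are invented for
illustration; this is not kernel code. The idea is that the device state and
the per-queue fail_io flags are only mutated inside the quiesce/unquiesce
bracket, so the fast path never observes a half-completed update.]

```c
#include <assert.h>
#include <stdbool.h>

#define NR_QUEUES 4

/* Toy stand-in for the device: "quiesced" models the effect of
 * blk_mq_quiesce_queue()/blk_mq_unquiesce_queue(). */
struct toy_dev {
	bool quiesced;
	bool fail_io[NR_QUEUES];
	int  state;		/* 0 = FAIL_IO, 1 = LIVE */
};

/* Model of the FAIL_IO branch of end_recovery: every mutation the fast
 * path could race with happens while the queue is quiesced. */
static void toy_end_recovery(struct toy_dev *d)
{
	d->quiesced = true;		/* blk_mq_quiesce_queue() */
	d->state = 1;			/* state flips only while quiesced */
	for (int i = 0; i < NR_QUEUES; i++)
		d->fail_io[i] = false;
	d->quiesced = false;		/* blk_mq_unquiesce_queue() */
}
```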
Otherwise, this patch looks fine.
Thanks,
Ming
Thread overview: 8+ messages
2024-09-17 0:21 [PATCH v2 0/4] ublk: support device recovery without I/O queueing Uday Shankar
2024-09-17 0:21 ` [PATCH v2 1/4] ublk: check recovery flags for validity Uday Shankar
2024-09-25 3:43 ` Ming Lei
2024-09-17 0:21 ` [PATCH v2 2/4] ublk: refactor recovery configuration flag helpers Uday Shankar
2024-09-25 3:54 ` Ming Lei
2024-09-17 0:21 ` [PATCH v2 3/4] ublk: merge stop_work and quiesce_work Uday Shankar
2024-09-17 0:21 ` [PATCH v2 4/4] ublk: support device recovery without I/O queueing Uday Shankar
2024-09-25 10:58 ` Ming Lei