From: Damien Le Moal <dlemoal@kernel.org>
To: Jens Axboe <axboe@kernel.dk>, linux-block@vger.kernel.org
Subject: [PATCH v2 5/7] block: allow submitting all zone writes from a single context
Date: Thu, 26 Feb 2026 13:10:22 +0900
Message-ID: <20260226041024.2154806-6-dlemoal@kernel.org>
In-Reply-To: <20260226041024.2154806-1-dlemoal@kernel.org>
In order to maintain sequential write patterns per zone with zoned block
devices, zone write plugging issues only a single write BIO per zone at
any time. This works well but has the side effect that when a user
issues large sequential write streams that cross zone boundaries, the
device ends up receiving a discontiguous set of write commands for
different zones. The same also happens when a user simultaneously
writes to multiple zones at a high queue depth: the device does not see
all writes to a zone as sequential and instead receives discontiguous
writes to different zones. While this does not affect the performance
of solid state zoned block devices, for an SMR HDD this change from
sequential writes to discontiguous writes to different zones
significantly increases head seeks, which degrades write throughput.
In order to reduce this seek overhead for rotational media devices,
introduce a per-disk zone write plugs kernel thread that issues all
write BIOs to zones. This single zone write issuing context is enabled
for any zoned block device that has its request queue flagged with the
new QUEUE_FLAG_ZONED_QD1_WRITES flag.
The QUEUE_FLAG_ZONED_QD1_WRITES flag is exposed as the sysfs queue
attribute zoned_qd1_writes. For a regular block device, this attribute
cannot be changed and is always 0. For a zoned block device, a user can
set this attribute to force a global write queue depth of 1, or clear
it to fall back to the default zone write plugging behavior, which
limits writes to QD=1 per sequential zone.
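As a rough userspace illustration (not the kernel implementation, and
with illustrative names only), the attribute semantics described above
amount to: reads report the flag as 0 or 1, and stores are rejected
with -EOPNOTSUPP for non-zoned devices.

```c
#include <errno.h>
#include <stdbool.h>

/* Minimal model of the zoned_qd1_writes attribute semantics. The
 * struct and function names are illustrative, not kernel API. */
struct queue {
	bool zoned;
	bool zoned_qd1_writes;
};

/* "show" side: report the flag as 0 or 1. */
static int qd1_writes_show(const struct queue *q)
{
	return q->zoned_qd1_writes ? 1 : 0;
}

/* "store" side: only zoned devices may change the flag. */
static int qd1_writes_store(struct queue *q, long val)
{
	if (!q->zoned)
		return -EOPNOTSUPP;
	q->zoned_qd1_writes = val != 0;
	return 0;
}
```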
Writing to a zoned block device flagged with QUEUE_FLAG_ZONED_QD1_WRITES
is implemented using a list of the zone write plugs that have a
non-empty BIO list. Listed zone write plugs are processed by the disk
zone write plugs worker kthread in FIFO order, and all BIOs of a zone
write plug are processed before switching to the next listed zone write
plug. A newly submitted BIO for a non-FULL zone write plug that is not
yet listed causes the zone write plug to be added at the end of the
disk list of zone write plugs.
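The FIFO discipline described above can be sketched in userspace as
follows (not kernel code; all names are illustrative): a plug is
appended to the list only when it is not already listed, and the worker
always pops the head entry, draining one plug completely before moving
to the next.

```c
#include <stddef.h>

/* Userspace sketch of the disk list of zone write plugs. */
struct zwplug {
	int zone_no;
	int pending_bios;
	int listed;		/* stands in for !list_empty(&zwplug->entry) */
	struct zwplug *next;
};

static struct zwplug *head, *tail;

/* Submission side: called for each write BIO queued on a plug. */
static void plug_add_bio(struct zwplug *zw)
{
	zw->pending_bios++;
	if (!zw->listed) {	/* list a plug only once, at the tail */
		zw->listed = 1;
		zw->next = NULL;
		if (tail)
			tail->next = zw;
		else
			head = zw;
		tail = zw;
	}
}

/* Worker side: pop the oldest listed plug (FIFO order). */
static struct zwplug *get_next_plug(void)
{
	struct zwplug *zw = head;

	if (zw) {
		head = zw->next;
		if (!head)
			tail = NULL;
		zw->listed = 0;
		zw->next = NULL;
	}
	return zw;
}
```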
Since the write BIOs queued in a zone write plug BIO list are
necessarily sequential, using the single zone write plugs kthread to
issue all BIOs maintains a sequential write pattern for rotational
media, which reduces seek overhead and improves write throughput. This
processing essentially results in always writing to HDDs at QD=1, which
is not an issue for HDDs operating with write caching enabled.
Performance with the write cache disabled is also not degraded thanks
to the efficient write handling of modern SMR HDDs.
The disk list of zone write plugs is defined using the new struct
gendisk field zone_wplugs_list, and accesses to this list are protected
using the zone_wplugs_list_lock spinlock. The per-disk kthread
(zone_wplugs_worker) is implemented by the function
disk_zone_wplugs_worker(). A reference on a listed zone write plug is
always held until all BIOs of the zone write plug are processed by the
worker kthread. BIO issuing at QD=1 is driven using a completion
structure (zone_wplugs_worker_bio_done) and calls to blk_wait_io().
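The role of the completion in the QD=1 issuing loop can be sketched in
userspace with pthreads (not kernel code; all names are illustrative):
the worker submits one BIO, then blocks until the BIO "end_io" side
signals completion, so at most one write is ever in flight.

```c
#include <pthread.h>

/* Userspace model of init_completion()/reinit_completion()/
 * complete()/wait_for_completion() semantics. */
struct completion {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	int done;
};

static void init_completion(struct completion *c)
{
	pthread_mutex_init(&c->lock, NULL);
	pthread_cond_init(&c->cond, NULL);
	c->done = 0;
}

static void reinit_completion(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	c->done = 0;
	pthread_mutex_unlock(&c->lock);
}

static void complete(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	c->done = 1;
	pthread_cond_signal(&c->cond);
	pthread_mutex_unlock(&c->lock);
}

static void wait_for_completion(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	while (!c->done)
		pthread_cond_wait(&c->cond, &c->lock);
	pthread_mutex_unlock(&c->lock);
}

static struct completion bio_done;
static pthread_mutex_t stat_lock = PTHREAD_MUTEX_INITIALIZER;
static int in_flight, max_in_flight;

/* "Device" side: completes the single in-flight BIO. */
static void *device_endio(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&stat_lock);
	in_flight--;
	pthread_mutex_unlock(&stat_lock);
	complete(&bio_done);
	return NULL;
}

/* Issue one BIO: hand it to the "device", which completes it
 * asynchronously from another thread. */
static void submit_bio(void)
{
	pthread_t t;

	pthread_mutex_lock(&stat_lock);
	in_flight++;
	if (in_flight > max_in_flight)
		max_in_flight = in_flight;
	pthread_mutex_unlock(&stat_lock);
	pthread_create(&t, NULL, device_endio, NULL);
	pthread_detach(t);
}

/* Worker loop: issue BIOs strictly one at a time. */
static void worker_issue(int nr_bios)
{
	init_completion(&bio_done);
	for (int i = 0; i < nr_bios; i++) {
		reinit_completion(&bio_done);
		submit_bio();
		wait_for_completion(&bio_done);	/* QD=1: wait before next */
	}
}
```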
With this change, performance when sequentially writing the zones of a
30 TB SMR SATA HDD connected to an AHCI adapter changes as follows
(1 MiB direct I/Os, results in MB/s):
                   +--------------------+
                   |   Write BW (MB/s)  |
+------------------+----------+---------+
| Sequential write | Baseline | Patched |
| Queue Depth      | 6.19-rc8 |         |
+------------------+----------+---------+
|        1         |   244    |   245   |
|        2         |   244    |   245   |
|        4         |   245    |   245   |
|        8         |   242    |   245   |
|       16         |   222    |   246   |
|       32         |   211    |   245   |
|       64         |   193    |   244   |
|      128         |   112    |   246   |
+------------------+----------+---------+
With the current code (baseline), as the sequential write stream crosses
a zone boundary, a higher queue depth creates a gap between the last
I/O to the previous zone and the first I/Os to the following zones,
causing head seeks and degrading performance. Using the disk zone
write plugs worker thread, this pattern disappears and the maximum
throughput of the drive is maintained, leading to an over 100%
improvement in throughput for high queue depth writes.
Using 16 fio jobs all writing to randomly chosen zones at QD=32 with
1 MiB direct I/Os, write throughput also increases significantly.
                   +--------------------+
                   |   Write BW (MB/s)  |
+------------------+----------+---------+
| Random write     | Baseline | Patched |
| Number of zones  | 6.19-rc7 |         |
+------------------+----------+---------+
|        1         |   191    |   192   |
|        2         |   101    |   128   |
|        4         |   115    |   123   |
|        8         |    90    |   120   |
|       16         |    64    |   115   |
|       32         |    58    |   105   |
|       64         |    56    |   101   |
|      128         |    55    |    99   |
+------------------+----------+---------+
Tests using XFS show that buffered write throughput with 8 jobs writing
files increases by 12% to 35% depending on the workload.
                   +--------------------+
                   |   Write BW (MB/s)  |
+------------------+----------+---------+
| Workload         | Baseline | Patched |
|                  | 6.19-rc7 |         |
+------------------+----------+---------+
| 256MiB file size |   212    |   238   |
+------------------+----------+---------+
| 4MiB .. 128 MiB  |   213    |   243   |
| random file size |          |         |
+------------------+----------+---------+
| 2MiB .. 8 MiB    |   179    |   242   |
| random file size |          |         |
+------------------+----------+---------+
Performance gains are even more significant when using an HBA that
limits the maximum command size to a small value, e.g. HBAs controlled
by the mpi3mr driver limit commands to a maximum of 1 MiB. In this
case, the write throughput gains are over 40%.
                   +--------------------+
                   |   Write BW (MB/s)  |
+------------------+----------+---------+
| Workload         | Baseline | Patched |
|                  | 6.19-rc7 |         |
+------------------+----------+---------+
| 256MiB file size |   175    |   245   |
+------------------+----------+---------+
| 4MiB .. 128 MiB  |   174    |   244   |
| random file size |          |         |
+------------------+----------+---------+
| 2MiB .. 8 MiB    |   171    |   243   |
| random file size |          |         |
+------------------+----------+---------+
Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
---
block/blk-mq-debugfs.c | 1 +
block/blk-sysfs.c | 38 ++++++++-
block/blk-zoned.c | 190 ++++++++++++++++++++++++++++++++++++-----
include/linux/blkdev.h | 8 ++
4 files changed, 215 insertions(+), 22 deletions(-)
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 28167c9baa55..047ec887456b 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -97,6 +97,7 @@ static const char *const blk_queue_flag_name[] = {
QUEUE_FLAG_NAME(NO_ELV_SWITCH),
QUEUE_FLAG_NAME(QOS_ENABLED),
QUEUE_FLAG_NAME(BIO_ISSUE_TIME),
+ QUEUE_FLAG_NAME(ZONED_QD1_WRITES),
};
#undef QUEUE_FLAG_NAME
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index f3b1968c80ce..bfa2ab82cc55 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -384,6 +384,39 @@ static ssize_t queue_nr_zones_show(struct gendisk *disk, char *page)
return queue_var_show(disk_nr_zones(disk), page);
}
+static ssize_t queue_zoned_qd1_writes_show(struct gendisk *disk, char *page)
+{
+ return queue_var_show(!!blk_queue_zoned_qd1_writes(disk->queue),
+ page);
+}
+
+static ssize_t queue_zoned_qd1_writes_store(struct gendisk *disk,
+ const char *page, size_t count)
+{
+ struct request_queue *q = disk->queue;
+ unsigned long qd1_writes;
+ unsigned int memflags;
+ ssize_t ret;
+
+ if (!blk_queue_is_zoned(q))
+ return -EOPNOTSUPP;
+
+ ret = queue_var_store(&qd1_writes, page, count);
+ if (ret < 0)
+ return ret;
+
+ memflags = blk_mq_freeze_queue(q);
+ blk_mq_quiesce_queue(q);
+ if (qd1_writes)
+ blk_queue_flag_set(QUEUE_FLAG_ZONED_QD1_WRITES, q);
+ else
+ blk_queue_flag_clear(QUEUE_FLAG_ZONED_QD1_WRITES, q);
+ blk_mq_unquiesce_queue(q);
+ blk_mq_unfreeze_queue(q, memflags);
+
+ return count;
+}
+
static ssize_t queue_iostats_passthrough_show(struct gendisk *disk, char *page)
{
return queue_var_show(!!blk_queue_passthrough_stat(disk->queue), page);
@@ -611,6 +644,7 @@ QUEUE_LIM_RO_ENTRY(queue_max_zone_append_sectors, "zone_append_max_bytes");
QUEUE_LIM_RO_ENTRY(queue_zone_write_granularity, "zone_write_granularity");
QUEUE_LIM_RO_ENTRY(queue_zoned, "zoned");
+QUEUE_RW_ENTRY(queue_zoned_qd1_writes, "zoned_qd1_writes");
QUEUE_RO_ENTRY(queue_nr_zones, "nr_zones");
QUEUE_LIM_RO_ENTRY(queue_max_open_zones, "max_open_zones");
QUEUE_LIM_RO_ENTRY(queue_max_active_zones, "max_active_zones");
@@ -748,6 +782,7 @@ static struct attribute *queue_attrs[] = {
&queue_nomerges_entry.attr,
&queue_poll_entry.attr,
&queue_poll_delay_entry.attr,
+ &queue_zoned_qd1_writes_entry.attr,
NULL,
};
@@ -780,7 +815,8 @@ static umode_t queue_attr_visible(struct kobject *kobj, struct attribute *attr,
struct request_queue *q = disk->queue;
if ((attr == &queue_max_open_zones_entry.attr ||
- attr == &queue_max_active_zones_entry.attr) &&
+ attr == &queue_max_active_zones_entry.attr ||
+ attr == &queue_zoned_qd1_writes_entry.attr) &&
!blk_queue_is_zoned(q))
return 0;
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 7577dd997b29..bacfb9da803c 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -16,6 +16,8 @@
#include <linux/spinlock.h>
#include <linux/refcount.h>
#include <linux/mempool.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
#include <trace/events/block.h>
@@ -40,6 +42,8 @@ static const char *const zone_cond_name[] = {
/*
* Per-zone write plug.
* @node: hlist_node structure for managing the plug using a hash table.
+ * @entry: list_head structure for listing the plug in the disk list of active
+ * zone write plugs.
* @bio_list: The list of BIOs that are currently plugged.
* @bio_work: Work struct to handle issuing of plugged BIOs
* @rcu_head: RCU head to free zone write plugs with an RCU grace period.
@@ -62,6 +66,7 @@ static const char *const zone_cond_name[] = {
*/
struct blk_zone_wplug {
struct hlist_node node;
+ struct list_head entry;
struct bio_list bio_list;
struct work_struct bio_work;
struct rcu_head rcu_head;
@@ -623,7 +628,19 @@ static void disk_mark_zone_wplug_dead(struct blk_zone_wplug *zwplug)
}
}
-static void blk_zone_wplug_bio_work(struct work_struct *work);
+static bool disk_zone_wplug_submit_bio(struct gendisk *disk,
+ struct blk_zone_wplug *zwplug);
+
+static void blk_zone_wplug_bio_work(struct work_struct *work)
+{
+ struct blk_zone_wplug *zwplug =
+ container_of(work, struct blk_zone_wplug, bio_work);
+
+ disk_zone_wplug_submit_bio(zwplug->disk, zwplug);
+
+ /* Drop the reference we took in disk_zone_wplug_schedule_work(). */
+ disk_put_zone_wplug(zwplug);
+}
/*
* Get a reference on the write plug for the zone containing @sector.
@@ -672,6 +689,7 @@ static struct blk_zone_wplug *disk_get_and_lock_zone_wplug(struct gendisk *disk,
zwplug->wp_offset = bdev_offset_from_zone_start(disk->part0, sector);
bio_list_init(&zwplug->bio_list);
INIT_WORK(&zwplug->bio_work, blk_zone_wplug_bio_work);
+ INIT_LIST_HEAD(&zwplug->entry);
zwplug->disk = disk;
spin_lock_irqsave(&zwplug->lock, *flags);
@@ -707,6 +725,7 @@ static inline void blk_zone_wplug_bio_io_error(struct blk_zone_wplug *zwplug,
*/
static void disk_zone_wplug_abort(struct blk_zone_wplug *zwplug)
{
+ struct gendisk *disk = zwplug->disk;
struct bio *bio;
lockdep_assert_held(&zwplug->lock);
@@ -720,6 +739,20 @@ static void disk_zone_wplug_abort(struct blk_zone_wplug *zwplug)
blk_zone_wplug_bio_io_error(zwplug, bio);
zwplug->flags &= ~BLK_ZONE_WPLUG_PLUGGED;
+
+ /*
+ * If we are using the per disk zone write plugs worker thread, remove
+ * the zone write plug from the work list and drop the reference we
+ * took when the zone write plug was added to that list.
+ */
+ if (blk_queue_zoned_qd1_writes(disk->queue)) {
+ spin_lock(&disk->zone_wplugs_list_lock);
+ if (!list_empty(&zwplug->entry)) {
+ list_del_init(&zwplug->entry);
+ disk_put_zone_wplug(zwplug);
+ }
+ spin_unlock(&disk->zone_wplugs_list_lock);
+ }
}
/*
@@ -1154,8 +1187,8 @@ void blk_zone_mgmt_bio_endio(struct bio *bio)
}
}
-static void disk_zone_wplug_schedule_bio_work(struct gendisk *disk,
- struct blk_zone_wplug *zwplug)
+static void disk_zone_wplug_schedule_work(struct gendisk *disk,
+ struct blk_zone_wplug *zwplug)
{
lockdep_assert_held(&zwplug->lock);
@@ -1168,6 +1201,7 @@ static void disk_zone_wplug_schedule_bio_work(struct gendisk *disk,
* and we also drop this reference if the work is already scheduled.
*/
WARN_ON_ONCE(!(zwplug->flags & BLK_ZONE_WPLUG_PLUGGED));
+ WARN_ON_ONCE(blk_queue_zoned_qd1_writes(disk->queue));
refcount_inc(&zwplug->ref);
if (!queue_work(disk->zone_wplugs_wq, &zwplug->bio_work))
disk_put_zone_wplug(zwplug);
@@ -1207,6 +1241,22 @@ static inline void disk_zone_wplug_add_bio(struct gendisk *disk,
bio_list_add(&zwplug->bio_list, bio);
trace_disk_zone_wplug_add_bio(zwplug->disk->queue, zwplug->zone_no,
bio->bi_iter.bi_sector, bio_sectors(bio));
+
+ /*
+ * If we are using the disk zone write plugs worker instead of the per
+ * zone write plug BIO work, add the zone write plug to the work list
+ * if it is not already there. Make sure to also get an extra reference
+ * on the zone write plug so that it does not go away until it is
+ * removed from the work list.
+ */
+ if (blk_queue_zoned_qd1_writes(disk->queue)) {
+ spin_lock(&disk->zone_wplugs_list_lock);
+ if (list_empty(&zwplug->entry)) {
+ list_add_tail(&zwplug->entry, &disk->zone_wplugs_list);
+ refcount_inc(&zwplug->ref);
+ }
+ spin_unlock(&disk->zone_wplugs_list_lock);
+ }
}
/*
@@ -1425,6 +1475,13 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
goto queue_bio;
}
+ /*
+ * For rotational devices, we will use the gendisk zone write plugs
+ * work instead of the per zone write plug BIO work, so queue the BIO.
+ */
+ if (blk_queue_zoned_qd1_writes(disk->queue))
+ goto queue_bio;
+
/* If the zone is already plugged, add the BIO to the BIO plug list. */
if (zwplug->flags & BLK_ZONE_WPLUG_PLUGGED)
goto queue_bio;
@@ -1447,7 +1504,10 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
if (!(zwplug->flags & BLK_ZONE_WPLUG_PLUGGED)) {
zwplug->flags |= BLK_ZONE_WPLUG_PLUGGED;
- disk_zone_wplug_schedule_bio_work(disk, zwplug);
+ if (blk_queue_zoned_qd1_writes(disk->queue))
+ wake_up_process(disk->zone_wplugs_worker);
+ else
+ disk_zone_wplug_schedule_work(disk, zwplug);
}
spin_unlock_irqrestore(&zwplug->lock, flags);
@@ -1588,16 +1648,22 @@ static void disk_zone_wplug_unplug_bio(struct gendisk *disk,
spin_lock_irqsave(&zwplug->lock, flags);
- /* Schedule submission of the next plugged BIO if we have one. */
- if (!bio_list_empty(&zwplug->bio_list)) {
- disk_zone_wplug_schedule_bio_work(disk, zwplug);
- spin_unlock_irqrestore(&zwplug->lock, flags);
- return;
- }
+ /*
+ * For rotational devices, signal the BIO completion to the zone write
+ * plug work. Otherwise, schedule submission of the next plugged BIO
+ * if we have one.
+ */
+ if (bio_list_empty(&zwplug->bio_list))
+ zwplug->flags &= ~BLK_ZONE_WPLUG_PLUGGED;
+
+ if (blk_queue_zoned_qd1_writes(disk->queue))
+ complete(&disk->zone_wplugs_worker_bio_done);
+ else if (!bio_list_empty(&zwplug->bio_list))
+ disk_zone_wplug_schedule_work(disk, zwplug);
- zwplug->flags &= ~BLK_ZONE_WPLUG_PLUGGED;
if (!zwplug->wp_offset || disk_zone_wplug_is_full(disk, zwplug))
disk_mark_zone_wplug_dead(zwplug);
+
spin_unlock_irqrestore(&zwplug->lock, flags);
}
@@ -1687,10 +1753,9 @@ void blk_zone_write_plug_finish_request(struct request *req)
disk_put_zone_wplug(zwplug);
}
-static void blk_zone_wplug_bio_work(struct work_struct *work)
+static bool disk_zone_wplug_submit_bio(struct gendisk *disk,
+ struct blk_zone_wplug *zwplug)
{
- struct blk_zone_wplug *zwplug =
- container_of(work, struct blk_zone_wplug, bio_work);
struct block_device *bdev;
unsigned long flags;
struct bio *bio;
@@ -1706,7 +1771,7 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
if (!bio) {
zwplug->flags &= ~BLK_ZONE_WPLUG_PLUGGED;
spin_unlock_irqrestore(&zwplug->lock, flags);
- goto put_zwplug;
+ return false;
}
trace_blk_zone_wplug_bio(zwplug->disk->queue, zwplug->zone_no,
@@ -1720,14 +1785,15 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
goto again;
}
- bdev = bio->bi_bdev;
-
/*
* blk-mq devices will reuse the extra reference on the request queue
* usage counter we took when the BIO was plugged, but the submission
* path for BIO-based devices will not do that. So drop this extra
* reference here.
*/
+ if (blk_queue_zoned_qd1_writes(disk->queue))
+ reinit_completion(&disk->zone_wplugs_worker_bio_done);
+ bdev = bio->bi_bdev;
if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO)) {
bdev->bd_disk->fops->submit_bio(bio);
blk_queue_exit(bdev->bd_disk->queue);
@@ -1735,14 +1801,78 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
blk_mq_submit_bio(bio);
}
-put_zwplug:
- /* Drop the reference we took in disk_zone_wplug_schedule_bio_work(). */
- disk_put_zone_wplug(zwplug);
+ return true;
+}
+
+static struct blk_zone_wplug *disk_get_zone_wplugs_work(struct gendisk *disk)
+{
+ struct blk_zone_wplug *zwplug;
+
+ spin_lock_irq(&disk->zone_wplugs_list_lock);
+ zwplug = list_first_entry_or_null(&disk->zone_wplugs_list,
+ struct blk_zone_wplug, entry);
+ if (zwplug)
+ list_del_init(&zwplug->entry);
+ spin_unlock_irq(&disk->zone_wplugs_list_lock);
+
+ return zwplug;
+}
+
+static int disk_zone_wplugs_worker(void *data)
+{
+ struct gendisk *disk = data;
+ struct blk_zone_wplug *zwplug;
+ unsigned int noio_flag;
+
+ noio_flag = memalloc_noio_save();
+ set_user_nice(current, MIN_NICE);
+ set_freezable();
+
+ for (;;) {
+ set_current_state(TASK_INTERRUPTIBLE | TASK_FREEZABLE);
+
+ zwplug = disk_get_zone_wplugs_work(disk);
+ if (zwplug) {
+ /*
+ * Process all BIOs of this zone write plug and then
+ * drop the reference we took when adding the zone write
+ * plug to the active list.
+ */
+ set_current_state(TASK_RUNNING);
+ while (disk_zone_wplug_submit_bio(disk, zwplug))
+ blk_wait_io(&disk->zone_wplugs_worker_bio_done);
+ disk_put_zone_wplug(zwplug);
+ continue;
+ }
+
+ /*
+ * Only sleep if nothing sets the state to running. Else check
+ * for zone write plugs work again as a newly submitted BIO
+ * might have added a zone write plug to the work list.
+ */
+ if (get_current_state() == TASK_RUNNING) {
+ try_to_freeze();
+ } else {
+ if (kthread_should_stop()) {
+ set_current_state(TASK_RUNNING);
+ break;
+ }
+ schedule();
+ }
+ }
+
+ WARN_ON_ONCE(!list_empty(&disk->zone_wplugs_list));
+ memalloc_noio_restore(noio_flag);
+
+ return 0;
}
void disk_init_zone_resources(struct gendisk *disk)
{
spin_lock_init(&disk->zone_wplugs_hash_lock);
+ spin_lock_init(&disk->zone_wplugs_list_lock);
+ INIT_LIST_HEAD(&disk->zone_wplugs_list);
+ init_completion(&disk->zone_wplugs_worker_bio_done);
}
/*
@@ -1758,6 +1888,7 @@ static int disk_alloc_zone_resources(struct gendisk *disk,
unsigned int pool_size)
{
unsigned int i;
+ int ret = -ENOMEM;
atomic_set(&disk->nr_zone_wplugs, 0);
disk->zone_wplugs_hash_bits =
@@ -1783,8 +1914,21 @@ static int disk_alloc_zone_resources(struct gendisk *disk,
if (!disk->zone_wplugs_wq)
goto destroy_pool;
+ disk->zone_wplugs_worker =
+ kthread_create(disk_zone_wplugs_worker, disk,
+ "%s_zwplugs_worker", disk->disk_name);
+ if (IS_ERR(disk->zone_wplugs_worker)) {
+ ret = PTR_ERR(disk->zone_wplugs_worker);
+ disk->zone_wplugs_worker = NULL;
+ goto destroy_wq;
+ }
+ wake_up_process(disk->zone_wplugs_worker);
+
return 0;
+destroy_wq:
+ destroy_workqueue(disk->zone_wplugs_wq);
+ disk->zone_wplugs_wq = NULL;
destroy_pool:
mempool_destroy(disk->zone_wplugs_pool);
disk->zone_wplugs_pool = NULL;
@@ -1792,7 +1936,7 @@ static int disk_alloc_zone_resources(struct gendisk *disk,
kfree(disk->zone_wplugs_hash);
disk->zone_wplugs_hash = NULL;
disk->zone_wplugs_hash_bits = 0;
- return -ENOMEM;
+ return ret;
}
static void disk_destroy_zone_wplugs_hash_table(struct gendisk *disk)
@@ -1842,6 +1986,10 @@ static void disk_set_zones_cond_array(struct gendisk *disk, u8 *zones_cond)
void disk_free_zone_resources(struct gendisk *disk)
{
+ if (disk->zone_wplugs_worker)
+ kthread_stop(disk->zone_wplugs_worker);
+ WARN_ON_ONCE(!list_empty(&disk->zone_wplugs_list));
+
if (disk->zone_wplugs_wq) {
destroy_workqueue(disk->zone_wplugs_wq);
disk->zone_wplugs_wq = NULL;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 689090023770..ac899cd0cd70 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -13,6 +13,7 @@
#include <linux/minmax.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
+#include <linux/completion.h>
#include <linux/wait.h>
#include <linux/bio.h>
#include <linux/gfp.h>
@@ -204,6 +205,10 @@ struct gendisk {
struct mempool *zone_wplugs_pool;
struct hlist_head *zone_wplugs_hash;
struct workqueue_struct *zone_wplugs_wq;
+ spinlock_t zone_wplugs_list_lock;
+ struct list_head zone_wplugs_list;
+ struct task_struct *zone_wplugs_worker;
+ struct completion zone_wplugs_worker_bio_done;
#endif /* CONFIG_BLK_DEV_ZONED */
#if IS_ENABLED(CONFIG_CDROM)
@@ -668,6 +673,7 @@ enum {
QUEUE_FLAG_NO_ELV_SWITCH, /* can't switch elevator any more */
QUEUE_FLAG_QOS_ENABLED, /* qos is enabled */
QUEUE_FLAG_BIO_ISSUE_TIME, /* record bio->issue_time_ns */
+ QUEUE_FLAG_ZONED_QD1_WRITES, /* Limit zoned devices writes to QD=1 */
QUEUE_FLAG_MAX
};
@@ -707,6 +713,8 @@ void blk_queue_flag_clear(unsigned int flag, struct request_queue *q);
test_bit(QUEUE_FLAG_DISABLE_WBT_DEF, &(q)->queue_flags)
#define blk_queue_no_elv_switch(q) \
test_bit(QUEUE_FLAG_NO_ELV_SWITCH, &(q)->queue_flags)
+#define blk_queue_zoned_qd1_writes(q) \
+ test_bit(QUEUE_FLAG_ZONED_QD1_WRITES, &(q)->queue_flags)
extern void blk_set_pm_only(struct request_queue *q);
extern void blk_clear_pm_only(struct request_queue *q);
--
2.53.0