From: Bart Van Assche <bvanassche@acm.org>
To: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
linux-nvme@lists.infradead.org, Christoph Hellwig <hch@lst.de>,
Nitesh Shetty <nj.shetty@samsung.com>,
Bart Van Assche <bvanassche@acm.org>,
Kanchan Joshi <joshi.k@samsung.com>,
Anuj Gupta <anuj20.g@samsung.com>
Subject: [PATCH 01/12] block: Introduce queue limits for copy offloading
Date: Fri, 24 Apr 2026 15:41:50 -0700 [thread overview]
Message-ID: <20260424224201.1949243-2-bvanassche@acm.org> (raw)
In-Reply-To: <20260424224201.1949243-1-bvanassche@acm.org>
From: Nitesh Shetty <nj.shetty@samsung.com>
Add the following request queue limits:
- max_copy_hw_sectors: the maximum number of sectors supported by the
block driver for a single offloaded copy operation.
- max_copy_src_segments: the maximum number of source segments
supported by the block driver for a single offloaded copy operation.
- max_copy_dst_segments: the maximum number of destination segments
supported by the block driver for a single offloaded copy operation.
- max_user_copy_sectors: the maximum number of sectors configured by the
user for a single offloaded copy operation.
- max_copy_sectors: the maximum number of sectors for a single
offloaded copy operation. This is the minimum of the above two
parameters.
The default value for all these new limits is zero, which means that copy
offloading is not supported unless these limits are set by the block
driver.
Make the following two limits available in sysfs:
- copy_max_bytes (RW)
- copy_max_hw_bytes (RO)
These limits will be used by the function that implements copy
offloading to decide the bio size.
Signed-off-by: Nitesh Shetty <nj.shetty@samsung.com>
Signed-off-by: Kanchan Joshi <joshi.k@samsung.com>
Signed-off-by: Anuj Gupta <anuj20.g@samsung.com>
[ bvanassche: Added max_copy_{src,dst}_segments limits. Introduced
blk_validate_copy_limits(). Introduced BLK_FEAT_STACKING_COPY_OFFL.
Modified patch description. ]
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
Documentation/ABI/stable/sysfs-block | 24 +++++++++++++++++++
block/blk-settings.c | 36 ++++++++++++++++++++++++++++
block/blk-sysfs.c | 35 +++++++++++++++++++++++++++
include/linux/blkdev.h | 18 +++++++++++++-
4 files changed, 112 insertions(+), 1 deletion(-)
diff --git a/Documentation/ABI/stable/sysfs-block b/Documentation/ABI/stable/sysfs-block
index 900b3fc4c72d..bec5e04085da 100644
--- a/Documentation/ABI/stable/sysfs-block
+++ b/Documentation/ABI/stable/sysfs-block
@@ -239,6 +239,30 @@ Description:
last zone of the device which may be smaller.
+What: /sys/block/<disk>/queue/copy_max_bytes
+Date: May 2026
+Contact: linux-block@vger.kernel.org
+Description:
+ [RW] This is the maximum number of bytes that the block layer
+ will allow for a copy request. This is always smaller than or
+ equal to the maximum size allowed by the block driver.
+ Any value higher than 'copy_max_hw_bytes' will be reduced to
+ 'copy_max_hw_bytes'. Writing '0' to this attribute will disable
+ copy offloading for this block device. If copy offloading is
+ disabled, copy requests will be translated into read and write
+ requests.
+
+
+What: /sys/block/<disk>/queue/copy_max_hw_bytes
+Date: May 2026
+Contact: linux-block@vger.kernel.org
+Description:
+ [RO] This is the maximum number of bytes that is allowed for
+ a single data copy request. Set by the block driver. The value
+ zero indicates that the block device does not support copy
+ offloading.
+
+
What: /sys/block/<disk>/queue/crypto/
Date: February 2022
Contact: linux-block@vger.kernel.org
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 78c83817b9d3..cb846ff2926e 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -57,6 +57,11 @@ void blk_set_stacking_limits(struct queue_limits *lim)
lim->max_hw_zone_append_sectors = UINT_MAX;
lim->max_user_discard_sectors = UINT_MAX;
lim->atomic_write_hw_max = UINT_MAX;
+
+ lim->max_user_copy_sectors = UINT_MAX;
+ lim->max_copy_hw_sectors = UINT_MAX;
+ lim->max_copy_src_segments = U16_MAX;
+ lim->max_copy_dst_segments = U16_MAX;
}
EXPORT_SYMBOL(blk_set_stacking_limits);
@@ -333,6 +338,21 @@ static void blk_validate_atomic_write_limits(struct queue_limits *lim)
lim->atomic_write_unit_max = 0;
}
+/*
+ * Check whether max_copy_hw_sectors and max_copy_{src,dst}_segments are
+ * either all nonzero or all zero.
+ */
+static int blk_validate_copy_limits(const struct queue_limits *lim)
+{
+ if (lim->max_copy_hw_sectors && lim->max_copy_src_segments &&
+ lim->max_copy_dst_segments)
+ return 0;
+ if (!lim->max_copy_hw_sectors && !lim->max_copy_src_segments &&
+ !lim->max_copy_dst_segments)
+ return 0;
+ return -EINVAL;
+}
+
/*
* Check that the limits in lim are valid, initialize defaults for unset
* values, and cap values based on others where needed.
@@ -510,6 +530,13 @@ int blk_validate_limits(struct queue_limits *lim)
err = blk_validate_integrity_limits(lim);
if (err)
return err;
+
+ err = blk_validate_copy_limits(lim);
+ if (err)
+ return err;
+ lim->max_copy_sectors =
+ min(lim->max_copy_hw_sectors, lim->max_user_copy_sectors);
+
return blk_validate_zoned_limits(lim);
}
EXPORT_SYMBOL_GPL(blk_validate_limits);
@@ -528,6 +555,7 @@ int blk_set_default_limits(struct queue_limits *lim)
*/
lim->max_user_discard_sectors = UINT_MAX;
lim->max_user_wzeroes_unmap_sectors = UINT_MAX;
+ lim->max_user_copy_sectors = UINT_MAX;
return blk_validate_limits(lim);
}
@@ -829,6 +857,14 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
t->max_segment_size = min_not_zero(t->max_segment_size,
b->max_segment_size);
+ t->max_copy_hw_sectors =
+ min(t->max_copy_hw_sectors, b->max_copy_hw_sectors);
+ t->max_copy_src_segments =
+ min(t->max_copy_src_segments, b->max_copy_src_segments);
+ t->max_copy_dst_segments =
+ min(t->max_copy_dst_segments, b->max_copy_dst_segments);
+ t->max_copy_sectors = min(t->max_copy_sectors, b->max_copy_sectors);
+
alignment = queue_limit_alignment_offset(b, start);
/* Bottom device has different alignment. Check that it is
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index f22c1f253eb3..8e1e14d1682d 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -325,6 +325,36 @@ queue_max_sectors_store(struct gendisk *disk, const char *page, size_t count,
return 0;
}
+static ssize_t queue_copy_hw_max_show(struct gendisk *disk, char *page)
+{
+ return queue_var_show(
+ disk->queue->limits.max_copy_hw_sectors << SECTOR_SHIFT, page);
+}
+
+static ssize_t queue_copy_max_show(struct gendisk *disk, char *page)
+{
+ return queue_var_show(
+ disk->queue->limits.max_copy_sectors << SECTOR_SHIFT, page);
+}
+
+static int queue_copy_max_store(struct gendisk *disk, const char *page,
+ size_t count, struct queue_limits *lim)
+{
+ unsigned long max_copy_bytes;
+ ssize_t ret;
+
+ ret = queue_var_store(&max_copy_bytes, page, count);
+ if (ret < 0)
+ return ret;
+
+ if ((max_copy_bytes >> SECTOR_SHIFT) > UINT_MAX)
+ return -EINVAL;
+
+ lim->max_user_copy_sectors = max_copy_bytes >> SECTOR_SHIFT;
+
+ return 0;
+}
+
static ssize_t queue_feature_store(struct gendisk *disk, const char *page,
size_t count, struct queue_limits *lim, blk_features_t feature)
{
@@ -652,6 +682,9 @@ QUEUE_RO_ENTRY(queue_nr_zones, "nr_zones");
QUEUE_LIM_RO_ENTRY(queue_max_open_zones, "max_open_zones");
QUEUE_LIM_RO_ENTRY(queue_max_active_zones, "max_active_zones");
+QUEUE_LIM_RO_ENTRY(queue_copy_hw_max, "copy_max_hw_bytes");
+QUEUE_LIM_RW_ENTRY(queue_copy_max, "copy_max_bytes");
+
QUEUE_RW_ENTRY(queue_nomerges, "nomerges");
QUEUE_LIM_RW_ENTRY(queue_iostats_passthrough, "iostats_passthrough");
QUEUE_RW_ENTRY(queue_rq_affinity, "rq_affinity");
@@ -760,6 +793,8 @@ static const struct attribute *const queue_attrs[] = {
&queue_max_hw_wzeroes_unmap_sectors_entry.attr,
&queue_max_wzeroes_unmap_sectors_entry.attr,
&queue_max_zone_append_sectors_entry.attr,
+ &queue_copy_hw_max_entry.attr,
+ &queue_copy_max_entry.attr,
&queue_zone_write_granularity_entry.attr,
&queue_rotational_entry.attr,
&queue_zoned_entry.attr,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 890128cdea1c..8ae64cc0546f 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -353,13 +353,17 @@ typedef unsigned int __bitwise blk_features_t;
#define BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE \
((__force blk_features_t)(1u << 15))
+/* block driver is a stacking block driver that supports copy offloading */
+#define BLK_FEAT_STACKING_COPY_OFFL ((__force blk_features_t)(1u << 16))
+
/*
* Flags automatically inherited when stacking limits.
*/
#define BLK_FEAT_INHERIT_MASK \
(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA | BLK_FEAT_ROTATIONAL | \
BLK_FEAT_STABLE_WRITES | BLK_FEAT_ZONED | \
- BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE)
+ BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE | \
+ BLK_FEAT_STACKING_COPY_OFFL)
/* internal flags in queue_limits.flags */
typedef unsigned int __bitwise blk_flags_t;
@@ -415,6 +419,13 @@ struct queue_limits {
unsigned int atomic_write_hw_unit_max;
unsigned int atomic_write_unit_max;
+ /* copy offloading limits */
unsigned int max_copy_hw_sectors; /* set by block driver */
uint16_t max_copy_src_segments; /* set by block driver */
uint16_t max_copy_dst_segments; /* set by block driver */
+ unsigned int max_user_copy_sectors; /* set via sysfs */
+ unsigned int max_copy_sectors; /* min() of the above */
+
unsigned short max_segments;
unsigned short max_integrity_segments;
unsigned short max_discard_segments;
@@ -1454,6 +1465,11 @@ static inline unsigned int bdev_discard_granularity(struct block_device *bdev)
return bdev_limits(bdev)->discard_granularity;
}
+static inline unsigned int bdev_max_copy_sectors(struct block_device *bdev)
+{
+ return bdev_get_queue(bdev)->limits.max_copy_sectors;
+}
+
static inline unsigned int
bdev_max_secure_erase_sectors(struct block_device *bdev)
{
Thread overview: 13+ messages
2026-04-24 22:41 [PATCH 00/12] Block storage copy offloading Bart Van Assche
2026-04-24 22:41 ` Bart Van Assche [this message]
2026-04-24 22:41 ` [PATCH 02/12] block: Add the REQ_OP_COPY_{SRC,DST} operations Bart Van Assche
2026-04-24 22:41 ` [PATCH 03/12] block: Introduce blkdev_copy_offload() Bart Van Assche
2026-04-24 22:41 ` [PATCH 04/12] block: Add an onloaded copy implementation Bart Van Assche
2026-04-24 22:41 ` [PATCH 05/12] block: Introduce accessor functions for copy offload bios Bart Van Assche
2026-04-24 22:41 ` [PATCH 06/12] fs/read_write: Generalize generic_copy_file_checks() Bart Van Assche
2026-04-24 22:41 ` [PATCH 07/12] fs, block: Add copy_file_range() support for block devices Bart Van Assche
2026-04-24 22:41 ` [PATCH 08/12] nvme: Add copy offloading support Bart Van Assche
2026-04-24 22:41 ` [PATCH 09/12] nvmet: Support the Copy command Bart Van Assche
2026-04-24 22:41 ` [PATCH 10/12] dm: Add support for copy offloading Bart Van Assche
2026-04-24 22:42 ` [PATCH 11/12] dm-linear: Enable " Bart Van Assche
2026-04-24 22:42 ` [PATCH 12/12] null_blk: Add support for REQ_OP_COPY_* Bart Van Assche