* [PATCH v6 0/8] Support limits below the page size
@ 2023-06-12 20:33 Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 1/8] block: Use pr_info() instead of printk(KERN_INFO ...) Bart Van Assche
` (9 more replies)
0 siblings, 10 replies; 24+ messages in thread
From: Bart Van Assche @ 2023-06-12 20:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas, Bart Van Assche
Hi Jens,
We want to improve Android performance by increasing the page size from 4 KiB
to 16 KiB. However, some of the storage controllers we care about do not support
DMA segments larger than 4 KiB. Hence the need to support DMA segments that are
smaller than the size of one virtual memory page. This patch series implements
that support. Please consider this patch series for the next merge window.
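To make the arithmetic concrete, here is a minimal userspace sketch (segments_per_page is a hypothetical helper for illustration, not part of this series):

```c
#include <assert.h>

/* Hypothetical helper (not part of this series): how many DMA segments a
 * controller that caps each segment at max_seg bytes needs to map one
 * virtual memory page of page_size bytes. */
static unsigned int segments_per_page(unsigned int page_size,
                                      unsigned int max_seg)
{
        /* Round up: a final partial segment still needs its own entry. */
        return (page_size + max_seg - 1) / max_seg;
}
```

With 16 KiB pages and a 4 KiB per-segment limit this yields four segments per page, which is why the block layer must accept segment size limits below PAGE_SIZE.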
Thanks,
Bart.
Changes compared to v5:
- Rebased the entire series on top of the block layer for-next branch.
- Dropped patch "block: Add support for small segments in blk_rq_map_user_iov()"
because that patch prepares for a patch that has already been dropped.
- Modified a source code comment in patch 3/9 such that it fits in 80 columns.
Changes compared to v4:
- Fixed the debugfs patch such that the behavior for creating the block
debugfs directory is retained.
- Made the description of patch "Support configuring limits below the page
size" more detailed. Split that patch into two patches.
- Added patch "Use pr_info() instead of printk(KERN_INFO ...)".
Changes compared to v3:
- Removed CONFIG_BLK_SUB_PAGE_SEGMENTS and QUEUE_FLAG_SUB_PAGE_SEGMENTS.
Replaced these by a new member in struct queue_limits and a static branch.
- The static branch that controls whether or not sub-page limits are enabled
is set by the block layer core instead of by block drivers.
- Dropped the patches that are no longer needed (SCSI core and UFS Exynos
driver).
Changes compared to v2:
- For SCSI drivers, only set flag QUEUE_FLAG_SUB_PAGE_SEGMENTS if necessary.
- In the scsi_debug patch, sorted kernel module parameters alphabetically.
Only set flag QUEUE_FLAG_SUB_PAGE_SEGMENTS if necessary.
- Added a patch for the UFS Exynos driver that enables
CONFIG_BLK_SUB_PAGE_SEGMENTS if the page size exceeds 4 KiB.
Changes compared to v1:
- Added a CONFIG variable that controls whether or not small segment support
is enabled.
- Improved patch descriptions.
Bart Van Assche (8):
block: Use pr_info() instead of printk(KERN_INFO ...)
block: Prepare for supporting sub-page limits
block: Support configuring limits below the page size
block: Make sub_page_limit_queues available in debugfs
block: Support submitting passthrough requests with small segments
block: Add support for filesystem requests and small segments
scsi_debug: Support configuring the maximum segment size
null_blk: Support configuring the maximum segment size
block/blk-core.c | 4 ++
block/blk-map.c | 2 +-
block/blk-merge.c | 8 ++-
block/blk-mq-debugfs.c | 9 +++
block/blk-mq-debugfs.h | 6 ++
block/blk-mq.c | 2 +
block/blk-settings.c | 91 +++++++++++++++++++++++++++----
block/blk.h | 39 +++++++++++--
drivers/block/null_blk/main.c | 19 ++++++-
drivers/block/null_blk/null_blk.h | 1 +
drivers/scsi/scsi_debug.c | 4 ++
include/linux/blkdev.h | 2 +
12 files changed, 163 insertions(+), 24 deletions(-)
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v6 1/8] block: Use pr_info() instead of printk(KERN_INFO ...)
2023-06-12 20:33 [PATCH v6 0/8] Support limits below the page size Bart Van Assche
@ 2023-06-12 20:33 ` Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 2/8] block: Prepare for supporting sub-page limits Bart Van Assche
` (8 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Bart Van Assche @ 2023-06-12 20:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas, Bart Van Assche, Ming Lei, Keith Busch
Switch to the modern style of printing kernel messages. Use %u instead
of %d to print unsigned integers.
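As a userspace illustration of why the specifier matters (this demo is not part of the patch): an unsigned value above INT_MAX prints as negative with %d but correctly with %u.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustration only: format an unsigned value with both specifiers.
 * Passing an unsigned int to %d is formally undefined behavior; on common
 * two's-complement ABIs it reinterprets the bits as a negative signed int. */
static void fmt_demo(unsigned int v, char *d_buf, char *u_buf, size_t n)
{
        snprintf(d_buf, n, "%d", v);  /* the old, wrong specifier */
        snprintf(u_buf, n, "%u", v);  /* what the patch switches to */
}
```

For v = 3000000000u the %u buffer holds "3000000000" while the %d buffer starts with a minus sign, hence the %d-to-%u change alongside the pr_info() conversion.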
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Tested-by: Sandeep Dhavale <dhavale@google.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-settings.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 896b4654ab00..1d8d2ae7bdf4 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -127,8 +127,7 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
if ((max_hw_sectors << 9) < PAGE_SIZE) {
max_hw_sectors = 1 << (PAGE_SHIFT - 9);
- printk(KERN_INFO "%s: set to minimum %d\n",
- __func__, max_hw_sectors);
+ pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);
}
max_hw_sectors = round_down(max_hw_sectors,
@@ -248,8 +247,7 @@ void blk_queue_max_segments(struct request_queue *q, unsigned short max_segments
{
if (!max_segments) {
max_segments = 1;
- printk(KERN_INFO "%s: set to minimum %d\n",
- __func__, max_segments);
+ pr_info("%s: set to minimum %u\n", __func__, max_segments);
}
q->limits.max_segments = max_segments;
@@ -285,8 +283,7 @@ void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
{
if (max_size < PAGE_SIZE) {
max_size = PAGE_SIZE;
- printk(KERN_INFO "%s: set to minimum %d\n",
- __func__, max_size);
+ pr_info("%s: set to minimum %u\n", __func__, max_size);
}
/* see blk_queue_virt_boundary() for the explanation */
@@ -740,8 +737,7 @@ void blk_queue_segment_boundary(struct request_queue *q, unsigned long mask)
{
if (mask < PAGE_SIZE - 1) {
mask = PAGE_SIZE - 1;
- printk(KERN_INFO "%s: set to minimum %lx\n",
- __func__, mask);
+ pr_info("%s: set to minimum %lx\n", __func__, mask);
}
q->limits.seg_boundary_mask = mask;
* [PATCH v6 2/8] block: Prepare for supporting sub-page limits
2023-06-12 20:33 [PATCH v6 0/8] Support limits below the page size Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 1/8] block: Use pr_info() instead of printk(KERN_INFO ...) Bart Van Assche
@ 2023-06-12 20:33 ` Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 3/8] block: Support configuring limits below the page size Bart Van Assche
` (7 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Bart Van Assche @ 2023-06-12 20:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas, Bart Van Assche, Ming Lei, Keith Busch
Introduce variables that represent the lower configuration bounds. This
patch does not change any functionality.
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Tested-by: Sandeep Dhavale <dhavale@google.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-settings.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 1d8d2ae7bdf4..95d6e836c4a7 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -123,10 +123,11 @@ EXPORT_SYMBOL(blk_queue_bounce_limit);
void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_sectors)
{
struct queue_limits *limits = &q->limits;
+ unsigned int min_max_hw_sectors = PAGE_SIZE >> SECTOR_SHIFT;
unsigned int max_sectors;
- if ((max_hw_sectors << 9) < PAGE_SIZE) {
- max_hw_sectors = 1 << (PAGE_SHIFT - 9);
+ if (max_hw_sectors < min_max_hw_sectors) {
+ max_hw_sectors = min_max_hw_sectors;
pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);
}
@@ -281,8 +282,10 @@ EXPORT_SYMBOL_GPL(blk_queue_max_discard_segments);
**/
void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
{
- if (max_size < PAGE_SIZE) {
- max_size = PAGE_SIZE;
+ unsigned int min_max_segment_size = PAGE_SIZE;
+
+ if (max_size < min_max_segment_size) {
+ max_size = min_max_segment_size;
pr_info("%s: set to minimum %u\n", __func__, max_size);
}
* [PATCH v6 3/8] block: Support configuring limits below the page size
2023-06-12 20:33 [PATCH v6 0/8] Support limits below the page size Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 1/8] block: Use pr_info() instead of printk(KERN_INFO ...) Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 2/8] block: Prepare for supporting sub-page limits Bart Van Assche
@ 2023-06-12 20:33 ` Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 4/8] block: Make sub_page_limit_queues available in debugfs Bart Van Assche
` (6 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Bart Van Assche @ 2023-06-12 20:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas, Bart Van Assche, Ming Lei, Keith Busch
Allow block drivers to configure the following:
* Maximum number of hardware sectors values smaller than
PAGE_SIZE >> SECTOR_SHIFT. For PAGE_SIZE = 4096 this means that values
below 8 become supported.
* A maximum segment size below the page size. This is most useful
for page sizes above 4096 bytes.
The blk_sub_page_limits static branch will be used in later patches to
avoid affecting the performance of block drivers that support segment
sizes >= PAGE_SIZE and max_hw_sectors >= PAGE_SIZE >> SECTOR_SHIFT.
This patch may change the behavior of existing block drivers from not
working into working: if a block driver calls blk_queue_max_hw_sectors()
or blk_queue_max_segment_size(), this is usually done to configure the
maximum limits the hardware supports. Previously, an attempt to configure
a limit below what the block layer supported caused the block layer to
select a larger value. If the block driver did not support that larger
value, the result could be transfer of other data than requested, a
kernel crash or other undesirable behavior.
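The clamping logic this patch adds to blk_queue_max_segment_size() can be sketched in userspace as follows (a simplified model under illustrative constants; the real function also updates q->limits and flips the static branch):

```c
#include <assert.h>
#include <stdbool.h>

#define MODEL_PAGE_SIZE   4096u
#define MODEL_SECTOR_SIZE 512u

/* Simplified model of the new behavior: requesting a limit below the page
 * size now enables sub-page support and lowers the floor to the sector
 * size, instead of silently rounding the limit up to the page size. */
static unsigned int clamp_max_segment_size(unsigned int max_size,
                                           bool *sub_page_limits)
{
        unsigned int min_max_segment_size = MODEL_PAGE_SIZE;

        if (max_size < min_max_segment_size) {
                *sub_page_limits = true;  /* enable sub-page support */
                min_max_segment_size = MODEL_SECTOR_SIZE;
        }
        if (max_size < min_max_segment_size)
                max_size = min_max_segment_size;  /* clamp, as before */
        return max_size;
}
```

A request for 1024 bytes is now honored (with sub-page support enabled) instead of being rounded up to 4096; only values below the sector size are still clamped.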
Tested-by: Sandeep Dhavale <dhavale@google.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-core.c | 2 ++
block/blk-settings.c | 60 ++++++++++++++++++++++++++++++++++++++++++
block/blk.h | 9 +++++++
include/linux/blkdev.h | 2 ++
4 files changed, 73 insertions(+)
diff --git a/block/blk-core.c b/block/blk-core.c
index 2ae22bebeb3e..73b8b547ecb9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -264,6 +264,8 @@ static void blk_free_queue_rcu(struct rcu_head *rcu_head)
static void blk_free_queue(struct request_queue *q)
{
blk_free_queue_stats(q->stats);
+ blk_disable_sub_page_limits(&q->limits);
+
if (queue_is_mq(q))
blk_mq_release(q);
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 95d6e836c4a7..607f21b99f3c 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -19,6 +19,11 @@
#include "blk-rq-qos.h"
#include "blk-wbt.h"
+/* Protects blk_nr_sub_page_limit_queues and blk_sub_page_limits changes. */
+static DEFINE_MUTEX(blk_sub_page_limit_lock);
+static uint32_t blk_nr_sub_page_limit_queues;
+DEFINE_STATIC_KEY_FALSE(blk_sub_page_limits);
+
void blk_queue_rq_timeout(struct request_queue *q, unsigned int timeout)
{
q->rq_timeout = timeout;
@@ -59,6 +64,7 @@ void blk_set_default_limits(struct queue_limits *lim)
lim->zoned = BLK_ZONED_NONE;
lim->zone_write_granularity = 0;
lim->dma_alignment = 511;
+ lim->sub_page_limits = false;
}
/**
@@ -101,6 +107,50 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
}
EXPORT_SYMBOL(blk_queue_bounce_limit);
+/**
+ * blk_enable_sub_page_limits - enable support for limits below the page size
+ * @lim: request queue limits for which to enable support of these features.
+ *
+ * Enable support for max_segment_size values smaller than PAGE_SIZE and for
+ * max_hw_sectors values below PAGE_SIZE >> SECTOR_SHIFT. Support for these
+ * features is not enabled all the time because of the runtime overhead of these
+ * features.
+ */
+static void blk_enable_sub_page_limits(struct queue_limits *lim)
+{
+ if (lim->sub_page_limits)
+ return;
+
+ lim->sub_page_limits = true;
+
+ mutex_lock(&blk_sub_page_limit_lock);
+ if (++blk_nr_sub_page_limit_queues == 1)
+ static_branch_enable(&blk_sub_page_limits);
+ mutex_unlock(&blk_sub_page_limit_lock);
+}
+
+/**
+ * blk_disable_sub_page_limits - disable support for limits below the page size
+ * @lim: request queue limits for which to disable support of these features.
+ *
+ * Disable support for max_segment_size values smaller than PAGE_SIZE and for
+ * max_hw_sectors values below PAGE_SIZE >> SECTOR_SHIFT. Support for these
+ * features is not enabled all the time because of the runtime overhead of
+ * these features.
+ */
+void blk_disable_sub_page_limits(struct queue_limits *lim)
+{
+ if (!lim->sub_page_limits)
+ return;
+
+ lim->sub_page_limits = false;
+
+ mutex_lock(&blk_sub_page_limit_lock);
+ WARN_ON_ONCE(blk_nr_sub_page_limit_queues <= 0);
+ if (--blk_nr_sub_page_limit_queues == 0)
+ static_branch_disable(&blk_sub_page_limits);
+ mutex_unlock(&blk_sub_page_limit_lock);
+}
+
/**
* blk_queue_max_hw_sectors - set max sectors for a request for this queue
* @q: the request queue for the device
@@ -126,6 +176,11 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
unsigned int min_max_hw_sectors = PAGE_SIZE >> SECTOR_SHIFT;
unsigned int max_sectors;
+ if (max_hw_sectors < min_max_hw_sectors) {
+ blk_enable_sub_page_limits(limits);
+ min_max_hw_sectors = 1;
+ }
+
if (max_hw_sectors < min_max_hw_sectors) {
max_hw_sectors = min_max_hw_sectors;
pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);
@@ -284,6 +339,11 @@ void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
{
unsigned int min_max_segment_size = PAGE_SIZE;
+ if (max_size < min_max_segment_size) {
+ blk_enable_sub_page_limits(&q->limits);
+ min_max_segment_size = SECTOR_SIZE;
+ }
+
if (max_size < min_max_segment_size) {
max_size = min_max_segment_size;
pr_info("%s: set to minimum %u\n", __func__, max_size);
diff --git a/block/blk.h b/block/blk.h
index 768852a84fef..d37ec737e05e 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -13,6 +13,7 @@ struct elevator_type;
#define BLK_MAX_TIMEOUT (5 * HZ)
extern struct dentry *blk_debugfs_root;
+DECLARE_STATIC_KEY_FALSE(blk_sub_page_limits);
struct blk_flush_queue {
unsigned int flush_pending_idx:1;
@@ -32,6 +33,14 @@ struct blk_flush_queue *blk_alloc_flush_queue(int node, int cmd_size,
gfp_t flags);
void blk_free_flush_queue(struct blk_flush_queue *q);
+static inline bool blk_queue_sub_page_limits(const struct queue_limits *lim)
+{
+ return static_branch_unlikely(&blk_sub_page_limits) &&
+ lim->sub_page_limits;
+}
+
+void blk_disable_sub_page_limits(struct queue_limits *q);
+
void blk_freeze_queue(struct request_queue *q);
void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
void blk_queue_start_drain(struct request_queue *q);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ed44a997f629..54360ef85109 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -324,6 +324,8 @@ struct queue_limits {
* due to possible offsets.
*/
unsigned int dma_alignment;
+
+ bool sub_page_limits;
};
typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx,
* [PATCH v6 4/8] block: Make sub_page_limit_queues available in debugfs
2023-06-12 20:33 [PATCH v6 0/8] Support limits below the page size Bart Van Assche
` (2 preceding siblings ...)
2023-06-12 20:33 ` [PATCH v6 3/8] block: Support configuring limits below the page size Bart Van Assche
@ 2023-06-12 20:33 ` Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 5/8] block: Support submitting passthrough requests with small segments Bart Van Assche
` (5 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Bart Van Assche @ 2023-06-12 20:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas, Bart Van Assche, Ming Lei, Keith Busch
This new debugfs attribute makes it easier to verify the code that tracks
how many queues require limits below the page size.
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-core.c | 2 ++
block/blk-mq-debugfs.c | 9 +++++++++
block/blk-mq-debugfs.h | 6 ++++++
block/blk-settings.c | 8 ++++++++
block/blk.h | 1 +
5 files changed, 26 insertions(+)
diff --git a/block/blk-core.c b/block/blk-core.c
index 73b8b547ecb9..ef6173ad4731 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -45,6 +45,7 @@
#include <trace/events/block.h>
#include "blk.h"
+#include "blk-mq-debugfs.h"
#include "blk-mq-sched.h"
#include "blk-pm.h"
#include "blk-cgroup.h"
@@ -1204,6 +1205,7 @@ int __init blk_dev_init(void)
sizeof(struct request_queue), 0, SLAB_PANIC, NULL);
blk_debugfs_root = debugfs_create_dir("block", NULL);
+ blk_mq_debugfs_init();
return 0;
}
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index c3b5930106b2..5649c9e3719d 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -846,3 +846,12 @@ void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx)
debugfs_remove_recursive(hctx->sched_debugfs_dir);
hctx->sched_debugfs_dir = NULL;
}
+
+DEFINE_DEBUGFS_ATTRIBUTE(blk_sub_page_limit_queues_fops,
+ blk_sub_page_limit_queues_get, NULL, "%llu\n");
+
+void blk_mq_debugfs_init(void)
+{
+ debugfs_create_file("sub_page_limit_queues", 0400, blk_debugfs_root,
+ NULL, &blk_sub_page_limit_queues_fops);
+}
diff --git a/block/blk-mq-debugfs.h b/block/blk-mq-debugfs.h
index 9c7d4b6117d4..7942119051f5 100644
--- a/block/blk-mq-debugfs.h
+++ b/block/blk-mq-debugfs.h
@@ -17,6 +17,8 @@ struct blk_mq_debugfs_attr {
const struct seq_operations *seq_ops;
};
+void blk_mq_debugfs_init(void);
+
int __blk_mq_debugfs_rq_show(struct seq_file *m, struct request *rq);
int blk_mq_debugfs_rq_show(struct seq_file *m, void *v);
@@ -36,6 +38,10 @@ void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx);
void blk_mq_debugfs_register_rqos(struct rq_qos *rqos);
void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos);
#else
+static inline void blk_mq_debugfs_init(void)
+{
+}
+
static inline void blk_mq_debugfs_register(struct request_queue *q)
{
}
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 607f21b99f3c..c1c4988cc575 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -107,6 +107,14 @@ void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
}
EXPORT_SYMBOL(blk_queue_bounce_limit);
+/* For debugfs. */
+int blk_sub_page_limit_queues_get(void *data, u64 *val)
+{
+ *val = READ_ONCE(blk_nr_sub_page_limit_queues);
+
+ return 0;
+}
+
/**
* blk_enable_sub_page_limits - enable support for limits below the page size
* @lim: request queue limits for which to enable support of these features.
diff --git a/block/blk.h b/block/blk.h
index d37ec737e05e..065449e7d0bd 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -39,6 +39,7 @@ static inline bool blk_queue_sub_page_limits(const struct queue_limits *lim)
lim->sub_page_limits;
}
+int blk_sub_page_limit_queues_get(void *data, u64 *val);
void blk_disable_sub_page_limits(struct queue_limits *q);
void blk_freeze_queue(struct request_queue *q);
* [PATCH v6 5/8] block: Support submitting passthrough requests with small segments
2023-06-12 20:33 [PATCH v6 0/8] Support limits below the page size Bart Van Assche
` (3 preceding siblings ...)
2023-06-12 20:33 ` [PATCH v6 4/8] block: Make sub_page_limit_queues available in debugfs Bart Van Assche
@ 2023-06-12 20:33 ` Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 6/8] block: Add support for filesystem requests and " Bart Van Assche
` (4 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Bart Van Assche @ 2023-06-12 20:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas, Bart Van Assche, Ming Lei, Keith Busch
If the segment size is smaller than the page size, there may be multiple
segments per bvec even if the bvec only contains a single page. Hence this
patch.
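The segment count computed by the blk_segments() helper added here can be mirrored in userspace (a sketch; the kernel version also short-circuits via the static branch when sub-page limits are disabled, and uses a shift when the limit is a power of two):

```c
#include <assert.h>

/* Userspace mirror of the count computed by blk_segments(): number of DMA
 * segments needed for a bvec of 'bytes' bytes when each segment is capped
 * at 'max_segment_size' bytes. */
static unsigned int nr_segments(unsigned int bytes,
                                unsigned int max_segment_size)
{
        if (bytes <= max_segment_size)
                return 1;
        /* Round up, matching (bytes + mss - 1) / mss in the patch. */
        return (bytes + max_segment_size - 1) / max_segment_size;
}
```

So a single 4 KiB page with a 512-byte segment limit counts as eight segments, which is what blk_rq_append_bio() now accounts for instead of counting one segment per bvec.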
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-map.c | 2 +-
block/blk.h | 18 ++++++++++++++++++
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/block/blk-map.c b/block/blk-map.c
index 3551c3ff17cf..c1d92b0dcc5d 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -535,7 +535,7 @@ int blk_rq_append_bio(struct request *rq, struct bio *bio)
unsigned int nr_segs = 0;
bio_for_each_bvec(bv, bio, iter)
- nr_segs++;
+ nr_segs += blk_segments(&rq->q->limits, bv.bv_len);
if (!rq->bio) {
blk_rq_bio_prep(rq, bio, nr_segs);
diff --git a/block/blk.h b/block/blk.h
index 065449e7d0bd..18b898a38c72 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -86,6 +86,24 @@ struct bio_vec *bvec_alloc(mempool_t *pool, unsigned short *nr_vecs,
gfp_t gfp_mask);
void bvec_free(mempool_t *pool, struct bio_vec *bv, unsigned short nr_vecs);
+/* Number of DMA segments required to transfer @bytes data. */
+static inline unsigned int blk_segments(const struct queue_limits *limits,
+ unsigned int bytes)
+{
+ if (!blk_queue_sub_page_limits(limits))
+ return 1;
+
+ {
+ const unsigned int mss = limits->max_segment_size;
+
+ if (bytes <= mss)
+ return 1;
+ if (is_power_of_2(mss))
+ return round_up(bytes, mss) >> ilog2(mss);
+ return (bytes + mss - 1) / mss;
+ }
+}
+
static inline bool biovec_phys_mergeable(struct request_queue *q,
struct bio_vec *vec1, struct bio_vec *vec2)
{
* [PATCH v6 6/8] block: Add support for filesystem requests and small segments
2023-06-12 20:33 [PATCH v6 0/8] Support limits below the page size Bart Van Assche
` (4 preceding siblings ...)
2023-06-12 20:33 ` [PATCH v6 5/8] block: Support submitting passthrough requests with small segments Bart Van Assche
@ 2023-06-12 20:33 ` Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 7/8] scsi_debug: Support configuring the maximum segment size Bart Van Assche
` (3 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Bart Van Assche @ 2023-06-12 20:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas, Bart Van Assche, Ming Lei, Keith Busch
Add support to the bio splitting code and to the bio submission code for
bios with segments smaller than the page size.
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Tested-by: Sandeep Dhavale <dhavale@google.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
block/blk-merge.c | 8 ++++++--
block/blk-mq.c | 2 ++
block/blk.h | 11 +++++------
3 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 65e75efa9bd3..0b28f6df07bc 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -294,7 +294,8 @@ struct bio *bio_split_rw(struct bio *bio, const struct queue_limits *lim,
if (nsegs < lim->max_segments &&
bytes + bv.bv_len <= max_bytes &&
bv.bv_offset + bv.bv_len <= PAGE_SIZE) {
- nsegs++;
+ /* single-page bvec optimization */
+ nsegs += blk_segments(lim, bv.bv_len);
bytes += bv.bv_len;
} else {
if (bvec_split_segs(lim, &bv, &nsegs, &bytes,
@@ -544,7 +545,10 @@ static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
__blk_segment_map_sg_merge(q, &bvec, &bvprv, sg))
goto next_bvec;
- if (bvec.bv_offset + bvec.bv_len <= PAGE_SIZE)
+ if (bvec.bv_offset + bvec.bv_len <= PAGE_SIZE &&
+ (!blk_queue_sub_page_limits(&q->limits) ||
+ bvec.bv_len <= q->limits.max_segment_size))
+ /* single-segment bvec optimization */
nsegs += __blk_bvec_map_sg(bvec, sglist, sg);
else
nsegs += blk_bvec_map_sg(q, &bvec, sglist, sg);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 1749f5890606..ad787c14ea09 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2936,6 +2936,8 @@ void blk_mq_submit_bio(struct bio *bio)
bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
if (!bio)
return;
+ } else if (bio->bi_vcnt == 1) {
+ nr_segs = blk_segments(&q->limits, bio->bi_io_vec[0].bv_len);
}
if (!bio_integrity_prep(bio))
diff --git a/block/blk.h b/block/blk.h
index 18b898a38c72..e905cc6364fa 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -332,13 +332,12 @@ static inline bool bio_may_exceed_limits(struct bio *bio,
}
/*
- * All drivers must accept single-segments bios that are <= PAGE_SIZE.
- * This is a quick and dirty check that relies on the fact that
- * bi_io_vec[0] is always valid if a bio has data. The check might
- * lead to occasional false negatives when bios are cloned, but compared
- * to the performance impact of cloned bios themselves the loop below
- * doesn't matter anyway.
+ * Check whether bio splitting should be performed. This check may
+ * trigger the bio splitting code even if splitting is not necessary.
*/
+ if (blk_queue_sub_page_limits(lim) && bio->bi_io_vec &&
+ bio->bi_io_vec->bv_len > lim->max_segment_size)
+ return true;
return lim->chunk_sectors || bio->bi_vcnt != 1 ||
bio->bi_io_vec->bv_len + bio->bi_io_vec->bv_offset > PAGE_SIZE;
}
* [PATCH v6 7/8] scsi_debug: Support configuring the maximum segment size
2023-06-12 20:33 [PATCH v6 0/8] Support limits below the page size Bart Van Assche
` (5 preceding siblings ...)
2023-06-12 20:33 ` [PATCH v6 6/8] block: Add support for filesystem requests and " Bart Van Assche
@ 2023-06-12 20:33 ` Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 8/8] null_blk: " Bart Van Assche
` (2 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Bart Van Assche @ 2023-06-12 20:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas, Bart Van Assche, Douglas Gilbert,
Martin K . Petersen
Add a kernel module parameter for configuring the maximum segment size.
This patch enables testing SCSI support for segments smaller than the
page size.
Acked-by: Douglas Gilbert <dgilbert@interlog.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
drivers/scsi/scsi_debug.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 8c58128ad32a..e951c622bf64 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -752,6 +752,7 @@ static int sdebug_host_max_queue; /* per host */
static int sdebug_lowest_aligned = DEF_LOWEST_ALIGNED;
static int sdebug_max_luns = DEF_MAX_LUNS;
static int sdebug_max_queue = SDEBUG_CANQUEUE; /* per submit queue */
+static unsigned int sdebug_max_segment_size = BLK_MAX_SEGMENT_SIZE;
static unsigned int sdebug_medium_error_start = OPT_MEDIUM_ERR_ADDR;
static int sdebug_medium_error_count = OPT_MEDIUM_ERR_NUM;
static int sdebug_ndelay = DEF_NDELAY; /* if > 0 then unit is nanoseconds */
@@ -5735,6 +5736,7 @@ module_param_named(lowest_aligned, sdebug_lowest_aligned, int, S_IRUGO);
module_param_named(lun_format, sdebug_lun_am_i, int, S_IRUGO | S_IWUSR);
module_param_named(max_luns, sdebug_max_luns, int, S_IRUGO | S_IWUSR);
module_param_named(max_queue, sdebug_max_queue, int, S_IRUGO | S_IWUSR);
+module_param_named(max_segment_size, sdebug_max_segment_size, uint, S_IRUGO);
module_param_named(medium_error_count, sdebug_medium_error_count, int,
S_IRUGO | S_IWUSR);
module_param_named(medium_error_start, sdebug_medium_error_start, int,
@@ -5811,6 +5813,7 @@ MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)");
MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method");
MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
MODULE_PARM_DESC(max_queue, "max number of queued commands (1 to max(def))");
+MODULE_PARM_DESC(max_segment_size, "max bytes in a single segment");
MODULE_PARM_DESC(medium_error_count, "count of sectors to return follow on MEDIUM error");
MODULE_PARM_DESC(medium_error_start, "starting sector number to return MEDIUM error");
MODULE_PARM_DESC(ndelay, "response delay in nanoseconds (def=0 -> ignore)");
@@ -7723,6 +7726,7 @@ static int sdebug_driver_probe(struct device *dev)
sdebug_driver_template.can_queue = sdebug_max_queue;
sdebug_driver_template.cmd_per_lun = sdebug_max_queue;
+ sdebug_driver_template.max_segment_size = sdebug_max_segment_size;
if (!sdebug_clustering)
sdebug_driver_template.dma_boundary = PAGE_SIZE - 1;
* [PATCH v6 8/8] null_blk: Support configuring the maximum segment size
2023-06-12 20:33 [PATCH v6 0/8] Support limits below the page size Bart Van Assche
` (6 preceding siblings ...)
2023-06-12 20:33 ` [PATCH v6 7/8] scsi_debug: Support configuring the maximum segment size Bart Van Assche
@ 2023-06-12 20:33 ` Bart Van Assche
2023-06-12 22:17 ` Damien Le Moal
2023-06-15 2:01 ` [PATCH v6 0/8] Support limits below the page size Bart Van Assche
2023-06-15 2:22 ` Jens Axboe
9 siblings, 1 reply; 24+ messages in thread
From: Bart Van Assche @ 2023-06-12 20:33 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas, Bart Van Assche, Chaitanya Kulkarni, Ming Lei,
Damien Le Moal
Add support for configuring the maximum segment size.
Add support for segments smaller than the page size.
This patch enables testing segments smaller than the page size with a
driver that does not call blk_rq_map_sg().
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
drivers/block/null_blk/main.c | 19 ++++++++++++++++---
drivers/block/null_blk/null_blk.h | 1 +
2 files changed, 17 insertions(+), 3 deletions(-)
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index b3fedafe301e..9c9098f1bd52 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -157,6 +157,10 @@ static int g_max_sectors;
module_param_named(max_sectors, g_max_sectors, int, 0444);
MODULE_PARM_DESC(max_sectors, "Maximum size of a command (in 512B sectors)");
+static unsigned int g_max_segment_size = BLK_MAX_SEGMENT_SIZE;
+module_param_named(max_segment_size, g_max_segment_size, int, 0444);
+MODULE_PARM_DESC(max_segment_size, "Maximum size of a segment in bytes");
+
static unsigned int nr_devices = 1;
module_param(nr_devices, uint, 0444);
MODULE_PARM_DESC(nr_devices, "Number of devices to register");
@@ -409,6 +413,7 @@ NULLB_DEVICE_ATTR(home_node, uint, NULL);
NULLB_DEVICE_ATTR(queue_mode, uint, NULL);
NULLB_DEVICE_ATTR(blocksize, uint, NULL);
NULLB_DEVICE_ATTR(max_sectors, uint, NULL);
+NULLB_DEVICE_ATTR(max_segment_size, uint, NULL);
NULLB_DEVICE_ATTR(irqmode, uint, NULL);
NULLB_DEVICE_ATTR(hw_queue_depth, uint, NULL);
NULLB_DEVICE_ATTR(index, uint, NULL);
@@ -550,6 +555,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
&nullb_device_attr_queue_mode,
&nullb_device_attr_blocksize,
&nullb_device_attr_max_sectors,
+ &nullb_device_attr_max_segment_size,
&nullb_device_attr_irqmode,
&nullb_device_attr_hw_queue_depth,
&nullb_device_attr_index,
@@ -652,7 +658,8 @@ static ssize_t memb_group_features_show(struct config_item *item, char *page)
return snprintf(page, PAGE_SIZE,
"badblocks,blocking,blocksize,cache_size,"
"completion_nsec,discard,home_node,hw_queue_depth,"
- "irqmode,max_sectors,mbps,memory_backed,no_sched,"
+ "irqmode,max_sectors,max_segment_size,mbps,"
+ "memory_backed,no_sched,"
"poll_queues,power,queue_mode,shared_tag_bitmap,size,"
"submit_queues,use_per_node_hctx,virt_boundary,zoned,"
"zone_capacity,zone_max_active,zone_max_open,"
@@ -722,6 +729,7 @@ static struct nullb_device *null_alloc_dev(void)
dev->queue_mode = g_queue_mode;
dev->blocksize = g_bs;
dev->max_sectors = g_max_sectors;
+ dev->max_segment_size = g_max_segment_size;
dev->irqmode = g_irqmode;
dev->hw_queue_depth = g_hw_queue_depth;
dev->blocking = g_blocking;
@@ -1248,6 +1256,8 @@ static int null_transfer(struct nullb *nullb, struct page *page,
unsigned int valid_len = len;
int err = 0;
+ WARN_ONCE(len > dev->max_segment_size, "%u > %u\n", len,
+ dev->max_segment_size);
if (!is_write) {
if (dev->zoned)
valid_len = null_zone_valid_read_len(nullb,
@@ -1283,7 +1293,8 @@ static int null_handle_rq(struct nullb_cmd *cmd)
spin_lock_irq(&nullb->lock);
rq_for_each_segment(bvec, rq, iter) {
- len = bvec.bv_len;
+ len = min(bvec.bv_len, nullb->dev->max_segment_size);
+ bvec.bv_len = len;
err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
op_is_write(req_op(rq)), sector,
rq->cmd_flags & REQ_FUA);
@@ -1310,7 +1321,8 @@ static int null_handle_bio(struct nullb_cmd *cmd)
spin_lock_irq(&nullb->lock);
bio_for_each_segment(bvec, bio, iter) {
- len = bvec.bv_len;
+ len = min(bvec.bv_len, nullb->dev->max_segment_size);
+ bvec.bv_len = len;
err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
op_is_write(bio_op(bio)), sector,
bio->bi_opf & REQ_FUA);
@@ -2161,6 +2173,7 @@ static int null_add_dev(struct nullb_device *dev)
dev->max_sectors = queue_max_hw_sectors(nullb->q);
dev->max_sectors = min(dev->max_sectors, BLK_DEF_MAX_SECTORS);
blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
+ blk_queue_max_segment_size(nullb->q, dev->max_segment_size);
if (dev->virt_boundary)
blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
index 929f659dd255..7bf80b0035f5 100644
--- a/drivers/block/null_blk/null_blk.h
+++ b/drivers/block/null_blk/null_blk.h
@@ -107,6 +107,7 @@ struct nullb_device {
unsigned int queue_mode; /* block interface */
unsigned int blocksize; /* block size */
unsigned int max_sectors; /* Max sectors per command */
+ unsigned int max_segment_size; /* Max size of a single DMA segment. */
unsigned int irqmode; /* IRQ completion handler */
unsigned int hw_queue_depth; /* queue depth */
unsigned int index; /* index of the disk, only valid with a disk */
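Assuming this patch is applied, the new limit can be exercised either through
the module parameter or per device via configfs; the paths below follow the
standard null_blk configfs layout and the values are only illustrative:

```shell
# Module-parameter route: applies to the devices created at load time.
modprobe null_blk nr_devices=1 max_segment_size=4096

# Configfs route: per-device configuration.
mkdir /sys/kernel/config/nullb/nullb1
echo 4096 > /sys/kernel/config/nullb/nullb1/max_segment_size
echo 1 > /sys/kernel/config/nullb/nullb1/power

# Check the limit the block layer reports for the device:
cat /sys/block/nullb1/queue/max_segment_size
```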
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v6 8/8] null_blk: Support configuring the maximum segment size
2023-06-12 20:33 ` [PATCH v6 8/8] null_blk: " Bart Van Assche
@ 2023-06-12 22:17 ` Damien Le Moal
2023-06-13 0:44 ` Bart Van Assche
0 siblings, 1 reply; 24+ messages in thread
From: Damien Le Moal @ 2023-06-12 22:17 UTC (permalink / raw)
To: Bart Van Assche, Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas, Chaitanya Kulkarni, Ming Lei, Damien Le Moal
On 6/13/23 05:33, Bart Van Assche wrote:
> Add support for configuring the maximum segment size.
>
> Add support for segments smaller than the page size.
>
> This patch enables testing segments smaller than the page size with a
> driver that does not call blk_rq_map_sg().
>
> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
> Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Ming Lei <ming.lei@redhat.com>
> Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
> drivers/block/null_blk/main.c | 19 ++++++++++++++++---
> drivers/block/null_blk/null_blk.h | 1 +
> 2 files changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> index b3fedafe301e..9c9098f1bd52 100644
> --- a/drivers/block/null_blk/main.c
> +++ b/drivers/block/null_blk/main.c
> @@ -157,6 +157,10 @@ static int g_max_sectors;
> module_param_named(max_sectors, g_max_sectors, int, 0444);
> MODULE_PARM_DESC(max_sectors, "Maximum size of a command (in 512B sectors)");
>
> +static unsigned int g_max_segment_size = BLK_MAX_SEGMENT_SIZE;
> +module_param_named(max_segment_size, g_max_segment_size, int, 0444);
> +MODULE_PARM_DESC(max_segment_size, "Maximum size of a segment in bytes");
> +
> static unsigned int nr_devices = 1;
> module_param(nr_devices, uint, 0444);
> MODULE_PARM_DESC(nr_devices, "Number of devices to register");
> @@ -409,6 +413,7 @@ NULLB_DEVICE_ATTR(home_node, uint, NULL);
> NULLB_DEVICE_ATTR(queue_mode, uint, NULL);
> NULLB_DEVICE_ATTR(blocksize, uint, NULL);
> NULLB_DEVICE_ATTR(max_sectors, uint, NULL);
> +NULLB_DEVICE_ATTR(max_segment_size, uint, NULL);
> NULLB_DEVICE_ATTR(irqmode, uint, NULL);
> NULLB_DEVICE_ATTR(hw_queue_depth, uint, NULL);
> NULLB_DEVICE_ATTR(index, uint, NULL);
> @@ -550,6 +555,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
> &nullb_device_attr_queue_mode,
> &nullb_device_attr_blocksize,
> &nullb_device_attr_max_sectors,
> + &nullb_device_attr_max_segment_size,
> &nullb_device_attr_irqmode,
> &nullb_device_attr_hw_queue_depth,
> &nullb_device_attr_index,
> @@ -652,7 +658,8 @@ static ssize_t memb_group_features_show(struct config_item *item, char *page)
> return snprintf(page, PAGE_SIZE,
> "badblocks,blocking,blocksize,cache_size,"
> "completion_nsec,discard,home_node,hw_queue_depth,"
> - "irqmode,max_sectors,mbps,memory_backed,no_sched,"
> + "irqmode,max_sectors,max_segment_size,mbps,"
> + "memory_backed,no_sched,"
> "poll_queues,power,queue_mode,shared_tag_bitmap,size,"
> "submit_queues,use_per_node_hctx,virt_boundary,zoned,"
> "zone_capacity,zone_max_active,zone_max_open,"
> @@ -722,6 +729,7 @@ static struct nullb_device *null_alloc_dev(void)
> dev->queue_mode = g_queue_mode;
> dev->blocksize = g_bs;
> dev->max_sectors = g_max_sectors;
> + dev->max_segment_size = g_max_segment_size;
> dev->irqmode = g_irqmode;
> dev->hw_queue_depth = g_hw_queue_depth;
> dev->blocking = g_blocking;
> @@ -1248,6 +1256,8 @@ static int null_transfer(struct nullb *nullb, struct page *page,
> unsigned int valid_len = len;
> int err = 0;
>
> + WARN_ONCE(len > dev->max_segment_size, "%u > %u\n", len,
> + dev->max_segment_size);
> if (!is_write) {
> if (dev->zoned)
> valid_len = null_zone_valid_read_len(nullb,
> @@ -1283,7 +1293,8 @@ static int null_handle_rq(struct nullb_cmd *cmd)
>
> spin_lock_irq(&nullb->lock);
> rq_for_each_segment(bvec, rq, iter) {
> - len = bvec.bv_len;
> + len = min(bvec.bv_len, nullb->dev->max_segment_size);
> + bvec.bv_len = len;
I am still confused by this change... Why is it necessary ? If max_segment_size
is set correctly, how can we ever get a BIO with a bvec length exceeding that
maximum ? If that is the case, aren't we missing a bio_split() somewhere ?
> err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
> op_is_write(req_op(rq)), sector,
> rq->cmd_flags & REQ_FUA);
> @@ -1310,7 +1321,8 @@ static int null_handle_bio(struct nullb_cmd *cmd)
>
> spin_lock_irq(&nullb->lock);
> bio_for_each_segment(bvec, bio, iter) {
> - len = bvec.bv_len;
> + len = min(bvec.bv_len, nullb->dev->max_segment_size);
> + bvec.bv_len = len;
> err = null_transfer(nullb, bvec.bv_page, len, bvec.bv_offset,
> op_is_write(bio_op(bio)), sector,
> bio->bi_opf & REQ_FUA);
> @@ -2161,6 +2173,7 @@ static int null_add_dev(struct nullb_device *dev)
> dev->max_sectors = queue_max_hw_sectors(nullb->q);
> dev->max_sectors = min(dev->max_sectors, BLK_DEF_MAX_SECTORS);
> blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
> + blk_queue_max_segment_size(nullb->q, dev->max_segment_size);
>
> if (dev->virt_boundary)
> blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
> diff --git a/drivers/block/null_blk/null_blk.h b/drivers/block/null_blk/null_blk.h
> index 929f659dd255..7bf80b0035f5 100644
> --- a/drivers/block/null_blk/null_blk.h
> +++ b/drivers/block/null_blk/null_blk.h
> @@ -107,6 +107,7 @@ struct nullb_device {
> unsigned int queue_mode; /* block interface */
> unsigned int blocksize; /* block size */
> unsigned int max_sectors; /* Max sectors per command */
> + unsigned int max_segment_size; /* Max size of a single DMA segment. */
> unsigned int irqmode; /* IRQ completion handler */
> unsigned int hw_queue_depth; /* queue depth */
> unsigned int index; /* index of the disk, only valid with a disk */
--
Damien Le Moal
Western Digital Research
* Re: [PATCH v6 8/8] null_blk: Support configuring the maximum segment size
2023-06-12 22:17 ` Damien Le Moal
@ 2023-06-13 0:44 ` Bart Van Assche
2023-06-13 6:47 ` Damien Le Moal
2023-06-13 6:52 ` Christoph Hellwig
0 siblings, 2 replies; 24+ messages in thread
From: Bart Van Assche @ 2023-06-13 0:44 UTC (permalink / raw)
To: Damien Le Moal, Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas, Chaitanya Kulkarni, Ming Lei, Damien Le Moal
On 6/12/23 15:17, Damien Le Moal wrote:
> On 6/13/23 05:33, Bart Van Assche wrote:
>> @@ -1283,7 +1293,8 @@ static int null_handle_rq(struct nullb_cmd *cmd)
>>
>> spin_lock_irq(&nullb->lock);
>> rq_for_each_segment(bvec, rq, iter) {
>> - len = bvec.bv_len;
>> + len = min(bvec.bv_len, nullb->dev->max_segment_size);
>> + bvec.bv_len = len;
>
> I am still confused by this change... Why is it necessary ? If max_segment_size
> is set correctly, how can we ever get a BIO with a bvec length exceeding that
> maximum ? If that is the case, aren't we missing a bio_split() somewhere ?
Hi Damien,
bio_split() enforces the max_sectors limit but not the max_segment_size
limit. __blk_rq_map_sg() enforces the max_segment_size limit. null_blk
does not call __blk_rq_map_sg(). Hence the above code to enforce the
max_segment_size limit.
Thanks,
Bart.
* Re: [PATCH v6 8/8] null_blk: Support configuring the maximum segment size
2023-06-13 0:44 ` Bart Van Assche
@ 2023-06-13 6:47 ` Damien Le Moal
2023-06-13 6:52 ` Christoph Hellwig
1 sibling, 0 replies; 24+ messages in thread
From: Damien Le Moal @ 2023-06-13 6:47 UTC (permalink / raw)
To: Bart Van Assche, Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas, Chaitanya Kulkarni, Ming Lei, Damien Le Moal
On 6/13/23 09:44, Bart Van Assche wrote:
> On 6/12/23 15:17, Damien Le Moal wrote:
>> On 6/13/23 05:33, Bart Van Assche wrote:
>>> @@ -1283,7 +1293,8 @@ static int null_handle_rq(struct nullb_cmd *cmd)
>>>
>>> spin_lock_irq(&nullb->lock);
>>> rq_for_each_segment(bvec, rq, iter) {
>>> - len = bvec.bv_len;
>>> + len = min(bvec.bv_len, nullb->dev->max_segment_size);
>>> + bvec.bv_len = len;
>>
>> I am still confused by this change... Why is it necessary ? If max_segment_size
>> is set correctly, how can we ever get a BIO with a bvec length exceeding that
>> maximum ? If that is the case, aren't we missing a bio_split() somewhere ?
>
> Hi Damien,
>
> bio_split() enforces the max_sectors limit but not the max_segment_size
> limit. __blk_rq_map_sg() enforces the max_segment_size limit. null_blk
> does not call __blk_rq_map_sg(). Hence the above code to enforce the
> max_segment_size limit.
OK. That is where I was confused :)
Thanks !
>
> Thanks,
>
> Bart.
>
--
Damien Le Moal
Western Digital Research
* Re: [PATCH v6 8/8] null_blk: Support configuring the maximum segment size
2023-06-13 0:44 ` Bart Van Assche
2023-06-13 6:47 ` Damien Le Moal
@ 2023-06-13 6:52 ` Christoph Hellwig
1 sibling, 0 replies; 24+ messages in thread
From: Christoph Hellwig @ 2023-06-13 6:52 UTC (permalink / raw)
To: Bart Van Assche
Cc: Damien Le Moal, Jens Axboe, linux-block, Christoph Hellwig,
Luis Chamberlain, Sandeep Dhavale, Juan Yescas,
Chaitanya Kulkarni, Ming Lei, Damien Le Moal
On Mon, Jun 12, 2023 at 05:44:51PM -0700, Bart Van Assche wrote:
>
> Hi Damien,
>
> bio_split() enforces the max_sectors limit but not the max_segment_size
> limit.
bio_split() doesn't enforce any limit, but it's also not used by
null_blk or blk-mq.
bio_split_to_limits enforces max_segment_size in bvec_split_segs.
* Re: [PATCH v6 0/8] Support limits below the page size
2023-06-12 20:33 [PATCH v6 0/8] Support limits below the page size Bart Van Assche
` (7 preceding siblings ...)
2023-06-12 20:33 ` [PATCH v6 8/8] null_blk: " Bart Van Assche
@ 2023-06-15 2:01 ` Bart Van Assche
2023-06-15 2:22 ` Jens Axboe
9 siblings, 0 replies; 24+ messages in thread
From: Bart Van Assche @ 2023-06-15 2:01 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas
On 6/12/23 13:33, Bart Van Assche wrote:
> We want to improve Android performance by increasing the page size from 4 KiB
> to 16 KiB. However, some of the storage controllers we care about do not support
> DMA segments larger than 4 KiB. Hence the need to support DMA segments that are
> smaller than the size of one virtual memory page. This patch series implements
> that support. Please consider this patch series for the next merge window.
(replying to my own email)
Hi Jens,
Can you please take a look at this patch series? I think it is ready to
be merged.
Thanks,
Bart.
* Re: [PATCH v6 0/8] Support limits below the page size
2023-06-12 20:33 [PATCH v6 0/8] Support limits below the page size Bart Van Assche
` (8 preceding siblings ...)
2023-06-15 2:01 ` [PATCH v6 0/8] Support limits below the page size Bart Van Assche
@ 2023-06-15 2:22 ` Jens Axboe
2023-06-15 4:15 ` Christoph Hellwig
2023-06-15 14:16 ` Bart Van Assche
9 siblings, 2 replies; 24+ messages in thread
From: Jens Axboe @ 2023-06-15 2:22 UTC (permalink / raw)
To: Bart Van Assche
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas
On 6/12/23 2:33 PM, Bart Van Assche wrote:
> Hi Jens,
>
> We want to improve Android performance by increasing the page size
> from 4 KiB to 16 KiB. However, some of the storage controllers we care
> about do not support DMA segments larger than 4 KiB. Hence the need to
> support DMA segments that are smaller than the size of one virtual
> memory page. This patch series implements that support. Please
> consider this patch series for the next merge window.
I'm usually a fan of putting code in the core so we don't have to in
drivers, that's how most of the block layer is designed. But this seems
niche enough that perhaps it's worth considering just remapping these in
the driver? It's peppering changes all over delicate parts of the core
for cases that 99.9% of users don't need to, and shouldn't have to, worry about.
I will say that I do think the patches do look better than they did in
earlier versions, however.
Maybe we've already discussed this before, but let's please have the
discussion again. Because I'd really love to avoid this code, if at all
possible.
--
Jens Axboe
* Re: [PATCH v6 0/8] Support limits below the page size
2023-06-15 2:22 ` Jens Axboe
@ 2023-06-15 4:15 ` Christoph Hellwig
2023-06-15 13:55 ` Bart Van Assche
2023-06-15 14:16 ` Jens Axboe
2023-06-15 14:16 ` Bart Van Assche
1 sibling, 2 replies; 24+ messages in thread
From: Christoph Hellwig @ 2023-06-15 4:15 UTC (permalink / raw)
To: Jens Axboe
Cc: Bart Van Assche, linux-block, Christoph Hellwig, Luis Chamberlain,
Sandeep Dhavale, Juan Yescas
On Wed, Jun 14, 2023 at 08:22:31PM -0600, Jens Axboe wrote:
> I'm usually a fan of putting code in the core so we don't have to in
> drivers, that's how most of the block layer is designed. But this seems
> niche enough that perhaps it's worth considering just remapping these in
> the driver? It's peppering changes all over delicate parts of the core
> for cases that 99.9% of users don't need to, and shouldn't have to, worry about.
> I will say that I do think the patches do look better than they did in
> earlier versions, however.
>
> Maybe we've already discussed this before, but let's please have the
> discussion again. Because I'd really love to avoid this code, if at all
> possible.
I really hate having this core complexity, but I suspect trying driver
hacks would be even worse than that, especially as this goes through
the SCSI midlayer. I think the answer is simply that if Google keeps
buying broken hardware for their products from Samsung they just need
to stick to a 4k page size instead of moving to a larger one.
* Re: [PATCH v6 0/8] Support limits below the page size
2023-06-15 4:15 ` Christoph Hellwig
@ 2023-06-15 13:55 ` Bart Van Assche
2023-06-16 7:02 ` Christoph Hellwig
2023-06-15 14:16 ` Jens Axboe
1 sibling, 1 reply; 24+ messages in thread
From: Bart Van Assche @ 2023-06-15 13:55 UTC (permalink / raw)
To: Christoph Hellwig, Jens Axboe
Cc: linux-block, Luis Chamberlain, Sandeep Dhavale, Juan Yescas
On 6/14/23 21:15, Christoph Hellwig wrote:
> I really hate having this core complexity, but I suspect trying driver
> hacks would be even worse than that, especially as this goes through
> the SCSI midlayer. I think the answer is simply that if Google keeps
> buying broken hardware for their products from Samsung they just need
> to stick to a 4k page size instead of moving to a larger one.
Although I do not like it that the Exynos UFS controller does not follow
the UFS standard, this UFS controller is used much more widely than only
in devices produced by my employer. See e.g. the output of the following
grep command:
$ git grep -nH '\.compatible' */*/ufs-exynos.c
drivers/ufs/host/ufs-exynos.c:1739: { .compatible = "samsung,exynos7-ufs",
drivers/ufs/host/ufs-exynos.c:1741: { .compatible = "samsung,exynosautov9-ufs",
drivers/ufs/host/ufs-exynos.c:1743: { .compatible = "samsung,exynosautov9-ufs-vh",
drivers/ufs/host/ufs-exynos.c:1745: { .compatible = "tesla,fsd-ufs",
Thanks,
Bart.
* Re: [PATCH v6 0/8] Support limits below the page size
2023-06-15 2:22 ` Jens Axboe
2023-06-15 4:15 ` Christoph Hellwig
@ 2023-06-15 14:16 ` Bart Van Assche
1 sibling, 0 replies; 24+ messages in thread
From: Bart Van Assche @ 2023-06-15 14:16 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas
On 6/14/23 19:22, Jens Axboe wrote:
> On 6/12/23 2:33 PM, Bart Van Assche wrote:
>> We want to improve Android performance by increasing the page size
>> from 4 KiB to 16 KiB. However, some of the storage controllers we care
>> about do not support DMA segments larger than 4 KiB. Hence the need to
>> support DMA segments that are smaller than the size of one virtual
>> memory page. This patch series implements that support. Please
>> consider this patch series for the next merge window.
>
> I'm usually a fan of putting code in the core so we don't have to in
> drivers, that's how most of the block layer is designed. But this seems
> niche enough that perhaps it's worth considering just remapping these in
> the driver? It's peppering changes all over delicate parts of the core
> for cases that 99.9% of users don't need to, and shouldn't have to, worry about.
> I will say that I do think the patches do look better than they did in
> earlier versions, however.
>
> Maybe we've already discussed this before, but let's please have the
> discussion again. Because I'd really love to avoid this code, if at all
> possible.
Hi Jens,
These are my arguments in favor of having this functionality in the
block layer core instead of in the UFS driver:
* This functionality is useful for multiple block drivers. It is also
useful for block drivers with a max_segment_size limit less than 64
KiB on systems with a 64 KiB page size. E.g. the sbp2 driver and
several ATA and MMC drivers set the max_segment_size limit to a value
less than 64 KiB.
* The UFS 3.1 devices in my test setup support read bandwidths up to 2
GiB/s and more than 100K IOPS. UFSHCI 4.0 controllers support a link
bandwidth that is the double of UFSHCI 3.x controllers and also
support higher queue depths (up to 512 instead of 32). In other words,
performance matters for UFS devices. Having the SCSI core build an SG
list and making the UFS driver rework that SG list probably would
affect performance negatively.
* The MMC driver is more complicated than needed because the block layer
core does not yet support the limits of MMC devices. I think that this
patch series will make it possible to simplify the MMC driver. From
drivers/mmc/block.c:
/*
* The block layer doesn't support all sector count
* restrictions, so we need to be prepared for too big
* requests.
*/
* Care has been taken not to affect performance or maintainability
of the block layer core in a negative way.
Thanks,
Bart.
* Re: [PATCH v6 0/8] Support limits below the page size
2023-06-15 4:15 ` Christoph Hellwig
2023-06-15 13:55 ` Bart Van Assche
@ 2023-06-15 14:16 ` Jens Axboe
1 sibling, 0 replies; 24+ messages in thread
From: Jens Axboe @ 2023-06-15 14:16 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Bart Van Assche, linux-block, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas
On 6/14/23 10:15 PM, Christoph Hellwig wrote:
> On Wed, Jun 14, 2023 at 08:22:31PM -0600, Jens Axboe wrote:
>> I'm usually a fan of putting code in the core so we don't have to in
>> drivers, that's how most of the block layer is designed. But this seems
>> niche enough that perhaps it's worth considering just remapping these in
>> the driver? It's peppering changes all over delicate parts of the core
>> for cases that 99.9% of users don't need to, and shouldn't have to, worry about.
>> I will say that I do think the patches do look better than they did in
>> earlier versions, however.
>>
>> Maybe we've already discussed this before, but let's please have the
>> discussion again. Because I'd really love to avoid this code, if at all
>> possible.
>
> I really hate having this core complexity, but I suspect trying driver
> hacks would be even worse than that, especially as this goes through
> the SCSI midlayer. I think the answer is simply that if Google keeps
> buying broken hardware for their products from Samsung they just need
> to stick to a 4k page size instead of moving to a larger one.
I would tend to agree with that. Vendors buy cheaper things all the time
to cut cost, and then have to deal with the fallout of that. I see quite
a bit of that on the storage front.
--
Jens Axboe
* Re: [PATCH v6 0/8] Support limits below the page size
2023-06-15 13:55 ` Bart Van Assche
@ 2023-06-16 7:02 ` Christoph Hellwig
2023-06-16 20:26 ` Bart Van Assche
2023-08-04 23:11 ` Juan Yescas
0 siblings, 2 replies; 24+ messages in thread
From: Christoph Hellwig @ 2023-06-16 7:02 UTC (permalink / raw)
To: Bart Van Assche
Cc: Christoph Hellwig, Jens Axboe, linux-block, Luis Chamberlain,
Sandeep Dhavale, Juan Yescas
On Thu, Jun 15, 2023 at 06:55:36AM -0700, Bart Van Assche wrote:
> On 6/14/23 21:15, Christoph Hellwig wrote:
>> I really hate having this core complexity, but I suspect trying to driver
>> hacks would be even worse than that, especially as this goes through
>> the SCSI midlayer. I think the answer is simply that if Google keeps
>> buying broken hardware for their products from Samsung they just need
>> to stick to a 4k page size instead of moving to a larger one.
>
> Although I do not like it that the Exynos UFS controller does not follow
> the UFS standard, this UFS controller is used much more widely than only in
> devices produced by my employer. See e.g. the output of the following grep
> command:
But it seems like no one is insisting on using it with larger than 4k
page sizes. I think we should just prohibit using the driver for those
kernel configs and be done with it.
* Re: [PATCH v6 0/8] Support limits below the page size
2023-06-16 7:02 ` Christoph Hellwig
@ 2023-06-16 20:26 ` Bart Van Assche
2023-06-16 21:48 ` Jens Axboe
2023-08-04 23:11 ` Juan Yescas
1 sibling, 1 reply; 24+ messages in thread
From: Bart Van Assche @ 2023-06-16 20:26 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Jens Axboe, linux-block, Luis Chamberlain, Sandeep Dhavale,
Juan Yescas
On 6/16/23 00:02, Christoph Hellwig wrote:
> But it seems like no one is insisting on using it with larger than 4k
> page sizes.
The Android common kernel (ACK) team is working on bringing up 16K page
size support. This involves kernel changes and also changes in user
space code. Once 16K page size support is ready, I expect that more
users will ask for 16K page size support in Android and also that more
users will ask for small segment size support.
Thanks,
Bart.
* Re: [PATCH v6 0/8] Support limits below the page size
2023-06-16 20:26 ` Bart Van Assche
@ 2023-06-16 21:48 ` Jens Axboe
2023-06-16 22:28 ` Bart Van Assche
0 siblings, 1 reply; 24+ messages in thread
From: Jens Axboe @ 2023-06-16 21:48 UTC (permalink / raw)
To: Bart Van Assche, Christoph Hellwig
Cc: linux-block, Luis Chamberlain, Sandeep Dhavale, Juan Yescas
On 6/16/23 2:26 PM, Bart Van Assche wrote:
> On 6/16/23 00:02, Christoph Hellwig wrote:
>> But it seems like no one is insisting on using it with larger than 4k
>> page sizes.
>
> The Android common kernel (ACK) team is working on bringing up 16K
> page size support. This involves kernel changes and also changes in
> user space code. Once 16K page size support is ready, I expect that
> more users will ask for 16K page size support in Android and also that
> more users will ask for small segment size support.
Like Christoph said in a previous email, gate the 16K page sizes on
hardware that can sanely support it. If it can't, then it runs 4K
kernels. Nudge the vendors to ensure what they deliver complies with that,
I believe Google has quite some pull in terms of that...
--
Jens Axboe
* Re: [PATCH v6 0/8] Support limits below the page size
2023-06-16 21:48 ` Jens Axboe
@ 2023-06-16 22:28 ` Bart Van Assche
0 siblings, 0 replies; 24+ messages in thread
From: Bart Van Assche @ 2023-06-16 22:28 UTC (permalink / raw)
To: Jens Axboe, Christoph Hellwig
Cc: linux-block, Luis Chamberlain, Sandeep Dhavale, Juan Yescas
On 6/16/23 14:48, Jens Axboe wrote:
> Nudge the vendors to ensure what they deliver complies with that,
> I believe Google has quite some pull in terms of that...
It would be great if vendors of Android devices would ask the Google
Android team for its opinion before selecting hardware components.
However, that's not how it works. I think that it's more likely that
Android vendors will put Google under pressure to support the hardware
they have selected instead of Android vendors asking Google for its
opinion about which hardware components to select.
Additionally, bring-up of 16K page size support for Android happens with
existing Android hardware. This patch series helps 16K page size support
bring-up effort because 16K page size support is being tested on Android
devices with an Exynos UFS host controller.
Thanks,
Bart.
* Re: [PATCH v6 0/8] Support limits below the page size
2023-06-16 7:02 ` Christoph Hellwig
2023-06-16 20:26 ` Bart Van Assche
@ 2023-08-04 23:11 ` Juan Yescas
1 sibling, 0 replies; 24+ messages in thread
From: Juan Yescas @ 2023-08-04 23:11 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Bart Van Assche, Jens Axboe, linux-block, Luis Chamberlain,
Sandeep Dhavale
On Fri, Jun 16, 2023 at 12:02 AM Christoph Hellwig <hch@lst.de> wrote:
>
> On Thu, Jun 15, 2023 at 06:55:36AM -0700, Bart Van Assche wrote:
> > On 6/14/23 21:15, Christoph Hellwig wrote:
> >> I really hate having this core complexity, but I suspect trying driver
> >> hacks would be even worse than that, especially as this goes through
> >> the SCSI midlayer. I think the answer is simply that if Google keeps
> >> buying broken hardware for their products from Samsung they just need
> >> to stick to a 4k page size instead of moving to a larger one.
> >
> > Although I do not like it that the Exynos UFS controller does not follow
> > the UFS standard, this UFS controller is used much more widely than only in
> > devices produced by my employer. See e.g. the output of the following grep
> > command:
>
> But it seems like no one is insisting on using it with larger than 4k
> page sizes. I think we should just prohibit using the driver for those
> kernel configs and be done with it.
In addition to Google, Samsung, MediaTek, and other vendors have devices
that want to take advantage of 16k page size support, and they use the same
Exynos UFS host controller.
For example, these phones could potentially support 16k page sizes:
Samsung Galaxy A54 5G, Exynos 1380
Samsung Galaxy A14 5G, Exynos 1330
See https://semiconductor.samsung.com/us/processor/showcase/smartphone/
end of thread, other threads: [~2023-08-04 23:11 UTC | newest]
Thread overview: 24+ messages
2023-06-12 20:33 [PATCH v6 0/8] Support limits below the page size Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 1/8] block: Use pr_info() instead of printk(KERN_INFO ...) Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 2/8] block: Prepare for supporting sub-page limits Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 3/8] block: Support configuring limits below the page size Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 4/8] block: Make sub_page_limit_queues available in debugfs Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 5/8] block: Support submitting passthrough requests with small segments Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 6/8] block: Add support for filesystem requests and " Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 7/8] scsi_debug: Support configuring the maximum segment size Bart Van Assche
2023-06-12 20:33 ` [PATCH v6 8/8] null_blk: " Bart Van Assche
2023-06-12 22:17 ` Damien Le Moal
2023-06-13 0:44 ` Bart Van Assche
2023-06-13 6:47 ` Damien Le Moal
2023-06-13 6:52 ` Christoph Hellwig
2023-06-15 2:01 ` [PATCH v6 0/8] Support limits below the page size Bart Van Assche
2023-06-15 2:22 ` Jens Axboe
2023-06-15 4:15 ` Christoph Hellwig
2023-06-15 13:55 ` Bart Van Assche
2023-06-16 7:02 ` Christoph Hellwig
2023-06-16 20:26 ` Bart Van Assche
2023-06-16 21:48 ` Jens Axboe
2023-06-16 22:28 ` Bart Van Assche
2023-08-04 23:11 ` Juan Yescas
2023-06-15 14:16 ` Jens Axboe
2023-06-15 14:16 ` Bart Van Assche