From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe <axboe@kernel.dk>, linux-block@vger.kernel.org
Cc: Uday Shankar <ushankar@purestorage.com>,
Caleb Sander Mateos <csander@purestorage.com>,
Ming Lei <ming.lei@redhat.com>
Subject: [PATCH V2 3/5] ublk: use flexible array for ublk_queue.ios
Date: Tue, 28 Oct 2025 16:56:32 +0800 [thread overview]
Message-ID: <20251028085636.185714-4-ming.lei@redhat.com> (raw)
In-Reply-To: <20251028085636.185714-1-ming.lei@redhat.com>
Convert ublk_queue to use DECLARE_FLEX_ARRAY for the ios field and
use struct_size() for allocation, following kernel best practices.

Changes in this commit:

1. Convert ios field from "struct ublk_io ios[]" to use
   DECLARE_FLEX_ARRAY(struct ublk_io, ios) for consistency with
   modern kernel style.

2. Update ublk_init_queue() to use struct_size(ubq, ios, depth)
   instead of manual size calculation (sizeof(struct ublk_queue) +
   depth * sizeof(struct ublk_io)).

This provides better type safety and makes the code more maintainable
by using standard kernel macros for flexible array handling.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
drivers/block/ublk_drv.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 394e9b5f512f..cef9cfa94feb 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -203,7 +203,8 @@ struct ublk_queue {
bool fail_io; /* copy of dev->state == UBLK_S_DEV_FAIL_IO */
spinlock_t cancel_lock;
struct ublk_device *dev;
- struct ublk_io ios[];
+
+ DECLARE_FLEX_ARRAY(struct ublk_io, ios);
};
struct ublk_device {
@@ -2700,7 +2701,6 @@ static int ublk_get_queue_numa_node(struct ublk_device *ub, int q_id)
static int ublk_init_queue(struct ublk_device *ub, int q_id)
{
int depth = ub->dev_info.queue_depth;
- int ubq_size = sizeof(struct ublk_queue) + depth * sizeof(struct ublk_io);
gfp_t gfp_flags = GFP_KERNEL | __GFP_ZERO;
struct ublk_queue *ubq;
struct page *page;
@@ -2711,7 +2711,8 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
numa_node = ublk_get_queue_numa_node(ub, q_id);
/* Allocate queue structure on local NUMA node */
- ubq = kvzalloc_node(ubq_size, GFP_KERNEL, numa_node);
+ ubq = kvzalloc_node(struct_size(ubq, ios, depth), GFP_KERNEL,
+ numa_node);
if (!ubq)
return -ENOMEM;
--
2.47.0
2025-10-28 8:56 [PATCH V2 0/5] ublk: NUMA-aware memory allocation Ming Lei
2025-10-28 8:56 ` [PATCH V2 1/5] ublk: reorder tag_set initialization before queue allocation Ming Lei
2025-10-28 8:56 ` [PATCH V2 2/5] ublk: implement NUMA-aware memory allocation Ming Lei
2025-10-28 8:56 ` Ming Lei [this message]
2025-10-28 21:52 ` [PATCH V2 3/5] ublk: use flexible array for ublk_queue.ios Caleb Sander Mateos
2025-10-29 2:51 ` Ming Lei
2025-10-28 8:56 ` [PATCH V2 4/5] selftests: ublk: set CPU affinity before thread initialization Ming Lei
2025-10-28 8:56 ` [PATCH V2 5/5] selftests: ublk: make ublk_thread thread-local variable Ming Lei