* [LTP] [PATCH 0/3] io_uring READ(V), WRITE(V) operation tests
@ 2026-03-20 12:47 Sachin Sant
2026-03-20 12:47 ` [LTP] [PATCH 1/3] io_uring: Test IORING READ and WRITE operations Sachin Sant
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Sachin Sant @ 2026-03-20 12:47 UTC (permalink / raw)
To: ltp
This patch series adds a set of test cases to validate
io_uring READ & WRITE (io_uring03) and READV & WRITEV (io_uring04)
operations. The series also adds a common header file to
avoid code duplication.
This patch series also refactors the existing io_uring01 test
to use the common structures and functions defined in the header.
These patches have been tested successfully on the ppc64le
architecture (Fedora and SLES).
Signed-off-by: Sachin Sant <sachinp@linux.ibm.com>
---
Changes from RFC patch series:
- Addressed review comments
- Refactored io_uring01 test to use common code
- Removed git tags
- Link to RFC: https://lore.kernel.org/ltp/20260318110328.52031-1-sachinp@linux.ibm.com/T/#t
Test run:
$ ./kirk -f iouring
....
Connecting to SUT: default
Suite: iouring
--------------
io_uring01: pass (0.005s)
io_uring02: pass (0.004s)
io_uring03: pass (0.004s)
io_uring04: pass (0.004s)
Execution time: 0.054s
Disconnecting from SUT: default
.....
------------------------
TEST SUMMARY
------------------------
Suite: iouring
Runtime: 0.017s
Runs: 4
Results:
Passed: 23
Failed: 0
Broken: 0
Skipped: 0
Warnings: 0
Session stopped
$
---
Sachin Sant (3):
io_uring: Test IORING READ and WRITE operations
io_uring: Test READV and WRITEV operations
io_uring: Refactor io_uring01 to use common code
runtest/syscalls | 2 +
testcases/kernel/syscalls/io_uring/.gitignore | 2 +
.../kernel/syscalls/io_uring/io_uring01.c | 111 +++-----
.../kernel/syscalls/io_uring/io_uring03.c | 130 +++++++++
.../kernel/syscalls/io_uring/io_uring04.c | 231 +++++++++++++++
.../syscalls/io_uring/io_uring_common.h | 265 ++++++++++++++++++
6 files changed, 665 insertions(+), 76 deletions(-)
create mode 100644 testcases/kernel/syscalls/io_uring/io_uring03.c
create mode 100644 testcases/kernel/syscalls/io_uring/io_uring04.c
create mode 100644 testcases/kernel/syscalls/io_uring/io_uring_common.h
--
2.39.1
--
Mailing list info: https://lists.linux.it/listinfo/ltp
^ permalink raw reply [flat|nested] 9+ messages in thread
* [LTP] [PATCH 1/3] io_uring: Test IORING READ and WRITE operations
2026-03-20 12:47 [LTP] [PATCH 0/3] io_uring READ(V), WRITE(V) operation tests Sachin Sant
@ 2026-03-20 12:47 ` Sachin Sant
2026-03-23 12:46 ` Cyril Hrubis
2026-03-20 12:47 ` [LTP] [PATCH 2/3] io_uring: Test READV and WRITEV operations Sachin Sant
2026-03-20 12:47 ` [LTP] [PATCH 3/3] io_uring: Refactor io_uring01 to use common code Sachin Sant
2 siblings, 1 reply; 9+ messages in thread
From: Sachin Sant @ 2026-03-20 12:47 UTC (permalink / raw)
To: ltp
This test validates basic read and write operations using io_uring.
It tests:
1. IORING_OP_WRITE - Writing data to a file
2. IORING_OP_READ - Reading data from a file
3. Data integrity verification
This patch also introduces a header file for common functions.
Signed-off-by: Sachin Sant <sachinp@linux.ibm.com>
---
runtest/syscalls | 1 +
testcases/kernel/syscalls/io_uring/.gitignore | 1 +
.../kernel/syscalls/io_uring/io_uring03.c | 130 +++++++++
.../syscalls/io_uring/io_uring_common.h | 265 ++++++++++++++++++
4 files changed, 397 insertions(+)
create mode 100644 testcases/kernel/syscalls/io_uring/io_uring03.c
create mode 100644 testcases/kernel/syscalls/io_uring/io_uring_common.h
diff --git a/runtest/syscalls b/runtest/syscalls
index 2179e007c..7dc80fe29 100644
--- a/runtest/syscalls
+++ b/runtest/syscalls
@@ -1898,6 +1898,7 @@ membarrier01 membarrier01
io_uring01 io_uring01
io_uring02 io_uring02
+io_uring03 io_uring03
# Tests below may cause kernel memory leak
perf_event_open03 perf_event_open03
diff --git a/testcases/kernel/syscalls/io_uring/.gitignore b/testcases/kernel/syscalls/io_uring/.gitignore
index 749db17db..9382ae413 100644
--- a/testcases/kernel/syscalls/io_uring/.gitignore
+++ b/testcases/kernel/syscalls/io_uring/.gitignore
@@ -1,2 +1,3 @@
/io_uring01
/io_uring02
+/io_uring03
diff --git a/testcases/kernel/syscalls/io_uring/io_uring03.c b/testcases/kernel/syscalls/io_uring/io_uring03.c
new file mode 100644
index 000000000..0a73a331a
--- /dev/null
+++ b/testcases/kernel/syscalls/io_uring/io_uring03.c
@@ -0,0 +1,130 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2026 IBM
+ * Author: Sachin Sant <sachinp@linux.ibm.com>
+ */
+/*
+ * Test IORING_OP_READ and IORING_OP_WRITE operations.
+ *
+ * This test validates basic read and write operations using io_uring.
+ * It tests:
+ * 1. IORING_OP_WRITE - Writing data to a file
+ * 2. IORING_OP_READ - Reading data from a file
+ * 3. Data integrity verification
+ */
+
+#include "io_uring_common.h"
+
+#define TEST_FILE "io_uring_test_file"
+#define QUEUE_DEPTH 2
+#define BLOCK_SZ 4096
+
+static char write_buf[BLOCK_SZ];
+static char read_buf[BLOCK_SZ];
+static struct io_uring_submit s;
+static sigset_t sig;
+
+static void init_buffer(char start_char)
+{
+ size_t i;
+
+ for (i = 0; i < BLOCK_SZ; i++)
+ write_buf[i] = start_char + (i % 26);
+}
+
+static void verify_data_integrity(const char *test_name)
+{
+ size_t i;
+
+ if (memcmp(write_buf, read_buf, BLOCK_SZ) == 0) {
+ tst_res(TPASS, "%s data integrity verified", test_name);
+ } else {
+ tst_res(TFAIL, "%s data mismatch", test_name);
+ for (i = 0; i < BLOCK_SZ && i < 64; i++) {
+ if (write_buf[i] != read_buf[i]) {
+ tst_res(TINFO, "First mismatch at offset %zu: "
+ "wrote 0x%02x, read 0x%02x",
+ i, write_buf[i], read_buf[i]);
+ break;
+ }
+ }
+ }
+}
+
+static void test_write_read(void)
+{
+ int fd;
+
+ init_buffer('A');
+
+ fd = SAFE_OPEN(TEST_FILE, O_RDWR | O_CREAT | O_TRUNC, 0644);
+
+ tst_res(TINFO, "Testing IORING_OP_WRITE");
+ io_uring_do_io_op(&s, fd, IORING_OP_WRITE, write_buf, BLOCK_SZ, 0,
+ &sig, "IORING_OP_WRITE completed successfully");
+
+ SAFE_FSYNC(fd);
+
+ tst_res(TINFO, "Testing IORING_OP_READ");
+ memset(read_buf, 0, BLOCK_SZ);
+ io_uring_do_io_op(&s, fd, IORING_OP_READ, read_buf, BLOCK_SZ, 0,
+ &sig, "IORING_OP_READ completed successfully");
+
+ verify_data_integrity("Basic I/O");
+
+ SAFE_CLOSE(fd);
+}
+
+static void test_partial_io(void)
+{
+ int fd;
+ size_t half = BLOCK_SZ / 2;
+
+ tst_res(TINFO, "Testing partial I/O operations");
+
+ init_buffer('a');
+
+ fd = SAFE_OPEN(TEST_FILE, O_RDWR | O_CREAT | O_TRUNC, 0644);
+
+ io_uring_do_io_op(&s, fd, IORING_OP_WRITE, write_buf, half, 0,
+ &sig, "Partial write (first half) succeeded");
+
+ io_uring_do_io_op(&s, fd, IORING_OP_WRITE, write_buf + half, half,
+ half, &sig, "Partial write (second half) succeeded");
+
+ SAFE_FSYNC(fd);
+
+ memset(read_buf, 0, BLOCK_SZ);
+ io_uring_do_io_op(&s, fd, IORING_OP_READ, read_buf, BLOCK_SZ, 0,
+ &sig, "Full read after partial writes succeeded");
+
+ verify_data_integrity("Partial I/O");
+
+ SAFE_CLOSE(fd);
+}
+
+static void run(void)
+{
+ io_uring_setup_queue(&s, QUEUE_DEPTH);
+ test_write_read();
+ test_partial_io();
+ io_uring_cleanup_queue(&s, QUEUE_DEPTH);
+}
+
+static void setup(void)
+{
+ io_uring_setup_supported_by_kernel();
+ sigemptyset(&sig);
+ memset(&s, 0, sizeof(s));
+}
+
+static struct tst_test test = {
+ .test_all = run,
+ .setup = setup,
+ .needs_tmpdir = 1,
+ .save_restore = (const struct tst_path_val[]) {
+ {"/proc/sys/kernel/io_uring_disabled", "0",
+ TST_SR_SKIP_MISSING | TST_SR_TCONF_RO},
+ {}
+ }
+};
diff --git a/testcases/kernel/syscalls/io_uring/io_uring_common.h b/testcases/kernel/syscalls/io_uring/io_uring_common.h
new file mode 100644
index 000000000..4162b5571
--- /dev/null
+++ b/testcases/kernel/syscalls/io_uring/io_uring_common.h
@@ -0,0 +1,265 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2026 IBM
+ * Author: Sachin Sant <sachinp@linux.ibm.com>
+ *
+ * Common definitions and helper functions for io_uring tests
+ */
+
+#ifndef IO_URING_COMMON_H
+#define IO_URING_COMMON_H
+
+#include <stdlib.h>
+#include <string.h>
+#include <fcntl.h>
+#include "config.h"
+#include "tst_test.h"
+#include "lapi/io_uring.h"
+
+/* Common structures for io_uring ring management */
+struct io_sq_ring {
+ unsigned int *head;
+ unsigned int *tail;
+ unsigned int *ring_mask;
+ unsigned int *ring_entries;
+ unsigned int *flags;
+ unsigned int *array;
+};
+
+struct io_cq_ring {
+ unsigned int *head;
+ unsigned int *tail;
+ unsigned int *ring_mask;
+ unsigned int *ring_entries;
+ struct io_uring_cqe *cqes;
+};
+
+struct io_uring_submit {
+ int ring_fd;
+ struct io_sq_ring sq_ring;
+ struct io_uring_sqe *sqes;
+ struct io_cq_ring cq_ring;
+ void *sq_ptr;
+ size_t sq_ptr_size;
+ void *cq_ptr;
+ size_t cq_ptr_size;
+};
+
+/*
+ * Setup io_uring instance with specified queue depth
+ * Returns 0 on success, -1 on failure
+ */
+static inline int io_uring_setup_queue(struct io_uring_submit *s,
+ unsigned int queue_depth)
+{
+ struct io_sq_ring *sring = &s->sq_ring;
+ struct io_cq_ring *cring = &s->cq_ring;
+ struct io_uring_params p;
+
+ memset(&p, 0, sizeof(p));
+ s->ring_fd = io_uring_setup(queue_depth, &p);
+ if (s->ring_fd < 0) {
+ tst_brk(TBROK | TERRNO, "io_uring_setup() failed");
+ return -1;
+ }
+
+ s->sq_ptr_size = p.sq_off.array + p.sq_entries * sizeof(unsigned int);
+
+ /* Map submission queue ring buffer */
+ s->sq_ptr = SAFE_MMAP(0, s->sq_ptr_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED | MAP_POPULATE, s->ring_fd,
+ IORING_OFF_SQ_RING);
+
+ /* Save submission queue pointers */
+ sring->head = s->sq_ptr + p.sq_off.head;
+ sring->tail = s->sq_ptr + p.sq_off.tail;
+ sring->ring_mask = s->sq_ptr + p.sq_off.ring_mask;
+ sring->ring_entries = s->sq_ptr + p.sq_off.ring_entries;
+ sring->flags = s->sq_ptr + p.sq_off.flags;
+ sring->array = s->sq_ptr + p.sq_off.array;
+
+ /* Map submission queue entries */
+ s->sqes = SAFE_MMAP(0, p.sq_entries * sizeof(struct io_uring_sqe),
+ PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
+ s->ring_fd, IORING_OFF_SQES);
+
+ s->cq_ptr_size = p.cq_off.cqes +
+ p.cq_entries * sizeof(struct io_uring_cqe);
+
+ s->cq_ptr = SAFE_MMAP(0, s->cq_ptr_size, PROT_READ | PROT_WRITE,
+ MAP_SHARED | MAP_POPULATE, s->ring_fd,
+ IORING_OFF_CQ_RING);
+
+ /* Save completion queue pointers */
+ cring->head = s->cq_ptr + p.cq_off.head;
+ cring->tail = s->cq_ptr + p.cq_off.tail;
+ cring->ring_mask = s->cq_ptr + p.cq_off.ring_mask;
+ cring->ring_entries = s->cq_ptr + p.cq_off.ring_entries;
+ cring->cqes = s->cq_ptr + p.cq_off.cqes;
+
+ return 0;
+}
+
+/*
+ * Cleanup io_uring instance and unmap all memory regions
+ */
+static inline void io_uring_cleanup_queue(struct io_uring_submit *s,
+ unsigned int queue_depth)
+{
+ if (s->sqes)
+ SAFE_MUNMAP(s->sqes, queue_depth * sizeof(struct io_uring_sqe));
+ if (s->cq_ptr)
+ SAFE_MUNMAP(s->cq_ptr, s->cq_ptr_size);
+ if (s->sq_ptr)
+ SAFE_MUNMAP(s->sq_ptr, s->sq_ptr_size);
+ if (s->ring_fd > 0)
+ SAFE_CLOSE(s->ring_fd);
+}
+
+/*
+ * Internal helper to submit a single SQE to the submission queue
+ * Used by both vectored and non-vectored I/O operations
+ */
+static inline void io_uring_submit_sqe_internal(struct io_uring_submit *s,
+ int fd, int opcode,
+ unsigned long addr,
+ unsigned int len,
+ off_t offset)
+{
+ struct io_sq_ring *sring = &s->sq_ring;
+ unsigned int tail, index;
+ struct io_uring_sqe *sqe;
+
+ tail = *sring->tail;
+ index = tail & *sring->ring_mask;
+ sqe = &s->sqes[index];
+
+ memset(sqe, 0, sizeof(*sqe));
+ sqe->opcode = opcode;
+ sqe->fd = fd;
+ sqe->addr = addr;
+ sqe->len = len;
+ sqe->off = offset;
+ sqe->user_data = opcode;
+
+ sring->array[index] = index;
+ tail++;
+
+ *sring->tail = tail;
+}
+
+/*
+ * Submit a single SQE to the submission queue
+ * For basic read/write operations (non-vectored)
+ */
+static inline void io_uring_submit_sqe(struct io_uring_submit *s, int fd,
+ int opcode, void *buf, size_t len,
+ off_t offset)
+{
+ io_uring_submit_sqe_internal(s, fd, opcode, (unsigned long)buf,
+ len, offset);
+}
+
+/*
+ * Submit a vectored SQE to the submission queue
+ * For readv/writev operations
+ */
+static inline void io_uring_submit_sqe_vec(struct io_uring_submit *s, int fd,
+ int opcode, struct iovec *iovs,
+ int nr_vecs, off_t offset)
+{
+ io_uring_submit_sqe_internal(s, fd, opcode, (unsigned long)iovs,
+ nr_vecs, offset);
+}
+
+/*
+ * Wait for and validate a completion queue entry
+ * Returns 0 on success, -1 on failure
+ */
+static inline int io_uring_wait_cqe(struct io_uring_submit *s,
+ int expected_res, int expected_opcode,
+ sigset_t *sig)
+{
+ struct io_cq_ring *cring = &s->cq_ring;
+ struct io_uring_cqe *cqe;
+ unsigned int head;
+ int ret;
+
+ ret = io_uring_enter(s->ring_fd, 1, 1, IORING_ENTER_GETEVENTS, sig);
+ if (ret < 0) {
+ tst_res(TFAIL | TERRNO, "io_uring_enter() failed");
+ return -1;
+ }
+
+ head = *cring->head;
+ if (head == *cring->tail) {
+ tst_res(TFAIL, "No completion event received");
+ return -1;
+ }
+
+ cqe = &cring->cqes[head & *cring->ring_mask];
+
+ if (cqe->user_data != (uint64_t)expected_opcode) {
+ tst_res(TFAIL, "Unexpected user_data: got %llu, expected %d",
+ (unsigned long long)cqe->user_data, expected_opcode);
+ *cring->head = head + 1;
+ return -1;
+ }
+
+ if (cqe->res != expected_res) {
+ tst_res(TFAIL, "Operation failed: res=%d, expected=%d",
+ cqe->res, expected_res);
+ *cring->head = head + 1;
+ return -1;
+ }
+
+ *cring->head = head + 1;
+ return 0;
+}
+
+/*
+ * Initialize buffer with a repeating character pattern
+ * Useful for creating test data with predictable patterns
+ */
+static inline void io_uring_init_buffer_pattern(char *buf, size_t size,
+ char pattern)
+{
+ size_t i;
+
+ for (i = 0; i < size; i++)
+ buf[i] = pattern;
+}
+
+/*
+ * Submit and wait for a non-vectored I/O operation
+ * Combines io_uring_submit_sqe() and io_uring_wait_cqe() with result reporting
+ */
+static inline void io_uring_do_io_op(struct io_uring_submit *s, int fd,
+ int op, void *buf, size_t len,
+ off_t offset, sigset_t *sig,
+ const char *msg)
+{
+ io_uring_submit_sqe(s, fd, op, buf, len, offset);
+
+ if (io_uring_wait_cqe(s, len, op, sig) == 0)
+ tst_res(TPASS, "%s", msg);
+}
+
+/*
+ * Submit and wait for a vectored I/O operation
+ * Combines io_uring_submit_sqe_vec() and io_uring_wait_cqe() with
+ * result reporting
+ */
+static inline void io_uring_do_vec_io_op(struct io_uring_submit *s, int fd,
+ int op, struct iovec *iovs,
+ int nvecs, off_t offset,
+ int expected_size, sigset_t *sig,
+ const char *msg)
+{
+ io_uring_submit_sqe_vec(s, fd, op, iovs, nvecs, offset);
+
+ if (io_uring_wait_cqe(s, expected_size, op, sig) == 0)
+ tst_res(TPASS, "%s", msg);
+}
+
+#endif /* IO_URING_COMMON_H */
--
2.39.1
* [LTP] [PATCH 2/3] io_uring: Test READV and WRITEV operations
2026-03-20 12:47 [LTP] [PATCH 0/3] io_uring READ(V), WRITE(V) operation tests Sachin Sant
2026-03-20 12:47 ` [LTP] [PATCH 1/3] io_uring: Test IORING READ and WRITE operations Sachin Sant
@ 2026-03-20 12:47 ` Sachin Sant
2026-03-23 16:22 ` Cyril Hrubis
2026-03-20 12:47 ` [LTP] [PATCH 3/3] io_uring: Refactor io_uring01 to use common code Sachin Sant
2 siblings, 1 reply; 9+ messages in thread
From: Sachin Sant @ 2026-03-20 12:47 UTC (permalink / raw)
To: ltp
This test validates vectored read and write operations using io_uring.
It tests:
1. IORING_OP_WRITEV - Writing data using multiple buffers (scatter)
2. IORING_OP_READV - Reading data into multiple buffers (gather)
3. Data integrity verification across multiple iovecs
4. Edge cases with different iovec configurations
Signed-off-by: Sachin Sant <sachinp@linux.ibm.com>
---
runtest/syscalls | 1 +
testcases/kernel/syscalls/io_uring/.gitignore | 1 +
.../kernel/syscalls/io_uring/io_uring04.c | 231 ++++++++++++++++++
3 files changed, 233 insertions(+)
create mode 100644 testcases/kernel/syscalls/io_uring/io_uring04.c
diff --git a/runtest/syscalls b/runtest/syscalls
index 7dc80fe29..eacf946c5 100644
--- a/runtest/syscalls
+++ b/runtest/syscalls
@@ -1899,6 +1899,7 @@ membarrier01 membarrier01
io_uring01 io_uring01
io_uring02 io_uring02
io_uring03 io_uring03
+io_uring04 io_uring04
# Tests below may cause kernel memory leak
perf_event_open03 perf_event_open03
diff --git a/testcases/kernel/syscalls/io_uring/.gitignore b/testcases/kernel/syscalls/io_uring/.gitignore
index 9382ae413..36cd24662 100644
--- a/testcases/kernel/syscalls/io_uring/.gitignore
+++ b/testcases/kernel/syscalls/io_uring/.gitignore
@@ -1,3 +1,4 @@
/io_uring01
/io_uring02
/io_uring03
+/io_uring04
diff --git a/testcases/kernel/syscalls/io_uring/io_uring04.c b/testcases/kernel/syscalls/io_uring/io_uring04.c
new file mode 100644
index 000000000..27d1c509b
--- /dev/null
+++ b/testcases/kernel/syscalls/io_uring/io_uring04.c
@@ -0,0 +1,231 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (C) 2026 IBM
+ * Author: Sachin Sant <sachinp@linux.ibm.com>
+ */
+/*
+ * Test IORING_OP_READV and IORING_OP_WRITEV operations.
+ *
+ * This test validates vectored read and write operations using io_uring.
+ * It tests:
+ * 1. IORING_OP_WRITEV - Writing data using multiple buffers (scatter)
+ * 2. IORING_OP_READV - Reading data into multiple buffers (gather)
+ * 3. Data integrity verification across multiple iovecs
+ * 4. Edge cases with different iovec configurations
+ */
+
+#include "io_uring_common.h"
+
+#define TEST_FILE "io_uring_test_file"
+#define QUEUE_DEPTH 2
+#define NUM_VECS 4
+#define VEC_SIZE 1024
+
+static char write_bufs[NUM_VECS][VEC_SIZE];
+static char read_bufs[NUM_VECS][VEC_SIZE];
+static struct iovec write_iovs[NUM_VECS];
+static struct iovec read_iovs[NUM_VECS];
+static struct io_uring_submit s;
+static sigset_t sig;
+
+static void prepare_write_buffers(void)
+{
+ size_t i, j;
+
+ for (i = 0; i < NUM_VECS; i++) {
+ for (j = 0; j < VEC_SIZE; j++) {
+ /* Each vector has a different pattern */
+ write_bufs[i][j] = 'A' + i + (j % 26);
+ }
+ write_iovs[i].iov_base = write_bufs[i];
+ write_iovs[i].iov_len = VEC_SIZE;
+ }
+}
+
+static void prepare_read_buffers(void)
+{
+ size_t i;
+
+ for (i = 0; i < NUM_VECS; i++) {
+ memset(read_bufs[i], 0, VEC_SIZE);
+ read_iovs[i].iov_base = read_bufs[i];
+ read_iovs[i].iov_len = VEC_SIZE;
+ }
+}
+
+static void verify_vector_data(char write_bufs[][VEC_SIZE],
+ char read_bufs[][VEC_SIZE],
+ size_t num_vecs, const char *test_name)
+{
+ size_t i, j;
+
+ for (i = 0; i < num_vecs; i++) {
+ if (memcmp(write_bufs[i], read_bufs[i], VEC_SIZE) != 0) {
+ tst_res(TFAIL, "%s: data mismatch in vector %zu",
+ test_name, i);
+ for (j = 0; j < VEC_SIZE && j < 64; j++) {
+ if (write_bufs[i][j] != read_bufs[i][j]) {
+ tst_res(TINFO, "Vector %zu: first mismatch at "
+ "offset %zu: wrote 0x%02x, read 0x%02x",
+ i, j, write_bufs[i][j], read_bufs[i][j]);
+ break;
+ }
+ }
+ return;
+ }
+ }
+
+ tst_res(TPASS, "%s: data integrity verified across %zu vectors",
+ test_name, num_vecs);
+}
+
+static void test_writev_readv(void)
+{
+ int fd;
+ int total_size = NUM_VECS * VEC_SIZE;
+
+ tst_res(TINFO, "Testing IORING_OP_WRITEV and IORING_OP_READV");
+
+ prepare_write_buffers();
+ prepare_read_buffers();
+
+ fd = SAFE_OPEN(TEST_FILE, O_RDWR | O_CREAT | O_TRUNC, 0644);
+
+ tst_res(TINFO, "Writing %d bytes using %d vectors", total_size, NUM_VECS);
+ io_uring_do_vec_io_op(&s, fd, IORING_OP_WRITEV, write_iovs, NUM_VECS,
+ 0, total_size, &sig,
+ "IORING_OP_WRITEV completed successfully");
+
+ SAFE_FSYNC(fd);
+
+ tst_res(TINFO, "Reading %d bytes using %d vectors", total_size, NUM_VECS);
+ io_uring_do_vec_io_op(&s, fd, IORING_OP_READV, read_iovs, NUM_VECS,
+ 0, total_size, &sig,
+ "IORING_OP_READV completed successfully");
+
+ verify_vector_data(write_bufs, read_bufs, NUM_VECS, "Basic vectored I/O");
+
+ SAFE_CLOSE(fd);
+}
+
+static void test_partial_vectors(void)
+{
+ int fd;
+ struct iovec partial_write[2];
+ struct iovec partial_read[2];
+ int expected_size;
+
+ tst_res(TINFO, "Testing partial vector operations");
+
+ prepare_write_buffers();
+ prepare_read_buffers();
+
+ fd = SAFE_OPEN(TEST_FILE, O_RDWR | O_CREAT | O_TRUNC, 0644);
+
+ /* Write using only 2 vectors */
+ partial_write[0] = write_iovs[0];
+ partial_write[1] = write_iovs[1];
+ expected_size = 2 * VEC_SIZE;
+
+ io_uring_do_vec_io_op(&s, fd, IORING_OP_WRITEV, partial_write, 2, 0,
+ expected_size, &sig,
+ "Partial IORING_OP_WRITEV (2 vectors) succeeded");
+
+ SAFE_FSYNC(fd);
+
+ /* Read back using 2 vectors */
+ partial_read[0] = read_iovs[0];
+ partial_read[1] = read_iovs[1];
+
+ io_uring_do_vec_io_op(&s, fd, IORING_OP_READV, partial_read, 2, 0,
+ expected_size, &sig,
+ "Partial IORING_OP_READV (2 vectors) succeeded");
+
+ verify_vector_data(write_bufs, read_bufs, 2, "Partial vector I/O");
+
+ SAFE_CLOSE(fd);
+}
+
+static void test_varying_sizes(void)
+{
+ int fd;
+ struct iovec var_write[3];
+ struct iovec var_read[3];
+ char buf1[512], buf2[1024], buf3[256];
+ char rbuf1[512], rbuf2[1024], rbuf3[256];
+ int expected_size = 512 + 1024 + 256;
+
+ tst_res(TINFO, "Testing vectors with varying sizes");
+
+ io_uring_init_buffer_pattern(buf1, 512, 'X');
+ io_uring_init_buffer_pattern(buf2, 1024, 'Y');
+ io_uring_init_buffer_pattern(buf3, 256, 'Z');
+
+ var_write[0].iov_base = buf1;
+ var_write[0].iov_len = 512;
+ var_write[1].iov_base = buf2;
+ var_write[1].iov_len = 1024;
+ var_write[2].iov_base = buf3;
+ var_write[2].iov_len = 256;
+
+ memset(rbuf1, 0, 512);
+ memset(rbuf2, 0, 1024);
+ memset(rbuf3, 0, 256);
+
+ var_read[0].iov_base = rbuf1;
+ var_read[0].iov_len = 512;
+ var_read[1].iov_base = rbuf2;
+ var_read[1].iov_len = 1024;
+ var_read[2].iov_base = rbuf3;
+ var_read[2].iov_len = 256;
+
+ fd = SAFE_OPEN(TEST_FILE, O_RDWR | O_CREAT | O_TRUNC, 0644);
+
+ io_uring_do_vec_io_op(&s, fd, IORING_OP_WRITEV, var_write, 3, 0,
+ expected_size, &sig,
+ "IORING_OP_WRITEV with varying sizes succeeded");
+
+ SAFE_FSYNC(fd);
+
+ io_uring_do_vec_io_op(&s, fd, IORING_OP_READV, var_read, 3, 0,
+ expected_size, &sig,
+ "IORING_OP_READV with varying sizes succeeded");
+
+ /* Verify each buffer */
+ if (memcmp(buf1, rbuf1, 512) == 0 &&
+ memcmp(buf2, rbuf2, 1024) == 0 &&
+ memcmp(buf3, rbuf3, 256) == 0) {
+ tst_res(TPASS, "Varying size vector data integrity verified");
+ } else {
+ tst_res(TFAIL, "Varying size vector data mismatch");
+ }
+
+ SAFE_CLOSE(fd);
+}
+
+static void run(void)
+{
+ io_uring_setup_queue(&s, QUEUE_DEPTH);
+ test_writev_readv();
+ test_partial_vectors();
+ test_varying_sizes();
+ io_uring_cleanup_queue(&s, QUEUE_DEPTH);
+}
+
+static void setup(void)
+{
+ io_uring_setup_supported_by_kernel();
+ sigemptyset(&sig);
+ memset(&s, 0, sizeof(s));
+}
+
+static struct tst_test test = {
+ .test_all = run,
+ .setup = setup,
+ .needs_tmpdir = 1,
+ .save_restore = (const struct tst_path_val[]) {
+ {"/proc/sys/kernel/io_uring_disabled", "0",
+ TST_SR_SKIP_MISSING | TST_SR_TCONF_RO},
+ {}
+ }
+};
--
2.39.1
* [LTP] [PATCH 3/3] io_uring: Refactor io_uring01 to use common code
2026-03-20 12:47 [LTP] [PATCH 0/3] io_uring READ(V), WRITE(V) operation tests Sachin Sant
2026-03-20 12:47 ` [LTP] [PATCH 1/3] io_uring: Test IORING READ and WRITE operations Sachin Sant
2026-03-20 12:47 ` [LTP] [PATCH 2/3] io_uring: Test READV and WRITEV operations Sachin Sant
@ 2026-03-20 12:47 ` Sachin Sant
2026-03-23 16:32 ` Cyril Hrubis
2 siblings, 1 reply; 9+ messages in thread
From: Sachin Sant @ 2026-03-20 12:47 UTC (permalink / raw)
To: ltp
No functional change.
Refactor the io_uring01 test case to:
- use common definitions from io_uring_common.h
- remove duplicate structure definitions
- replace manual munmap/close calls with the
io_uring_cleanup_queue() helper
Signed-off-by: Sachin Sant <sachinp@linux.ibm.com>
---
.../kernel/syscalls/io_uring/io_uring01.c | 111 ++++++------------
1 file changed, 35 insertions(+), 76 deletions(-)
diff --git a/testcases/kernel/syscalls/io_uring/io_uring01.c b/testcases/kernel/syscalls/io_uring/io_uring01.c
index ab1ec00d6..368c1ed15 100644
--- a/testcases/kernel/syscalls/io_uring/io_uring01.c
+++ b/testcases/kernel/syscalls/io_uring/io_uring01.c
@@ -11,13 +11,7 @@
* registered in the kernel for long term operation using io_uring_register().
* This tests initiates I/O operations with the help of io_uring_enter().
*/
-#include <stdlib.h>
-#include <errno.h>
-#include <string.h>
-#include <fcntl.h>
-#include "config.h"
-#include "tst_test.h"
-#include "lapi/io_uring.h"
+#include "io_uring_common.h"
#define TEST_FILE "test_file"
@@ -32,42 +26,11 @@ static struct tcase {
{0, IORING_REGISTER_BUFFERS, IORING_OP_READ_FIXED},
};
-struct io_sq_ring {
- unsigned int *head;
- unsigned int *tail;
- unsigned int *ring_mask;
- unsigned int *ring_entries;
- unsigned int *flags;
- unsigned int *array;
-};
-
-struct io_cq_ring {
- unsigned int *head;
- unsigned int *tail;
- unsigned int *ring_mask;
- unsigned int *ring_entries;
- struct io_uring_cqe *cqes;
-};
-
-struct submitter {
- int ring_fd;
- struct io_sq_ring sq_ring;
- struct io_uring_sqe *sqes;
- struct io_cq_ring cq_ring;
-};
-
-static struct submitter sub_ring;
-static struct submitter *s = &sub_ring;
+static struct io_uring_submit s;
static sigset_t sig;
static struct iovec *iov;
-
-static void *sptr;
-static size_t sptr_size;
-static void *cptr;
-static size_t cptr_size;
-
-static int setup_io_uring_test(struct submitter *s, struct tcase *tc)
+static int setup_io_uring_test(struct io_uring_submit *s, struct tcase *tc)
{
struct io_sq_ring *sring = &s->sq_ring;
struct io_cq_ring *cring = &s->cq_ring;
@@ -83,43 +46,42 @@ static int setup_io_uring_test(struct submitter *s, struct tcase *tc)
return 1;
}
- sptr_size = p.sq_off.array + p.sq_entries * sizeof(unsigned int);
+ s->sq_ptr_size = p.sq_off.array + p.sq_entries * sizeof(unsigned int);
/* Submission queue ring buffer mapping */
- sptr = SAFE_MMAP(0, sptr_size,
- PROT_READ | PROT_WRITE,
- MAP_SHARED | MAP_POPULATE,
- s->ring_fd, IORING_OFF_SQ_RING);
+ s->sq_ptr = SAFE_MMAP(0, s->sq_ptr_size,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED | MAP_POPULATE,
+ s->ring_fd, IORING_OFF_SQ_RING);
/* Save global submission queue struct info */
- sring->head = sptr + p.sq_off.head;
- sring->tail = sptr + p.sq_off.tail;
- sring->ring_mask = sptr + p.sq_off.ring_mask;
- sring->ring_entries = sptr + p.sq_off.ring_entries;
- sring->flags = sptr + p.sq_off.flags;
- sring->array = sptr + p.sq_off.array;
+ sring->head = s->sq_ptr + p.sq_off.head;
+ sring->tail = s->sq_ptr + p.sq_off.tail;
+ sring->ring_mask = s->sq_ptr + p.sq_off.ring_mask;
+ sring->ring_entries = s->sq_ptr + p.sq_off.ring_entries;
+ sring->flags = s->sq_ptr + p.sq_off.flags;
+ sring->array = s->sq_ptr + p.sq_off.array;
/* Submission queue entries ring buffer mapping */
- s->sqes = SAFE_MMAP(0, p.sq_entries *
- sizeof(struct io_uring_sqe),
- PROT_READ | PROT_WRITE,
- MAP_SHARED | MAP_POPULATE,
- s->ring_fd, IORING_OFF_SQES);
+ s->sqes = SAFE_MMAP(0, p.sq_entries * sizeof(struct io_uring_sqe),
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED | MAP_POPULATE,
+ s->ring_fd, IORING_OFF_SQES);
- cptr_size = p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe);
+ s->cq_ptr_size = p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe);
/* Completion queue ring buffer mapping */
- cptr = SAFE_MMAP(0, cptr_size,
- PROT_READ | PROT_WRITE,
- MAP_SHARED | MAP_POPULATE,
- s->ring_fd, IORING_OFF_CQ_RING);
+ s->cq_ptr = SAFE_MMAP(0, s->cq_ptr_size,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED | MAP_POPULATE,
+ s->ring_fd, IORING_OFF_CQ_RING);
/* Save global completion queue struct info */
- cring->head = cptr + p.cq_off.head;
- cring->tail = cptr + p.cq_off.tail;
- cring->ring_mask = cptr + p.cq_off.ring_mask;
- cring->ring_entries = cptr + p.cq_off.ring_entries;
- cring->cqes = cptr + p.cq_off.cqes;
+ cring->head = s->cq_ptr + p.cq_off.head;
+ cring->tail = s->cq_ptr + p.cq_off.tail;
+ cring->ring_mask = s->cq_ptr + p.cq_off.ring_mask;
+ cring->ring_entries = s->cq_ptr + p.cq_off.ring_entries;
+ cring->cqes = s->cq_ptr + p.cq_off.cqes;
return 0;
}
@@ -139,7 +101,7 @@ static void check_buffer(char *buffer, size_t len)
tst_res(TPASS, "Buffer filled in correctly");
}
-static void drain_uring_cq(struct submitter *s, unsigned int exp_events)
+static void drain_uring_cq(struct io_uring_submit *s, unsigned int exp_events)
{
struct io_cq_ring *cring = &s->cq_ring;
unsigned int head = *cring->head;
@@ -175,7 +137,7 @@ static void drain_uring_cq(struct submitter *s, unsigned int exp_events)
events, exp_events);
}
-static int submit_to_uring_sq(struct submitter *s, struct tcase *tc)
+static int submit_to_uring_sq(struct io_uring_submit *s, struct tcase *tc)
{
unsigned int index = 0, tail = 0, next_tail = 0;
struct io_sq_ring *sring = &s->sq_ring;
@@ -229,23 +191,20 @@ static int submit_to_uring_sq(struct submitter *s, struct tcase *tc)
static void cleanup_io_uring_test(void)
{
- io_uring_register(s->ring_fd, IORING_UNREGISTER_BUFFERS,
+ io_uring_register(s.ring_fd, IORING_UNREGISTER_BUFFERS,
NULL, QUEUE_DEPTH);
- SAFE_MUNMAP(s->sqes, sizeof(struct io_uring_sqe));
- SAFE_MUNMAP(cptr, cptr_size);
- SAFE_MUNMAP(sptr, sptr_size);
- SAFE_CLOSE(s->ring_fd);
+ io_uring_cleanup_queue(&s, QUEUE_DEPTH);
}
static void run(unsigned int n)
{
struct tcase *tc = &tcases[n];
- if (setup_io_uring_test(s, tc))
+ if (setup_io_uring_test(&s, tc))
return;
- if (!submit_to_uring_sq(s, tc))
- drain_uring_cq(s, 1);
+ if (!submit_to_uring_sq(&s, tc))
+ drain_uring_cq(&s, 1);
cleanup_io_uring_test();
}
--
2.39.1
* Re: [LTP] [PATCH 1/3] io_uring: Test IORING READ and WRITE operations
2026-03-20 12:47 ` [LTP] [PATCH 1/3] io_uring: Test IORING READ and WRITE operations Sachin Sant
@ 2026-03-23 12:46 ` Cyril Hrubis
2026-03-23 15:46 ` Sachin Sant
0 siblings, 1 reply; 9+ messages in thread
From: Cyril Hrubis @ 2026-03-23 12:46 UTC (permalink / raw)
To: Sachin Sant; +Cc: ltp
Hi!
> Signed-off-by: Sachin Sant <sachinp@linux.ibm.com>
> ---
> runtest/syscalls | 1 +
> testcases/kernel/syscalls/io_uring/.gitignore | 1 +
> .../kernel/syscalls/io_uring/io_uring03.c | 130 +++++++++
> .../syscalls/io_uring/io_uring_common.h | 265 ++++++++++++++++++
> 4 files changed, 397 insertions(+)
> create mode 100644 testcases/kernel/syscalls/io_uring/io_uring03.c
> create mode 100644 testcases/kernel/syscalls/io_uring/io_uring_common.h
>
> diff --git a/runtest/syscalls b/runtest/syscalls
> index 2179e007c..7dc80fe29 100644
> --- a/runtest/syscalls
> +++ b/runtest/syscalls
> @@ -1898,6 +1898,7 @@ membarrier01 membarrier01
>
> io_uring01 io_uring01
> io_uring02 io_uring02
> +io_uring03 io_uring03
>
> # Tests below may cause kernel memory leak
> perf_event_open03 perf_event_open03
> diff --git a/testcases/kernel/syscalls/io_uring/.gitignore b/testcases/kernel/syscalls/io_uring/.gitignore
> index 749db17db..9382ae413 100644
> --- a/testcases/kernel/syscalls/io_uring/.gitignore
> +++ b/testcases/kernel/syscalls/io_uring/.gitignore
> @@ -1,2 +1,3 @@
> /io_uring01
> /io_uring02
> +/io_uring03
> diff --git a/testcases/kernel/syscalls/io_uring/io_uring03.c b/testcases/kernel/syscalls/io_uring/io_uring03.c
> new file mode 100644
> index 000000000..0a73a331a
> --- /dev/null
> +++ b/testcases/kernel/syscalls/io_uring/io_uring03.c
> @@ -0,0 +1,130 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * Copyright (C) 2026 IBM
> + * Author: Sachin Sant <sachinp@linux.ibm.com>
> + */
> +/*
> + * Test IORING_OP_READ and IORING_OP_WRITE operations.
> + *
> + * This test validates basic read and write operations using io_uring.
> + * It tests:
> + * 1. IORING_OP_WRITE - Writing data to a file
> + * 2. IORING_OP_READ - Reading data from a file
> + * 3. Data integrity verification
> + */
> +
> +#include "io_uring_common.h"
> +
> +#define TEST_FILE "io_uring_test_file"
> +#define QUEUE_DEPTH 2
> +#define BLOCK_SZ 4096
> +
> +static char write_buf[BLOCK_SZ];
> +static char read_buf[BLOCK_SZ];
Can we please allocate these as guarded buffers?
https://linux-test-project.readthedocs.io/en/latest/developers/api_c_tests.html#guarded-buffers
> +static struct io_uring_submit s;
> +static sigset_t sig;
> +
> +static void init_buffer(char start_char)
> +{
> + size_t i;
> +
> + for (i = 0; i < BLOCK_SZ; i++)
> + write_buf[i] = start_char + (i % 26);
> +}
> +
> +static void verify_data_integrity(const char *test_name)
> +{
> + size_t i;
> +
> + if (memcmp(write_buf, read_buf, BLOCK_SZ) == 0) {
> + tst_res(TPASS, "%s data integrity verified", test_name);
> + } else {
> + tst_res(TFAIL, "%s data mismatch", test_name);
> + for (i = 0; i < BLOCK_SZ && i < 64; i++) {
> + if (write_buf[i] != read_buf[i]) {
> + tst_res(TINFO, "First mismatch at offset %zu: "
> + "wrote 0x%02x, read 0x%02x",
> + i, write_buf[i], read_buf[i]);
> + break;
> + }
> + }
> + }
> +}
> +
> +static void test_write_read(void)
> +{
> + int fd;
> +
> + init_buffer('A');
> +
> + fd = SAFE_OPEN(TEST_FILE, O_RDWR | O_CREAT | O_TRUNC, 0644);
> +
> + tst_res(TINFO, "Testing IORING_OP_WRITE");
> + io_uring_do_io_op(&s, fd, IORING_OP_WRITE, write_buf, BLOCK_SZ, 0,
> + &sig, "IORING_OP_WRITE completed successfully");
> +
> + SAFE_FSYNC(fd);
> +
> + tst_res(TINFO, "Testing IORING_OP_READ");
> + memset(read_buf, 0, BLOCK_SZ);
> + io_uring_do_io_op(&s, fd, IORING_OP_READ, read_buf, BLOCK_SZ, 0,
> + &sig, "IORING_OP_READ completed successfully");
> +
> + verify_data_integrity("Basic I/O");
> +
> + SAFE_CLOSE(fd);
> +}
> +
> +static void test_partial_io(void)
> +{
> + int fd;
> + size_t half = BLOCK_SZ / 2;
> +
> + tst_res(TINFO, "Testing partial I/O operations");
> +
> + init_buffer('a');
> +
> + fd = SAFE_OPEN(TEST_FILE, O_RDWR | O_CREAT | O_TRUNC, 0644);
> +
> + io_uring_do_io_op(&s, fd, IORING_OP_WRITE, write_buf, half, 0,
> + &sig, "Partial write (first half) succeeded");
> +
> + io_uring_do_io_op(&s, fd, IORING_OP_WRITE, write_buf + half, half,
> + half, &sig, "Partial write (second half) succeeded");
> +
> + SAFE_FSYNC(fd);
> +
> + memset(read_buf, 0, BLOCK_SZ);
> + io_uring_do_io_op(&s, fd, IORING_OP_READ, read_buf, BLOCK_SZ, 0,
> + &sig, "Full read after partial writes succeeded");
> +
> + verify_data_integrity("Partial I/O");
> +
> + SAFE_CLOSE(fd);
> +}
> +
> +static void run(void)
> +{
> + io_uring_setup_queue(&s, QUEUE_DEPTH);
> + test_write_read();
> + test_partial_io();
> + io_uring_cleanup_queue(&s, QUEUE_DEPTH);
I suppose that we need to set up and clean up the queue only once, in
the test setup() and cleanup() functions (otherwise this is repeated
when the test is executed with -i 2 on the command line).
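The point can be illustrated with a toy model of how a test driver invokes the callbacks (hypothetical names, not the LTP internals): with `-i N` only run() repeats, so per-iteration queue setup and teardown is wasted work:

```c
static int setups, runs, cleanups;

/* Stand-ins for the test callbacks. */
static void setup(void)   { setups++;   }  /* queue setup belongs here   */
static void run(void)     { runs++;     }  /* repeated for each -i pass  */
static void cleanup(void) { cleanups++; }  /* queue teardown goes here   */

/* Toy model of a driver invoked as "./test -i n". */
static void drive_test(int n)
{
	int i;

	setup();
	for (i = 0; i < n; i++)
		run();
	cleanup();
}
```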
> +}
> +
> +static void setup(void)
> +{
> + io_uring_setup_supported_by_kernel();
> + sigemptyset(&sig);
> + memset(&s, 0, sizeof(s));
> +}
> +
> +static struct tst_test test = {
> + .test_all = run,
> + .setup = setup,
> + .needs_tmpdir = 1,
> + .save_restore = (const struct tst_path_val[]) {
> + {"/proc/sys/kernel/io_uring_disabled", "0",
> + TST_SR_SKIP_MISSING | TST_SR_TCONF_RO},
> + {}
> + }
> +};
> diff --git a/testcases/kernel/syscalls/io_uring/io_uring_common.h b/testcases/kernel/syscalls/io_uring/io_uring_common.h
> new file mode 100644
> index 000000000..4162b5571
> --- /dev/null
> +++ b/testcases/kernel/syscalls/io_uring/io_uring_common.h
> @@ -0,0 +1,265 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * Copyright (C) 2026 IBM
> + * Author: Sachin Sant <sachinp@linux.ibm.com>
> + *
> + * Common definitions and helper functions for io_uring tests
> + */
> +
> +#ifndef IO_URING_COMMON_H
> +#define IO_URING_COMMON_H
> +
> +#include <stdlib.h>
> +#include <string.h>
> +#include <fcntl.h>
> +#include "config.h"
> +#include "tst_test.h"
> +#include "lapi/io_uring.h"
> +
> +/* Common structures for io_uring ring management */
> +struct io_sq_ring {
> + unsigned int *head;
> + unsigned int *tail;
> + unsigned int *ring_mask;
> + unsigned int *ring_entries;
> + unsigned int *flags;
> + unsigned int *array;
> +};
> +
> +struct io_cq_ring {
> + unsigned int *head;
> + unsigned int *tail;
> + unsigned int *ring_mask;
> + unsigned int *ring_entries;
> + struct io_uring_cqe *cqes;
> +};
> +
> +struct io_uring_submit {
> + int ring_fd;
> + struct io_sq_ring sq_ring;
> + struct io_uring_sqe *sqes;
> + struct io_cq_ring cq_ring;
> + void *sq_ptr;
> + size_t sq_ptr_size;
> + void *cq_ptr;
> + size_t cq_ptr_size;
> +};
> +
> +/*
> + * Setup io_uring instance with specified queue depth
> + * Returns 0 on success, -1 on failure
> + */
> +static inline int io_uring_setup_queue(struct io_uring_submit *s,
> + unsigned int queue_depth)
> +{
> + struct io_sq_ring *sring = &s->sq_ring;
> + struct io_cq_ring *cring = &s->cq_ring;
> + struct io_uring_params p;
> +
> + memset(&p, 0, sizeof(p));
> + s->ring_fd = io_uring_setup(queue_depth, &p);
> + if (s->ring_fd < 0) {
> + tst_brk(TBROK | TERRNO, "io_uring_setup() failed");
> + return -1;
> + }
> +
> + s->sq_ptr_size = p.sq_off.array + p.sq_entries * sizeof(unsigned int);
> +
> + /* Map submission queue ring buffer */
> + s->sq_ptr = SAFE_MMAP(0, s->sq_ptr_size, PROT_READ | PROT_WRITE,
> + MAP_SHARED | MAP_POPULATE, s->ring_fd,
> + IORING_OFF_SQ_RING);
> +
> + /* Save submission queue pointers */
> + sring->head = s->sq_ptr + p.sq_off.head;
> + sring->tail = s->sq_ptr + p.sq_off.tail;
> + sring->ring_mask = s->sq_ptr + p.sq_off.ring_mask;
> + sring->ring_entries = s->sq_ptr + p.sq_off.ring_entries;
> + sring->flags = s->sq_ptr + p.sq_off.flags;
> + sring->array = s->sq_ptr + p.sq_off.array;
> +
> + /* Map submission queue entries */
> + s->sqes = SAFE_MMAP(0, p.sq_entries * sizeof(struct io_uring_sqe),
> + PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE,
> + s->ring_fd, IORING_OFF_SQES);
> +
> + s->cq_ptr_size = p.cq_off.cqes +
> + p.cq_entries * sizeof(struct io_uring_cqe);
> +
> + s->cq_ptr = SAFE_MMAP(0, s->cq_ptr_size, PROT_READ | PROT_WRITE,
> + MAP_SHARED | MAP_POPULATE, s->ring_fd,
> + IORING_OFF_CQ_RING);
> +
> + /* Save completion queue pointers */
> + cring->head = s->cq_ptr + p.cq_off.head;
> + cring->tail = s->cq_ptr + p.cq_off.tail;
> + cring->ring_mask = s->cq_ptr + p.cq_off.ring_mask;
> + cring->ring_entries = s->cq_ptr + p.cq_off.ring_entries;
> + cring->cqes = s->cq_ptr + p.cq_off.cqes;
> +
> + return 0;
> +}
> +
> +/*
> + * Cleanup io_uring instance and unmap all memory regions
> + */
> +static inline void io_uring_cleanup_queue(struct io_uring_submit *s,
> + unsigned int queue_depth)
> +{
> + if (s->sqes)
> + SAFE_MUNMAP(s->sqes, queue_depth * sizeof(struct io_uring_sqe));
> + if (s->cq_ptr)
> + SAFE_MUNMAP(s->cq_ptr, s->cq_ptr_size);
> + if (s->sq_ptr)
> + SAFE_MUNMAP(s->sq_ptr, s->sq_ptr_size);
> + if (s->ring_fd > 0)
> + SAFE_CLOSE(s->ring_fd);
> +}
> +
> +/*
> + * Internal helper to submit a single SQE to the submission queue
> + * Used by both vectored and non-vectored I/O operations
> + */
> +static inline void io_uring_submit_sqe_internal(struct io_uring_submit *s,
> + int fd, int opcode,
> + unsigned long addr,
> + unsigned int len,
> + off_t offset)
> +{
> + struct io_sq_ring *sring = &s->sq_ring;
> + unsigned int tail, index;
> + struct io_uring_sqe *sqe;
> +
> + tail = *sring->tail;
> + index = tail & *sring->ring_mask;
> + sqe = &s->sqes[index];
> +
> + memset(sqe, 0, sizeof(*sqe));
> + sqe->opcode = opcode;
> + sqe->fd = fd;
> + sqe->addr = addr;
> + sqe->len = len;
> + sqe->off = offset;
> + sqe->user_data = opcode;
> +
> + sring->array[index] = index;
> + tail++;
> +
> + *sring->tail = tail;
> +}
> +
> +/*
> + * Submit a single SQE to the submission queue
> + * For basic read/write operations (non-vectored)
> + */
> +static inline void io_uring_submit_sqe(struct io_uring_submit *s, int fd,
> + int opcode, void *buf, size_t len,
> + off_t offset)
> +{
> + io_uring_submit_sqe_internal(s, fd, opcode, (unsigned long)buf,
> + len, offset);
> +}
> +
> +/*
> + * Submit a vectored SQE to the submission queue
> + * For readv/writev operations
> + */
> +static inline void io_uring_submit_sqe_vec(struct io_uring_submit *s, int fd,
> + int opcode, struct iovec *iovs,
> + int nr_vecs, off_t offset)
> +{
> + io_uring_submit_sqe_internal(s, fd, opcode, (unsigned long)iovs,
> + nr_vecs, offset);
> +}
> +
> +/*
> + * Wait for and validate a completion queue entry
> + * Returns 0 on success, -1 on failure
> + */
> +static inline int io_uring_wait_cqe(struct io_uring_submit *s,
> + int expected_res, int expected_opcode,
> + sigset_t *sig)
> +{
> + struct io_cq_ring *cring = &s->cq_ring;
> + struct io_uring_cqe *cqe;
> + unsigned int head;
> + int ret;
> +
> + ret = io_uring_enter(s->ring_fd, 1, 1, IORING_ENTER_GETEVENTS, sig);
> + if (ret < 0) {
> + tst_res(TFAIL | TERRNO, "io_uring_enter() failed");
> + return -1;
> + }
> +
> + head = *cring->head;
> + if (head == *cring->tail) {
> + tst_res(TFAIL, "No completion event received");
> + return -1;
> + }
> +
> + cqe = &cring->cqes[head & *cring->ring_mask];
> +
> + if (cqe->user_data != (uint64_t)expected_opcode) {
> + tst_res(TFAIL, "Unexpected user_data: got %llu, expected %d",
> + (unsigned long long)cqe->user_data, expected_opcode);
> + *cring->head = head + 1;
> + return -1;
> + }
> +
> + if (cqe->res != expected_res) {
> + tst_res(TFAIL, "Operation failed: res=%d, expected=%d",
> + cqe->res, expected_res);
> + *cring->head = head + 1;
> + return -1;
> + }
> +
> + *cring->head = head + 1;
> + return 0;
> +}
> +
> +/*
> + * Initialize buffer with a repeating character pattern
> + * Useful for creating test data with predictable patterns
> + */
> +static inline void io_uring_init_buffer_pattern(char *buf, size_t size,
> + char pattern)
> +{
> + size_t i;
> +
> + for (i = 0; i < size; i++)
> + buf[i] = pattern;
> +}
> +
> +/*
> + * Submit and wait for a non-vectored I/O operation
> + * Combines io_uring_submit_sqe() and io_uring_wait_cqe() with result reporting
> + */
> +static inline void io_uring_do_io_op(struct io_uring_submit *s, int fd,
> + int op, void *buf, size_t len,
> + off_t offset, sigset_t *sig,
> + const char *msg)
> +{
> + io_uring_submit_sqe(s, fd, op, buf, len, offset);
> +
> + if (io_uring_wait_cqe(s, len, op, sig) == 0)
> + tst_res(TPASS, "%s", msg);
Rather than printing a description passed from the caller I would print
the parameters passed to the function, something like:

	tst_res(TPASS, "OP=%2x fd=%i buf=%p len=%zu offset=%jd",
		op, fd, buf, len, (intmax_t)offset);
And we can add a function to map the OP to the enum name if we want to
have fancy messages:

	static const char *ioring_op_name(int op)
	{
		switch (op) {
		case IORING_OP_READ:
			return "IORING_OP_READ";
		...
		default:
			return "UNKNOWN";
		}
	}

Then we can print the OP as:

	tst_res(TPASS, "OP=%s (%2x) ...", ioring_op_name(op), op, ...);
With this approach we will avoid copy&paste mistakes; it is way too easy
for manually written messages and parameters to get out of sync.
Also we should either propagate the failure to the test by returning
non-zero from this function when waiting for completion fails (so that
the test can abort), or call tst_brk(TBROK, ...) in
io_uring_wait_cqe(), which will abort the test automatically when
something fails.
> +}
> +
> +/*
> + * Submit and wait for a vectored I/O operation
> + * Combines io_uring_submit_sqe_vec() and io_uring_wait_cqe() with
> + * result reporting
> + */
> +static inline void io_uring_do_vec_io_op(struct io_uring_submit *s, int fd,
> + int op, struct iovec *iovs,
> + int nvecs, off_t offset,
> + int expected_size, sigset_t *sig,
> + const char *msg)
> +{
> + io_uring_submit_sqe_vec(s, fd, op, iovs, nvecs, offset);
> +
> + if (io_uring_wait_cqe(s, expected_size, op, sig) == 0)
> + tst_res(TPASS, "%s", msg);
And here as well, I would rather see the message generated from the
parameters and we should handle the failure somehow too.
> +}
> +
> +#endif /* IO_URING_COMMON_H */
> --
> 2.39.1
>
--
Cyril Hrubis
chrubis@suse.cz
--
Mailing list info: https://lists.linux.it/listinfo/ltp
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [LTP] [PATCH 1/3] io_uring: Test IORING READ and WRITE operations
2026-03-23 12:46 ` Cyril Hrubis
@ 2026-03-23 15:46 ` Sachin Sant
0 siblings, 0 replies; 9+ messages in thread
From: Sachin Sant @ 2026-03-23 15:46 UTC (permalink / raw)
To: Cyril Hrubis; +Cc: ltp
>> +
>> +static char write_buf[BLOCK_SZ];
>> +static char read_buf[BLOCK_SZ];
> Can we please allocate these as guarded buffers?
>
> https://linux-test-project.readthedocs.io/en/latest/developers/api_c_tests.html#guarded-buffers
Sure, will modify accordingly.
>> +
>> +static void run(void)
>> +{
>> + io_uring_setup_queue(&s, QUEUE_DEPTH);
>> + test_write_read();
>> + test_partial_io();
>> + io_uring_cleanup_queue(&s, QUEUE_DEPTH);
> I suppose that we need to set up and clean up the queue only once, in
> the test setup() and cleanup() functions (otherwise this is repeated
> when the test is executed with -i 2 on the command line).
Good point. Will change it accordingly.
> +static inline void io_uring_do_io_op(struct io_uring_submit *s, int fd,
> + int op, void *buf, size_t len,
> + off_t offset, sigset_t *sig,
> + const char *msg)
> +{
> + io_uring_submit_sqe(s, fd, op, buf, len, offset);
> +
> + if (io_uring_wait_cqe(s, len, op, sig) == 0)
> + tst_res(TPASS, "%s", msg);
>
> Rather than printing a description passed from the caller I would print
> the parameters passed to the function, something like:
>
> 	tst_res(TPASS, "OP=%2x fd=%i buf=%p len=%zu offset=%jd",
> 		op, fd, buf, len, (intmax_t)offset);
>
> And we can add a function to map the OP to the enum name if we want to
> have fancy messages:
>
> 	static const char *ioring_op_name(int op)
> 	{
> 		switch (op) {
> 		case IORING_OP_READ:
> 			return "IORING_OP_READ";
> 		...
> 		default:
> 			return "UNKNOWN";
> 		}
> 	}
Okay. Will include the changes in next version.
> Then we can print the OP as:
>
> tst_res(TPASS, "OP=%s (%2x) ...", ioring_op_name(op), op, ...);
>
> With this approach we will avoid copy&paste mistakes, it's way too easy
> for messages and parameters written manually to get out of sync.
>
> Also we should either propagate the failure to the test by returning
> non-zero from this function when waiting for completion fails (so that
> the test can abort), or call tst_brk(TBROK, ...) in
> io_uring_wait_cqe(), which will abort the test automatically when
> something fails.
Yes, will make the required changes to address this.
Thanks again for the review comments.
--
Thanks
- Sachin
* Re: [LTP] [PATCH 2/3] io_uring: Test READV and WRITEV operations
2026-03-20 12:47 ` [LTP] [PATCH 2/3] io_uring: Test READV and WRITEV operations Sachin Sant
@ 2026-03-23 16:22 ` Cyril Hrubis
2026-03-24 5:15 ` Sachin Sant
0 siblings, 1 reply; 9+ messages in thread
From: Cyril Hrubis @ 2026-03-23 16:22 UTC (permalink / raw)
To: Sachin Sant; +Cc: ltp
Hi!
> +#define TEST_FILE "io_uring_test_file"
> +#define QUEUE_DEPTH 2
> +#define NUM_VECS 4
> +#define VEC_SIZE 1024
> +
> +static char write_bufs[NUM_VECS][VEC_SIZE];
> +static char read_bufs[NUM_VECS][VEC_SIZE];
> +static struct iovec write_iovs[NUM_VECS];
> +static struct iovec read_iovs[NUM_VECS];
The guarded buffers can allocate the iovec for you as well. Have a look
at the readv01.c test to see how to do that in the tst_test structure.
> +static struct io_uring_submit s;
> +static sigset_t sig;
> +
> +static void prepare_write_buffers(void)
> +{
> + size_t i, j;
> +
> + for (i = 0; i < NUM_VECS; i++) {
> + for (j = 0; j < VEC_SIZE; j++) {
> + /* Each vector has a different pattern */
> + write_bufs[i][j] = 'A' + i + (j % 26);
> + }
> + write_iovs[i].iov_base = write_bufs[i];
> + write_iovs[i].iov_len = VEC_SIZE;
> + }
> +}
> +
> +static void prepare_read_buffers(void)
> +{
> + size_t i;
> +
> + for (i = 0; i < NUM_VECS; i++) {
> + memset(read_bufs[i], 0, VEC_SIZE);
> + read_iovs[i].iov_base = read_bufs[i];
> + read_iovs[i].iov_len = VEC_SIZE;
> + }
> +}
> +
> +static void verify_vector_data(char write_bufs[][VEC_SIZE],
> + char read_bufs[][VEC_SIZE],
> + size_t num_vecs, const char *test_name)
> +{
> + size_t i, j;
> +
> + for (i = 0; i < num_vecs; i++) {
> + if (memcmp(write_bufs[i], read_bufs[i], VEC_SIZE) != 0) {
> + tst_res(TFAIL, "%s: data mismatch in vector %zu",
> + test_name, i);
> + for (j = 0; j < VEC_SIZE && j < 64; j++) {
> + if (write_bufs[i][j] != read_bufs[i][j]) {
> + tst_res(TINFO, "Vector %zu: first mismatch at "
> + "offset %zu: wrote 0x%02x, read 0x%02x",
> + i, j, write_bufs[i][j], read_bufs[i][j]);
> + break;
> + }
> + }
> + return;
> + }
> + }
> +
> + tst_res(TPASS, "%s: data integrity verified across %zu vectors",
> + test_name, num_vecs);
> +}
> +
> +static void test_writev_readv(void)
> +{
> + int fd;
> + int total_size = NUM_VECS * VEC_SIZE;
> +
> + tst_res(TINFO, "Testing IORING_OP_WRITEV and IORING_OP_READV");
> +
> + prepare_write_buffers();
> + prepare_read_buffers();
> +
> + fd = SAFE_OPEN(TEST_FILE, O_RDWR | O_CREAT | O_TRUNC, 0644);
> +
> + tst_res(TINFO, "Writing %d bytes using %d vectors", total_size, NUM_VECS);
> + io_uring_do_vec_io_op(&s, fd, IORING_OP_WRITEV, write_iovs, NUM_VECS,
> + 0, total_size, &sig,
> + "IORING_OP_WRITEV completed successfully");
> +
> + SAFE_FSYNC(fd);
> +
> + tst_res(TINFO, "Reading %d bytes using %d vectors", total_size, NUM_VECS);
> + io_uring_do_vec_io_op(&s, fd, IORING_OP_READV, read_iovs, NUM_VECS,
> + 0, total_size, &sig,
> + "IORING_OP_READV completed successfully");
> +
> + verify_vector_data(write_bufs, read_bufs, NUM_VECS, "Basic vectored I/O");
> +
> + SAFE_CLOSE(fd);
> +}
> +
> +static void test_partial_vectors(void)
> +{
> + int fd;
> + struct iovec partial_write[2];
> + struct iovec partial_read[2];
> + int expected_size;
> +
> + tst_res(TINFO, "Testing partial vector operations");
> +
> + prepare_write_buffers();
The write buffers can be initialized once in the test setup.
> + prepare_read_buffers();
> +
> + fd = SAFE_OPEN(TEST_FILE, O_RDWR | O_CREAT | O_TRUNC, 0644);
> +
> + /* Write using only 2 vectors */
> + partial_write[0] = write_iovs[0];
> + partial_write[1] = write_iovs[1];
> + expected_size = 2 * VEC_SIZE;
We do not need to copy the pointers here, we can just pass the
write_iovs to the function and the kernel will do the write as long as
we pass iovcnt = 2.
> + io_uring_do_vec_io_op(&s, fd, IORING_OP_WRITEV, partial_write, 2, 0,
> + expected_size, &sig,
> + "Partial IORING_OP_WRITEV (2 vectors) succeeded");
And I do not think this is that useful, since we do not write the second
half at an offset as we do in the first test. It would make much more
sense if we wrote the second half at an offset here and checked that the
file was pieced together correctly, as we do in the previous test.
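The suggested shape of the check — write the two halves at their respective offsets and verify the file was pieced together — can be sketched with plain pwrite()/pread() as stand-ins for the io_uring submissions, which take the same offset argument:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * Write two halves at their offsets (second half first, to show
 * that order does not matter) and read the whole file back.
 * Returns 0 when the file contents match the expected bytes.
 */
static int piecewise_write_check(void)
{
	FILE *f = tmpfile();
	int fd;
	char buf[10] = {0};

	if (!f)
		return -1;
	fd = fileno(f);

	if (pwrite(fd, "world", 5, 5) != 5)
		return -1;
	if (pwrite(fd, "hello", 5, 0) != 5)
		return -1;
	if (pread(fd, buf, 10, 0) != 10)
		return -1;

	fclose(f);
	return memcmp(buf, "helloworld", 10) == 0 ? 0 : -1;
}
```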
> + SAFE_FSYNC(fd);
> +
> + /* Read back using 2 vectors */
> + partial_read[0] = read_iovs[0];
> + partial_read[1] = read_iovs[1];
> +
> + io_uring_do_vec_io_op(&s, fd, IORING_OP_READV, partial_read, 2, 0,
> + expected_size, &sig,
> + "Partial IORING_OP_READV (2 vectors) succeeded");
> +
> + verify_vector_data(write_bufs, read_bufs, 2, "Partial vector I/O");
> +
> + SAFE_CLOSE(fd);
> +
> +}
> +
> +static void test_varying_sizes(void)
> +{
> + int fd;
> + struct iovec var_write[3];
> + struct iovec var_read[3];
> + char buf1[512], buf2[1024], buf3[256];
> + char rbuf1[512], rbuf2[1024], rbuf3[256];
> + int expected_size = 512 + 1024 + 256;
> +
> + tst_res(TINFO, "Testing vectors with varying sizes");
> +
> + io_uring_init_buffer_pattern(buf1, 512, 'X');
> + io_uring_init_buffer_pattern(buf2, 1024, 'Y');
> + io_uring_init_buffer_pattern(buf3, 256, 'Z');
> +
> + var_write[0].iov_base = buf1;
> + var_write[0].iov_len = 512;
> + var_write[1].iov_base = buf2;
> + var_write[1].iov_len = 1024;
> + var_write[2].iov_base = buf3;
> + var_write[2].iov_len = 256;
First of all, can we please allocate these with guarded buffers.
> + memset(rbuf1, 0, 512);
> + memset(rbuf2, 0, 1024);
> + memset(rbuf3, 0, 256);
We could use a generic clearing function if we used the len argument
from the iovec instead of hardcoded buffer sizes. We just need to pass
the iovec length to prepare_read_buffers(), loop over the array, and
pass len to the memset. With that we can clear any iovec.
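A generic version driven by iov_len, as suggested above, might look like this (sketch):

```c
#include <string.h>
#include <sys/uio.h>

/* Zero every buffer described by the iovec array, whatever its size. */
static void clear_iovecs(struct iovec *iovs, int nr_vecs)
{
	int i;

	for (i = 0; i < nr_vecs; i++)
		memset(iovs[i].iov_base, 0, iovs[i].iov_len);
}
```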
> + var_read[0].iov_base = rbuf1;
> + var_read[0].iov_len = 512;
> + var_read[1].iov_base = rbuf2;
> + var_read[1].iov_len = 1024;
> + var_read[2].iov_base = rbuf3;
> + var_read[2].iov_len = 256;
This as well.
> + fd = SAFE_OPEN(TEST_FILE, O_RDWR | O_CREAT | O_TRUNC, 0644);
> +
> + io_uring_do_vec_io_op(&s, fd, IORING_OP_WRITEV, var_write, 3, 0,
> + expected_size, &sig,
> + "IORING_OP_WRITEV with varying sizes succeeded");
> +
> + SAFE_FSYNC(fd);
> +
> + io_uring_do_vec_io_op(&s, fd, IORING_OP_READV, var_read, 3, 0,
> + expected_size, &sig,
> + "IORING_OP_READV with varying sizes succeeded");
> +
> + /* Verify each buffer */
Obvious comment.
> + if (memcmp(buf1, rbuf1, 512) == 0 &&
> + memcmp(buf2, rbuf2, 1024) == 0 &&
> + memcmp(buf3, rbuf3, 256) == 0) {
> + tst_res(TPASS, "Varying size vector data integrity verified");
> + } else {
> + tst_res(TFAIL, "Varying size vector data mismatch");
> + }
This is very ugly code. Moreover, you could easily use the common
function to check the buffers if you hadn't hardcoded the sizes there
and instead used the length from the iovec.
> + SAFE_CLOSE(fd);
> +}
> +
> +static void run(void)
> +{
> + io_uring_setup_queue(&s, QUEUE_DEPTH);
> + test_writev_readv();
> + test_partial_vectors();
> + test_varying_sizes();
> + io_uring_cleanup_queue(&s, QUEUE_DEPTH);
Here as well setup and cleanup of the queue should be done once in the
test setup/cleanup so that we do not do that again and again with the -i
parameter.
> +}
And we are missing some interesting testcases. We can, for instance,
have a buffer of size 0 in the middle of the iovec array and things
should still work fine. We should add that to the varying-size test.
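That a zero-length entry in the middle of the vector is harmless can be demonstrated with plain writev(), which takes the same struct iovec array as the io_uring vectored operations:

```c
#include <unistd.h>
#include <sys/uio.h>

/*
 * Write three vectors, the middle one empty; the kernel skips it
 * and the returned byte count covers only the non-empty entries.
 */
static ssize_t write_with_empty_vec(int fd)
{
	struct iovec iov[3] = {
		{ "abc", 3 },
		{ NULL,  0 },	/* zero-sized buffer in the middle */
		{ "def", 3 },
	};

	return writev(fd, iov, 3);
}
```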
> +static void setup(void)
> +{
> + io_uring_setup_supported_by_kernel();
> + sigemptyset(&sig);
> + memset(&s, 0, sizeof(s));
> +}
> +
> +static struct tst_test test = {
> + .test_all = run,
> + .setup = setup,
> + .needs_tmpdir = 1,
> + .save_restore = (const struct tst_path_val[]) {
> + {"/proc/sys/kernel/io_uring_disabled", "0",
> + TST_SR_SKIP_MISSING | TST_SR_TCONF_RO},
> + {}
> + }
> +};
> --
> 2.39.1
>
--
Cyril Hrubis
chrubis@suse.cz
* Re: [LTP] [PATCH 3/3] io_uring: Refactor io_uring01 to use common code
2026-03-20 12:47 ` [LTP] [PATCH 3/3] io_uring: Refactor io_uring01 to use common code Sachin Sant
@ 2026-03-23 16:32 ` Cyril Hrubis
0 siblings, 0 replies; 9+ messages in thread
From: Cyril Hrubis @ 2026-03-23 16:32 UTC (permalink / raw)
To: Sachin Sant; +Cc: ltp
Hi!
> -static int setup_io_uring_test(struct submitter *s, struct tcase *tc)
> +static int setup_io_uring_test(struct io_uring_submit *s, struct tcase *tc)
> {
> struct io_sq_ring *sring = &s->sq_ring;
> struct io_cq_ring *cring = &s->cq_ring;
> @@ -83,43 +46,42 @@ static int setup_io_uring_test(struct submitter *s, struct tcase *tc)
> return 1;
> }
>
> - sptr_size = p.sq_off.array + p.sq_entries * sizeof(unsigned int);
> + s->sq_ptr_size = p.sq_off.array + p.sq_entries * sizeof(unsigned int);
>
> /* Submission queue ring buffer mapping */
> - sptr = SAFE_MMAP(0, sptr_size,
> - PROT_READ | PROT_WRITE,
> - MAP_SHARED | MAP_POPULATE,
> - s->ring_fd, IORING_OFF_SQ_RING);
> + s->sq_ptr = SAFE_MMAP(0, s->sq_ptr_size,
> + PROT_READ | PROT_WRITE,
> + MAP_SHARED | MAP_POPULATE,
> + s->ring_fd, IORING_OFF_SQ_RING);
>
> /* Save global submission queue struct info */
> - sring->head = sptr + p.sq_off.head;
> - sring->tail = sptr + p.sq_off.tail;
> - sring->ring_mask = sptr + p.sq_off.ring_mask;
> - sring->ring_entries = sptr + p.sq_off.ring_entries;
> - sring->flags = sptr + p.sq_off.flags;
> - sring->array = sptr + p.sq_off.array;
> + sring->head = s->sq_ptr + p.sq_off.head;
> + sring->tail = s->sq_ptr + p.sq_off.tail;
> + sring->ring_mask = s->sq_ptr + p.sq_off.ring_mask;
> + sring->ring_entries = s->sq_ptr + p.sq_off.ring_entries;
> + sring->flags = s->sq_ptr + p.sq_off.flags;
> + sring->array = s->sq_ptr + p.sq_off.array;
>
> /* Submission queue entries ring buffer mapping */
> - s->sqes = SAFE_MMAP(0, p.sq_entries *
> - sizeof(struct io_uring_sqe),
> - PROT_READ | PROT_WRITE,
> - MAP_SHARED | MAP_POPULATE,
> - s->ring_fd, IORING_OFF_SQES);
> + s->sqes = SAFE_MMAP(0, p.sq_entries * sizeof(struct io_uring_sqe),
> + PROT_READ | PROT_WRITE,
> + MAP_SHARED | MAP_POPULATE,
> + s->ring_fd, IORING_OFF_SQES);
>
> - cptr_size = p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe);
> + s->cq_ptr_size = p.cq_off.cqes + p.cq_entries * sizeof(struct io_uring_cqe);
>
> /* Completion queue ring buffer mapping */
> - cptr = SAFE_MMAP(0, cptr_size,
> - PROT_READ | PROT_WRITE,
> - MAP_SHARED | MAP_POPULATE,
> - s->ring_fd, IORING_OFF_CQ_RING);
> + s->cq_ptr = SAFE_MMAP(0, s->cq_ptr_size,
> + PROT_READ | PROT_WRITE,
> + MAP_SHARED | MAP_POPULATE,
> + s->ring_fd, IORING_OFF_CQ_RING);
>
> /* Save global completion queue struct info */
> - cring->head = cptr + p.cq_off.head;
> - cring->tail = cptr + p.cq_off.tail;
> - cring->ring_mask = cptr + p.cq_off.ring_mask;
> - cring->ring_entries = cptr + p.cq_off.ring_entries;
> - cring->cqes = cptr + p.cq_off.cqes;
> + cring->head = s->cq_ptr + p.cq_off.head;
> + cring->tail = s->cq_ptr + p.cq_off.tail;
> + cring->ring_mask = s->cq_ptr + p.cq_off.ring_mask;
> + cring->ring_entries = s->cq_ptr + p.cq_off.ring_entries;
> + cring->cqes = s->cq_ptr + p.cq_off.cqes;
>
> return 0;
> }
Isn't this nearly identical to the io_uring setup in the common header?
I was hoping to unify that part as well.
> @@ -139,7 +101,7 @@ static void check_buffer(char *buffer, size_t len)
> tst_res(TPASS, "Buffer filled in correctly");
> }
>
> -static void drain_uring_cq(struct submitter *s, unsigned int exp_events)
> +static void drain_uring_cq(struct io_uring_submit *s, unsigned int exp_events)
> {
> struct io_cq_ring *cring = &s->cq_ring;
> unsigned int head = *cring->head;
> @@ -175,7 +137,7 @@ static void drain_uring_cq(struct submitter *s, unsigned int exp_events)
> events, exp_events);
> }
>
> -static int submit_to_uring_sq(struct submitter *s, struct tcase *tc)
> +static int submit_to_uring_sq(struct io_uring_submit *s, struct tcase *tc)
> {
> unsigned int index = 0, tail = 0, next_tail = 0;
> struct io_sq_ring *sring = &s->sq_ring;
> @@ -229,23 +191,20 @@ static int submit_to_uring_sq(struct submitter *s, struct tcase *tc)
>
> static void cleanup_io_uring_test(void)
> {
> - io_uring_register(s->ring_fd, IORING_UNREGISTER_BUFFERS,
> + io_uring_register(s.ring_fd, IORING_UNREGISTER_BUFFERS,
> NULL, QUEUE_DEPTH);
> - SAFE_MUNMAP(s->sqes, sizeof(struct io_uring_sqe));
> - SAFE_MUNMAP(cptr, cptr_size);
> - SAFE_MUNMAP(sptr, sptr_size);
> - SAFE_CLOSE(s->ring_fd);
> + io_uring_cleanup_queue(&s, QUEUE_DEPTH);
> }
>
> static void run(unsigned int n)
> {
> struct tcase *tc = &tcases[n];
>
> - if (setup_io_uring_test(s, tc))
> + if (setup_io_uring_test(&s, tc))
> return;
>
> - if (!submit_to_uring_sq(s, tc))
> - drain_uring_cq(s, 1);
> + if (!submit_to_uring_sq(&s, tc))
> + drain_uring_cq(&s, 1);
>
> cleanup_io_uring_test();
> }
> --
> 2.39.1
>
--
Cyril Hrubis
chrubis@suse.cz
* Re: [LTP] [PATCH 2/3] io_uring: Test READV and WRITEV operations
2026-03-23 16:22 ` Cyril Hrubis
@ 2026-03-24 5:15 ` Sachin Sant
0 siblings, 0 replies; 9+ messages in thread
From: Sachin Sant @ 2026-03-24 5:15 UTC (permalink / raw)
To: Cyril Hrubis; +Cc: ltp
On 23/03/26 9:52 pm, Cyril Hrubis wrote:
> Hi!
>> +#define TEST_FILE "io_uring_test_file"
>> +#define QUEUE_DEPTH 2
>> +#define NUM_VECS 4
>> +#define VEC_SIZE 1024
>> +
>> +static char write_bufs[NUM_VECS][VEC_SIZE];
>> +static char read_bufs[NUM_VECS][VEC_SIZE];
>> +static struct iovec write_iovs[NUM_VECS];
>> +static struct iovec read_iovs[NUM_VECS];
> The guarded buffers can allocate the iovec for you as well. Have a look
> at the readv01.c test to see how to do that in the tst_test structure.
Thanks Cyril for the review comments. I have addressed them in v2.
--
Thanks
- Sachin