* [PATCH 0/5] Add support for specifying ramp up period by amount of IO
@ 2025-12-17 16:17 Jan Kara
2025-12-17 16:17 ` [PATCH 1/5] time: rename in_ramp_time() and ramp_time_over() Jan Kara
` (4 more replies)
0 siblings, 5 replies; 20+ messages in thread
From: Jan Kara @ 2025-12-17 16:17 UTC (permalink / raw)
To: fio; +Cc: Jens Axboe, Vincent Fu, Jan Kara
Hello!
In some cases the ramp up period is not easy to define by an amount of time.
This is, for example, the case for buffered write measurements where we want
to start measuring only once dirty throttling kicks in. The time until dirty
throttling kicks in depends on the dirty limit (easy to figure out) and the
speed of writes to the page cache (difficult to know in advance). This patch
series adds an option to fio to specify the ramp up period in terms of an
amount of IO - i.e., the ramp up period finishes once the given amount of IO
is done.
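For illustration, a job file using the option proposed in this series might
look like this (filename and sizes are just example values):

```ini
; Measure steady-state buffered write throughput: let the first 4 GB of
; writes fill the page cache and trigger dirty throttling, and only then
; start accounting performance numbers.
[buffered-write]
filename=/mnt/test/file
rw=write
bs=1M
size=20g
ramp_size=4g
```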
Please consider including this feature in fio; it would make some of the
testing we do more reliable. Thanks!
Honza
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH 1/5] time: rename in_ramp_time() and ramp_time_over()
2025-12-17 16:17 [PATCH 0/5] Add support for specifying ramp up period by amount of IO Jan Kara
@ 2025-12-17 16:17 ` Jan Kara
2025-12-17 22:52 ` Damien Le Moal
2025-12-19 3:53 ` fiotestbot
2025-12-17 16:17 ` [PATCH 2/5] td: Initialize ramp_period_over based on options Jan Kara
` (3 subsequent siblings)
4 siblings, 2 replies; 20+ messages in thread
From: Jan Kara @ 2025-12-17 16:17 UTC (permalink / raw)
To: fio; +Cc: Jens Axboe, Vincent Fu, Jan Kara
Rename in_ramp_time() and ramp_time_over() to in_ramp_period() and
ramp_period_over(), respectively. We will be adding other, non-time-based
methods for determining whether the load is ramping up, so the old names
would become confusing.
Signed-off-by: Jan Kara <jack@suse.cz>
---
backend.c | 12 ++++++------
eta.c | 6 +++---
fio.h | 2 +-
fio_time.h | 4 ++--
io_u.c | 4 ++--
stat.c | 2 +-
time.c | 14 +++++++-------
7 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/backend.c b/backend.c
index 13c77552a1ed..1e4c4a38369c 100644
--- a/backend.c
+++ b/backend.c
@@ -298,7 +298,7 @@ static inline void update_ts_cache(struct thread_data *td)
static inline bool runtime_exceeded(struct thread_data *td, struct timespec *t)
{
- if (in_ramp_time(td))
+ if (in_ramp_period(td))
return false;
if (!td->o.timeout)
return false;
@@ -1031,7 +1031,7 @@ static void do_io(struct thread_data *td, uint64_t *bytes_done)
for (i = 0; i < DDIR_RWDIR_CNT; i++)
bytes_done[i] = td->bytes_done[i];
- if (in_ramp_time(td))
+ if (in_ramp_period(td))
td_set_runstate(td, TD_RAMP);
else
td_set_runstate(td, TD_RUNNING);
@@ -1162,7 +1162,7 @@ static void do_io(struct thread_data *td, uint64_t *bytes_done)
else
io_u->end_io = verify_io_u;
td_set_runstate(td, TD_VERIFYING);
- } else if (in_ramp_time(td))
+ } else if (in_ramp_period(td))
td_set_runstate(td, TD_RAMP);
else
td_set_runstate(td, TD_RUNNING);
@@ -1232,7 +1232,7 @@ reap:
!td_ioengine_flagged(td, FIO_NOIO))
continue;
- if (!in_ramp_time(td) && should_check_rate(td)) {
+ if (!in_ramp_period(td) && should_check_rate(td)) {
if (check_min_rate(td, &comp_time)) {
if (exitall_on_terminate || td->o.exitall_error)
fio_terminate_threads(td->groupid, td->o.exit_what);
@@ -1240,7 +1240,7 @@ reap:
break;
}
}
- if (!in_ramp_time(td) && td->o.latency_target)
+ if (!in_ramp_period(td) && td->o.latency_target)
lat_target_check(td);
}
@@ -2673,7 +2673,7 @@ reap:
if (td->runstate != TD_INITIALIZED)
continue;
- if (in_ramp_time(td))
+ if (in_ramp_period(td))
td_set_runstate(td, TD_RAMP);
else
td_set_runstate(td, TD_RUNNING);
diff --git a/eta.c b/eta.c
index 16109510067b..947752b9dc17 100644
--- a/eta.c
+++ b/eta.c
@@ -274,12 +274,12 @@ static unsigned long thread_eta(struct thread_data *td)
uint64_t ramp_time = td->o.ramp_time;
t_eta = __timeout + start_delay;
- if (!td->ramp_time_over) {
+ if (!td->ramp_period_over) {
t_eta += ramp_time;
}
t_eta /= 1000000ULL;
- if ((td->runstate == TD_RAMP) && in_ramp_time(td)) {
+ if ((td->runstate == TD_RAMP) && in_ramp_period(td)) {
unsigned long ramp_left;
ramp_left = mtime_since_now(&td->epoch);
@@ -522,7 +522,7 @@ static bool calc_thread_status(struct jobs_eta *je, int force)
any_td_in_ramp = false;
for_each_td(td) {
- any_td_in_ramp |= in_ramp_time(td);
+ any_td_in_ramp |= in_ramp_period(td);
} end_for_each();
if (write_bw_log && rate_time > bw_avg_time && !any_td_in_ramp) {
calc_rate(unified_rw_rep, rate_time, io_bytes, rate_io_bytes,
diff --git a/fio.h b/fio.h
index 037678d182b8..87c20803a929 100644
--- a/fio.h
+++ b/fio.h
@@ -417,7 +417,7 @@ struct thread_data {
struct timespec terminate_time;
unsigned int ts_cache_nr;
unsigned int ts_cache_mask;
- bool ramp_time_over;
+ bool ramp_period_over;
/*
* Time since last latency_window was started
diff --git a/fio_time.h b/fio_time.h
index 969ad68d5b9c..30de7aca4fb1 100644
--- a/fio_time.h
+++ b/fio_time.h
@@ -27,8 +27,8 @@ extern uint64_t usec_spin(unsigned int);
extern uint64_t usec_sleep(struct thread_data *, unsigned long);
extern void fill_start_time(struct timespec *);
extern void set_genesis_time(void);
-extern bool ramp_time_over(struct thread_data *);
-extern bool in_ramp_time(struct thread_data *);
+extern bool ramp_period_over(struct thread_data *);
+extern bool in_ramp_period(struct thread_data *);
extern void fio_time_init(void);
extern void timespec_add_msec(struct timespec *, unsigned int);
extern void set_epoch_time(struct thread_data *, clockid_t, clockid_t);
diff --git a/io_u.c b/io_u.c
index ec3f668cae49..be0a0555098f 100644
--- a/io_u.c
+++ b/io_u.c
@@ -2106,7 +2106,7 @@ static void file_log_write_comp(const struct thread_data *td, struct fio_file *f
static bool should_account(struct thread_data *td)
{
- return ramp_time_over(td) && (td->runstate == TD_RUNNING ||
+ return ramp_period_over(td) && (td->runstate == TD_RUNNING ||
td->runstate == TD_VERIFYING);
}
@@ -2333,7 +2333,7 @@ int io_u_queued_complete(struct thread_data *td, int min_evts)
*/
void io_u_queued(struct thread_data *td, struct io_u *io_u)
{
- if (!td->o.disable_slat && ramp_time_over(td) && td->o.stats) {
+ if (!td->o.disable_slat && ramp_period_over(td) && td->o.stats) {
if (td->parent)
td = td->parent;
add_slat_sample(td, io_u);
diff --git a/stat.c b/stat.c
index a67d35514d1a..c7c0f2f4a3ef 100644
--- a/stat.c
+++ b/stat.c
@@ -3626,7 +3626,7 @@ static int add_iops_samples(struct thread_data *td, struct timespec *t)
static bool td_in_logging_state(struct thread_data *td)
{
- if (in_ramp_time(td))
+ if (in_ramp_period(td))
return false;
switch(td->runstate) {
diff --git a/time.c b/time.c
index 7f85c8de3bcb..9625e8cd92af 100644
--- a/time.c
+++ b/time.c
@@ -110,31 +110,31 @@ uint64_t utime_since_genesis(void)
return utime_since_now(&genesis);
}
-bool in_ramp_time(struct thread_data *td)
+bool in_ramp_period(struct thread_data *td)
{
- return td->o.ramp_time && !td->ramp_time_over;
+ return td->o.ramp_time && !td->ramp_period_over;
}
static bool parent_update_ramp(struct thread_data *td)
{
struct thread_data *parent = td->parent;
- if (!parent || parent->ramp_time_over)
+ if (!parent || parent->ramp_period_over)
return false;
reset_all_stats(parent);
- parent->ramp_time_over = true;
+ parent->ramp_period_over = true;
td_set_runstate(parent, TD_RAMP);
return true;
}
-bool ramp_time_over(struct thread_data *td)
+bool ramp_period_over(struct thread_data *td)
{
- if (!td->o.ramp_time || td->ramp_time_over)
+ if (!td->o.ramp_time || td->ramp_period_over)
return true;
if (utime_since_now(&td->epoch) >= td->o.ramp_time) {
- td->ramp_time_over = true;
+ td->ramp_period_over = true;
reset_all_stats(td);
reset_io_stats(td);
td_set_runstate(td, TD_RAMP);
--
2.51.0
* [PATCH 2/5] td: Initialize ramp_period_over based on options
2025-12-17 16:17 [PATCH 0/5] Add support for specifying ramp up period by amount of IO Jan Kara
2025-12-17 16:17 ` [PATCH 1/5] time: rename in_ramp_time() and ramp_time_over() Jan Kara
@ 2025-12-17 16:17 ` Jan Kara
2025-12-17 22:53 ` Damien Le Moal
2025-12-17 16:17 ` [PATCH 3/5] eta: Use in_ramp_period() instead of opencoding it Jan Kara
` (2 subsequent siblings)
4 siblings, 1 reply; 20+ messages in thread
From: Jan Kara @ 2025-12-17 16:17 UTC (permalink / raw)
To: fio; +Cc: Jens Axboe, Vincent Fu, Jan Kara
Instead of checking whether ramp_time is specified each time we need to
check whether we are in the ramp period, initialize ramp_period_over
based on the ramp_time option. This will simplify things significantly
later, when the ramp up period can be defined in a different way.
Signed-off-by: Jan Kara <jack@suse.cz>
---
fio_time.h | 1 +
init.c | 2 ++
time.c | 10 ++++++++--
3 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/fio_time.h b/fio_time.h
index 30de7aca4fb1..c67c9450f484 100644
--- a/fio_time.h
+++ b/fio_time.h
@@ -29,6 +29,7 @@ extern void fill_start_time(struct timespec *);
extern void set_genesis_time(void);
extern bool ramp_period_over(struct thread_data *);
extern bool in_ramp_period(struct thread_data *);
+extern void td_ramp_period_init(struct thread_data *);
extern void fio_time_init(void);
extern void timespec_add_msec(struct timespec *, unsigned int);
extern void set_epoch_time(struct thread_data *, clockid_t, clockid_t);
diff --git a/init.c b/init.c
index cf66ac2c51f1..8b7728907c7f 100644
--- a/init.c
+++ b/init.c
@@ -1676,6 +1676,8 @@ static int add_job(struct thread_data *td, const char *jobname, int job_add_num,
init_thread_stat_min_vals(&td->ts);
+ td_ramp_period_init(td);
+
/*
* td->>ddir_seq_nr needs to be initialized to 1, NOT o->ddir_seq_nr,
* so that get_next_offset gets a new random offset the first time it
diff --git a/time.c b/time.c
index 9625e8cd92af..f90f2d9044fc 100644
--- a/time.c
+++ b/time.c
@@ -112,7 +112,7 @@ uint64_t utime_since_genesis(void)
bool in_ramp_period(struct thread_data *td)
{
- return td->o.ramp_time && !td->ramp_period_over;
+ return !td->ramp_period_over;
}
static bool parent_update_ramp(struct thread_data *td)
@@ -130,7 +130,7 @@ static bool parent_update_ramp(struct thread_data *td)
bool ramp_period_over(struct thread_data *td)
{
- if (!td->o.ramp_time || td->ramp_period_over)
+ if (td->ramp_period_over)
return true;
if (utime_since_now(&td->epoch) >= td->o.ramp_time) {
@@ -153,6 +153,12 @@ bool ramp_period_over(struct thread_data *td)
return false;
}
+void td_ramp_period_init(struct thread_data *td)
+{
+ if (!td->o.ramp_time)
+ td->ramp_period_over = true;
+}
+
void fio_time_init(void)
{
int i;
--
2.51.0
* [PATCH 3/5] eta: Use in_ramp_period() instead of opencoding it
2025-12-17 16:17 [PATCH 0/5] Add support for specifying ramp up period by amount of IO Jan Kara
2025-12-17 16:17 ` [PATCH 1/5] time: rename in_ramp_time() and ramp_time_over() Jan Kara
2025-12-17 16:17 ` [PATCH 2/5] td: Initialize ramp_period_over based on options Jan Kara
@ 2025-12-17 16:17 ` Jan Kara
2025-12-17 22:55 ` Damien Le Moal
2025-12-17 16:17 ` [PATCH 4/5] time: Evaluate ramp up condition once per second Jan Kara
2025-12-17 16:17 ` [PATCH 5/5] Add option to specify ramp period by amount of IO Jan Kara
4 siblings, 1 reply; 20+ messages in thread
From: Jan Kara @ 2025-12-17 16:17 UTC (permalink / raw)
To: fio; +Cc: Jens Axboe, Vincent Fu, Jan Kara
Signed-off-by: Jan Kara <jack@suse.cz>
---
eta.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/eta.c b/eta.c
index 947752b9dc17..c6e3cffb81d6 100644
--- a/eta.c
+++ b/eta.c
@@ -274,9 +274,8 @@ static unsigned long thread_eta(struct thread_data *td)
uint64_t ramp_time = td->o.ramp_time;
t_eta = __timeout + start_delay;
- if (!td->ramp_period_over) {
+ if (in_ramp_period(td))
t_eta += ramp_time;
- }
t_eta /= 1000000ULL;
if ((td->runstate == TD_RAMP) && in_ramp_period(td)) {
--
2.51.0
* [PATCH 4/5] time: Evaluate ramp up condition once per second
2025-12-17 16:17 [PATCH 0/5] Add support for specifying ramp up period by amount of IO Jan Kara
` (2 preceding siblings ...)
2025-12-17 16:17 ` [PATCH 3/5] eta: Use in_ramp_period() instead of opencoding it Jan Kara
@ 2025-12-17 16:17 ` Jan Kara
2025-12-17 22:58 ` Damien Le Moal
2025-12-17 16:17 ` [PATCH 5/5] Add option to specify ramp period by amount of IO Jan Kara
4 siblings, 1 reply; 20+ messages in thread
From: Jan Kara @ 2025-12-17 16:17 UTC (permalink / raw)
To: fio; +Cc: Jens Axboe, Vincent Fu, Jan Kara
Instead of evaluating whether the ramp up period has finished on each IO
submission and completion, evaluate it once per second and set an
appropriate state variable in thread_data. Later, when the ramp up period
end condition becomes more complex and involves stat data from all
threads, evaluating it on every IO would unnecessarily slow down IO.
Signed-off-by: Jan Kara <jack@suse.cz>
---
fio.h | 2 +-
fio_time.h | 5 ++++
helper_thread.c | 8 +++++-
time.c | 66 +++++++++++++++++++++++++++++++++----------------
4 files changed, 58 insertions(+), 23 deletions(-)
diff --git a/fio.h b/fio.h
index 87c20803a929..fdd36fa49c95 100644
--- a/fio.h
+++ b/fio.h
@@ -417,7 +417,7 @@ struct thread_data {
struct timespec terminate_time;
unsigned int ts_cache_nr;
unsigned int ts_cache_mask;
- bool ramp_period_over;
+ unsigned int ramp_period_state;
/*
* Time since last latency_window was started
diff --git a/fio_time.h b/fio_time.h
index c67c9450f484..54dad8c7812e 100644
--- a/fio_time.h
+++ b/fio_time.h
@@ -8,6 +8,10 @@
/* IWYU pragma: end_exports */
#include "lib/types.h"
+#define RAMP_PERIOD_CHECK_MSEC 1000
+
+extern bool ramp_period_enabled;
+
struct thread_data;
extern uint64_t ntime_since(const struct timespec *, const struct timespec *);
extern uint64_t ntime_since_now(const struct timespec *);
@@ -27,6 +31,7 @@ extern uint64_t usec_spin(unsigned int);
extern uint64_t usec_sleep(struct thread_data *, unsigned long);
extern void fill_start_time(struct timespec *);
extern void set_genesis_time(void);
+extern int ramp_period_check(void);
extern bool ramp_period_over(struct thread_data *);
extern bool in_ramp_period(struct thread_data *);
extern void td_ramp_period_init(struct thread_data *);
diff --git a/helper_thread.c b/helper_thread.c
index fed21d1d61e7..88614e58e5a8 100644
--- a/helper_thread.c
+++ b/helper_thread.c
@@ -290,7 +290,13 @@ static void *helper_thread_main(void *data)
.interval_ms = steadystate_enabled ? ss_check_interval :
0,
.func = steadystate_check,
- }
+ },
+ {
+ .name = "ramp_period",
+ .interval_ms = ramp_period_enabled ?
+ RAMP_PERIOD_CHECK_MSEC : 0,
+ .func = ramp_period_check,
+ },
};
struct timespec ts;
long clk_tck;
diff --git a/time.c b/time.c
index f90f2d9044fc..2709d5b9784a 100644
--- a/time.c
+++ b/time.c
@@ -6,6 +6,12 @@
static struct timespec genesis;
static unsigned long ns_granularity;
+enum ramp_period_states {
+ RAMP_RUNNING,
+ RAMP_FINISHING,
+ RAMP_DONE
+};
+
void timespec_add_msec(struct timespec *ts, unsigned int msec)
{
uint64_t adj_nsec = 1000000ULL * msec;
@@ -112,51 +118,69 @@ uint64_t utime_since_genesis(void)
bool in_ramp_period(struct thread_data *td)
{
- return !td->ramp_period_over;
+ return td->ramp_period_state != RAMP_DONE;
+}
+
+bool ramp_period_enabled = false;
+
+int ramp_period_check(void)
+{
+ for_each_td(td) {
+ if (td->ramp_period_state != RAMP_RUNNING)
+ continue;
+ if (utime_since_now(&td->epoch) >= td->o.ramp_time)
+ td->ramp_period_state = RAMP_FINISHING;
+ } end_for_each();
+
+ return 0;
}
static bool parent_update_ramp(struct thread_data *td)
{
struct thread_data *parent = td->parent;
- if (!parent || parent->ramp_period_over)
+ if (!parent || parent->ramp_period_state == RAMP_DONE)
return false;
reset_all_stats(parent);
- parent->ramp_period_over = true;
+ parent->ramp_period_state = RAMP_DONE;
td_set_runstate(parent, TD_RAMP);
return true;
}
+
bool ramp_period_over(struct thread_data *td)
{
- if (td->ramp_period_over)
+ if (td->ramp_period_state == RAMP_DONE)
return true;
- if (utime_since_now(&td->epoch) >= td->o.ramp_time) {
- td->ramp_period_over = true;
- reset_all_stats(td);
- reset_io_stats(td);
- td_set_runstate(td, TD_RAMP);
+ if (td->ramp_period_state == RAMP_RUNNING)
+ return false;
- /*
- * If we have a parent, the parent isn't doing IO. Hence
- * the parent never enters do_io(), which will switch us
- * from RAMP -> RUNNING. Do this manually here.
- */
- if (parent_update_ramp(td))
- td_set_runstate(td, TD_RUNNING);
+ td->ramp_period_state = RAMP_DONE;
+ reset_all_stats(td);
+ reset_io_stats(td);
+ td_set_runstate(td, TD_RAMP);
- return true;
- }
+ /*
+ * If we have a parent, the parent isn't doing IO. Hence
+ * the parent never enters do_io(), which will switch us
+ * from RAMP -> RUNNING. Do this manually here.
+ */
+ if (parent_update_ramp(td))
+ td_set_runstate(td, TD_RUNNING);
- return false;
+ return true;
}
void td_ramp_period_init(struct thread_data *td)
{
- if (!td->o.ramp_time)
- td->ramp_period_over = true;
+ if (td->o.ramp_time) {
+ td->ramp_period_state = RAMP_RUNNING;
+ ramp_period_enabled = true;
+ } else {
+ td->ramp_period_state = RAMP_DONE;
+ }
}
void fio_time_init(void)
--
2.51.0
* [PATCH 5/5] Add option to specify ramp period by amount of IO
2025-12-17 16:17 [PATCH 0/5] Add support for specifying ramp up period by amount of IO Jan Kara
` (3 preceding siblings ...)
2025-12-17 16:17 ` [PATCH 4/5] time: Evaluate ramp up condition once per second Jan Kara
@ 2025-12-17 16:17 ` Jan Kara
2025-12-17 22:56 ` Jens Axboe
` (2 more replies)
4 siblings, 3 replies; 20+ messages in thread
From: Jan Kara @ 2025-12-17 16:17 UTC (permalink / raw)
To: fio; +Cc: Jens Axboe, Vincent Fu, Jan Kara
In some cases the ramp up period is not easy to define by an amount of
time. This is, for example, the case for buffered write measurements where
we want to start measuring only once dirty throttling kicks in. The time
until dirty throttling kicks in depends on the dirty limit (easy to figure
out) and the speed of writes to the page cache (difficult to know in
advance). Add an option, ramp_size, which determines the ramp up period by
the amount of IO written (either by each job or by each group when group
reporting is enabled).
Signed-off-by: Jan Kara <jack@suse.cz>
---
HOWTO.rst | 7 +++++++
fio.1 | 6 ++++++
options.c | 11 +++++++++++
thread_options.h | 2 ++
time.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++--
5 files changed, 74 insertions(+), 2 deletions(-)
diff --git a/HOWTO.rst b/HOWTO.rst
index 9f55a73bde05..49ebae6275a6 100644
--- a/HOWTO.rst
+++ b/HOWTO.rst
@@ -715,6 +715,13 @@ Time related parameters
:option:`runtime` is specified. When the unit is omitted, the value is
given in seconds.
+.. option:: ramp_size=size
+
+ If set, fio will wait until the workload does given amount of IO before
+ logging any performance numbers. Similar considerations apply as for
+ ``ramp_time`` option. When the unit is omitted, the value is given in
+ megabytes.
+
.. option:: clocksource=str
Use the given clocksource as the base of timing. The supported options are:
diff --git a/fio.1 b/fio.1
index 9c4ff08c86ad..88e97518f853 100644
--- a/fio.1
+++ b/fio.1
@@ -497,6 +497,12 @@ thus it will increase the total runtime if a special timeout or
\fBruntime\fR is specified. When the unit is omitted, the value is
given in seconds.
.TP
+.BI ramp_size \fR=\fPsize
+If set, fio will wait until the workload does given amount of IO before
+logging any performance numbers. Similar considerations apply as for
+\fBramp_time\fR option. When the unit is omitted, the value is given in
+megabytes.
+.TP
.BI clocksource \fR=\fPstr
Use the given clocksource as the base of timing. The supported options are:
.RS
diff --git a/options.c b/options.c
index 8e3de528bbbb..d050a1f6a417 100644
--- a/options.c
+++ b/options.c
@@ -3093,6 +3093,17 @@ struct fio_option fio_options[FIO_MAX_OPTS] = {
.category = FIO_OPT_C_GENERAL,
.group = FIO_OPT_G_RUNTIME,
},
+ {
+ .name = "ramp_size",
+ .lname = "Ramp size",
+ .type = FIO_OPT_STR_VAL,
+ .off1 = offsetof(struct thread_options, ramp_size),
+ .minval = 1,
+ .help = "Amount of data transferred before measuring performance",
+ .interval = 1024 * 1024,
+ .category = FIO_OPT_C_GENERAL,
+ .group = FIO_OPT_G_RUNTIME,
+ },
{
.name = "clocksource",
.lname = "Clock source",
diff --git a/thread_options.h b/thread_options.h
index 3abce7318ce2..b4dd8d7acd49 100644
--- a/thread_options.h
+++ b/thread_options.h
@@ -212,6 +212,7 @@ struct thread_options {
unsigned long long start_delay_high;
unsigned long long timeout;
unsigned long long ramp_time;
+ unsigned long long ramp_size;
unsigned int ss_state;
fio_fp64_t ss_limit;
unsigned long long ss_dur;
@@ -546,6 +547,7 @@ struct thread_options_pack {
uint64_t start_delay_high;
uint64_t timeout;
uint64_t ramp_time;
+ uint64_t ramp_size;
uint64_t ss_dur;
uint64_t ss_ramp_time;
uint32_t ss_state;
diff --git a/time.c b/time.c
index 2709d5b9784a..d2a201bb262c 100644
--- a/time.c
+++ b/time.c
@@ -125,11 +125,57 @@ bool ramp_period_enabled = false;
int ramp_period_check(void)
{
+ uint64_t group_bytes = 0;
+ int prev_groupid = -1;
+ bool group_ramp_period_over = false;
+
for_each_td(td) {
if (td->ramp_period_state != RAMP_RUNNING)
continue;
- if (utime_since_now(&td->epoch) >= td->o.ramp_time)
+
+ if (td->o.ramp_time &&
+ utime_since_now(&td->epoch) >= td->o.ramp_time) {
td->ramp_period_state = RAMP_FINISHING;
+ continue;
+ }
+
+ if (td->o.ramp_size) {
+ int ddir;
+ const bool needs_lock = td_async_processing(td);
+
+ if (!td->o.group_reporting ||
+ (td->o.group_reporting &&
+ td->groupid != prev_groupid)) {
+ group_bytes = 0;
+ prev_groupid = td->groupid;
+ group_ramp_period_over = false;
+ }
+
+ if (needs_lock)
+ __td_io_u_lock(td);
+
+ for (ddir = 0; ddir < DDIR_RWDIR_CNT; ddir++)
+ group_bytes += td->io_bytes[ddir];
+
+ if (needs_lock)
+ __td_io_u_unlock(td);
+
+ if (group_bytes >= td->o.ramp_size) {
+ td->ramp_period_state = RAMP_FINISHING;
+ /*
+ * Mark ramp up for all threads in the group as
+ * done.
+ */
+ if (td->o.group_reporting &&
+ !group_ramp_period_over) {
+ group_ramp_period_over = true;
+ for_each_td(td2) {
+ if (td2->groupid == td->groupid)
+ td2->ramp_period_state = RAMP_FINISHING;
+ } end_for_each();
+ }
+ }
+ }
} end_for_each();
return 0;
@@ -175,7 +221,7 @@ bool ramp_period_over(struct thread_data *td)
void td_ramp_period_init(struct thread_data *td)
{
- if (td->o.ramp_time) {
+ if (td->o.ramp_time || td->o.ramp_size) {
td->ramp_period_state = RAMP_RUNNING;
ramp_period_enabled = true;
} else {
--
2.51.0
* Re: [PATCH 1/5] time: rename in_ramp_time() and ramp_time_over()
2025-12-17 16:17 ` [PATCH 1/5] time: rename in_ramp_time() and ramp_time_over() Jan Kara
@ 2025-12-17 22:52 ` Damien Le Moal
2025-12-19 3:53 ` fiotestbot
1 sibling, 0 replies; 20+ messages in thread
From: Damien Le Moal @ 2025-12-17 22:52 UTC (permalink / raw)
To: Jan Kara, fio; +Cc: Jens Axboe, Vincent Fu
On 12/18/25 01:17, Jan Kara wrote:
> Rename in_ramp_time() and ramp_time_over() to in_ramp_period() and
> ramp_period_over(), respectively. We will be adding other, non-time-based
> methods for determining whether the load is ramping up, so the old names
> would become confusing.
>
> Signed-off-by: Jan Kara <jack@suse.cz>
Looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 2/5] td: Initialize ramp_period_over based on options
2025-12-17 16:17 ` [PATCH 2/5] td: Initialize ramp_period_over based on options Jan Kara
@ 2025-12-17 22:53 ` Damien Le Moal
0 siblings, 0 replies; 20+ messages in thread
From: Damien Le Moal @ 2025-12-17 22:53 UTC (permalink / raw)
To: Jan Kara, fio; +Cc: Jens Axboe, Vincent Fu
On 12/18/25 01:17, Jan Kara wrote:
> Instead of checking whether ramp_time is specified each time we need to
> check whether we are in the ramp period, initialize ramp_period_over
> based on the ramp_time option. This will simplify things significantly
> later, when the ramp up period can be defined in a different way.
>
> Signed-off-by: Jan Kara <jack@suse.cz>
Looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 3/5] eta: Use in_ramp_period() instead of opencoding it
2025-12-17 16:17 ` [PATCH 3/5] eta: Use in_ramp_period() instead of opencoding it Jan Kara
@ 2025-12-17 22:55 ` Damien Le Moal
0 siblings, 0 replies; 20+ messages in thread
From: Damien Le Moal @ 2025-12-17 22:55 UTC (permalink / raw)
To: Jan Kara, fio; +Cc: Jens Axboe, Vincent Fu
On 12/18/25 01:17, Jan Kara wrote:
> Signed-off-by: Jan Kara <jack@suse.cz>
Looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 5/5] Add option to specify ramp period by amount of IO
2025-12-17 16:17 ` [PATCH 5/5] Add option to specify ramp period by amount of IO Jan Kara
@ 2025-12-17 22:56 ` Jens Axboe
2025-12-18 17:18 ` Jan Kara
2025-12-17 23:03 ` Damien Le Moal
2025-12-18 15:19 ` Vincent Fu
2 siblings, 1 reply; 20+ messages in thread
From: Jens Axboe @ 2025-12-17 22:56 UTC (permalink / raw)
To: Jan Kara, fio; +Cc: Vincent Fu
On 12/17/25 9:17 AM, Jan Kara wrote:
> In some cases the ramp up period is not easy to define by an amount of
> time. This is, for example, the case for buffered write measurements where
> we want to start measuring only once dirty throttling kicks in. The time
> until dirty throttling kicks in depends on the dirty limit (easy to figure
> out) and the speed of writes to the page cache (difficult to know in
> advance). Add an option, ramp_size, which determines the ramp up period by
> the amount of IO written (either by each job or by each group when group
> reporting is enabled).
Looks fine, but needs cconv additions and bumping of the server version
number or this won't work on client/server configurations.
--
Jens Axboe
* Re: [PATCH 4/5] time: Evaluate ramp up condition once per second
2025-12-17 16:17 ` [PATCH 4/5] time: Evaluate ramp up condition once per second Jan Kara
@ 2025-12-17 22:58 ` Damien Le Moal
0 siblings, 0 replies; 20+ messages in thread
From: Damien Le Moal @ 2025-12-17 22:58 UTC (permalink / raw)
To: Jan Kara, fio; +Cc: Jens Axboe, Vincent Fu
On 12/18/25 01:17, Jan Kara wrote:
> Instead of evaluating whether the ramp up period has finished on each IO
> submission and completion, evaluate it once per second and set an
> appropriate state variable in thread_data. Later, when the ramp up period
> end condition becomes more complex and involves stat data from all
> threads, evaluating it on every IO would unnecessarily slow down IO.
>
> Signed-off-by: Jan Kara <jack@suse.cz>
Looks good.
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 5/5] Add option to specify ramp period by amount of IO
2025-12-17 16:17 ` [PATCH 5/5] Add option to specify ramp period by amount of IO Jan Kara
2025-12-17 22:56 ` Jens Axboe
@ 2025-12-17 23:03 ` Damien Le Moal
2025-12-18 16:31 ` Jan Kara
2025-12-18 15:19 ` Vincent Fu
2 siblings, 1 reply; 20+ messages in thread
From: Damien Le Moal @ 2025-12-17 23:03 UTC (permalink / raw)
To: Jan Kara, fio; +Cc: Jens Axboe, Vincent Fu
On 12/18/25 01:17, Jan Kara wrote:
> In some cases the ramp up period is not easy to define by an amount of
> time. This is, for example, the case for buffered write measurements where
> we want to start measuring only once dirty throttling kicks in. The time
> until dirty throttling kicks in depends on the dirty limit (easy to figure
> out) and the speed of writes to the page cache (difficult to know in
> advance). Add an option, ramp_size, which determines the ramp up period by
> the amount of IO written (either by each job or by each group when group
> reporting is enabled).
>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
> HOWTO.rst | 7 +++++++
> fio.1 | 6 ++++++
> options.c | 11 +++++++++++
> thread_options.h | 2 ++
> time.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++--
> 5 files changed, 74 insertions(+), 2 deletions(-)
>
> diff --git a/HOWTO.rst b/HOWTO.rst
> index 9f55a73bde05..49ebae6275a6 100644
> --- a/HOWTO.rst
> +++ b/HOWTO.rst
> @@ -715,6 +715,13 @@ Time related parameters
> :option:`runtime` is specified. When the unit is omitted, the value is
> given in seconds.
>
> +.. option:: ramp_size=size
> +
> + If set, fio will wait until the workload does given amount of IO before
> + logging any performance numbers. Similar considerations apply as for
> + ``ramp_time`` option. When the unit is omitted, the value is given in
> + megabytes.
Hmm... It may be less confusing/easier to use the regular fio "int" parameter
type here, which takes all the kilo, mega, etc. suffixes. This means that the
default without any suffix would be bytes, not megabytes.
Other than this, this looks good to me, and indeed super useful when testing
file systems with buffered I/O so that the initial super-fast buffering spike
can easily be excluded from the accounting.
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 5/5] Add option to specify ramp period by amount of IO
2025-12-17 16:17 ` [PATCH 5/5] Add option to specify ramp period by amount of IO Jan Kara
2025-12-17 22:56 ` Jens Axboe
2025-12-17 23:03 ` Damien Le Moal
@ 2025-12-18 15:19 ` Vincent Fu
2025-12-18 17:18 ` Jan Kara
2 siblings, 1 reply; 20+ messages in thread
From: Vincent Fu @ 2025-12-18 15:19 UTC (permalink / raw)
To: Jan Kara; +Cc: fio, Jens Axboe
On Wed, Dec 17, 2025 at 11:17 AM Jan Kara <jack@suse.cz> wrote:
>
> In some cases the ramp up period is not easy to define by an amount of
> time. This is, for example, the case for buffered write measurements where
> we want to start measuring only once dirty throttling kicks in. The time
> until dirty throttling kicks in depends on the dirty limit (easy to figure
> out) and the speed of writes to the page cache (difficult to know in
> advance). Add an option, ramp_size, which determines the ramp up period by
> the amount of IO written (either by each job or by each group when group
> reporting is enabled).
>
> Signed-off-by: Jan Kara <jack@suse.cz>
> ---
> HOWTO.rst | 7 +++++++
> fio.1 | 6 ++++++
> options.c | 11 +++++++++++
> thread_options.h | 2 ++
> time.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++--
> 5 files changed, 74 insertions(+), 2 deletions(-)
>
> diff --git a/HOWTO.rst b/HOWTO.rst
> index 9f55a73bde05..49ebae6275a6 100644
> --- a/HOWTO.rst
> +++ b/HOWTO.rst
> @@ -715,6 +715,13 @@ Time related parameters
> :option:`runtime` is specified. When the unit is omitted, the value is
> given in seconds.
>
> +.. option:: ramp_size=size
> +
> + If set, fio will wait until the workload does given amount of IO before
> + logging any performance numbers. Similar considerations apply as for
> + ``ramp_time`` option. When the unit is omitted, the value is given in
> + megabytes.
> +
At the very least the documentation should explain how this option works with
reporting groups.
Currently this patch compares the running total of the group's bytes to one
job's value for ramp_size and then declares ramp time over for all jobs in
the group if the threshold is exceeded. A little more thought should be put
into potential use cases here. Should fio allow ramp_size to differ among jobs
in a reporting group? Consider adding an option to enable/disable connections
between jobs in the same reporting group.
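Consider, for instance, a hypothetical job file such as:

```ini
[global]
group_reporting=1
rw=write
bs=1M
size=10g

[job1]
ramp_size=1g

[job2]
ramp_size=4g
```

With the current patch, whichever job's threshold the accumulating group
total crosses first ends the ramp period for every job in the group, so
job2's larger value would effectively never apply.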
Vincent
* Re: [PATCH 5/5] Add option to specify ramp period by amount of IO
2025-12-17 23:03 ` Damien Le Moal
@ 2025-12-18 16:31 ` Jan Kara
2025-12-19 8:12 ` Damien Le Moal
0 siblings, 1 reply; 20+ messages in thread
From: Jan Kara @ 2025-12-18 16:31 UTC (permalink / raw)
To: Damien Le Moal; +Cc: Jan Kara, fio, Jens Axboe, Vincent Fu
On Thu 18-12-25 08:03:36, Damien Le Moal wrote:
> On 12/18/25 01:17, Jan Kara wrote:
> > In some cases the ramp up period is not easy to define by amount of
> > time. This is for example a case of buffered writes measurement where we
> > want to start measuring only once dirty throttling kicks in. The time
> > until dirty throttling kicks in depends on dirty limit (easy to figure
> > out) and speed of writes to the page cache (difficult to know in
> > advance). Add option ramp_size which determines the ramp up period by
> > the amount of IO written (either by each job or by each group when group
> > reporting is enabled).
> >
> > Signed-off-by: Jan Kara <jack@suse.cz>
> > ---
> > HOWTO.rst | 7 +++++++
> > fio.1 | 6 ++++++
> > options.c | 11 +++++++++++
> > thread_options.h | 2 ++
> > time.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++--
> > 5 files changed, 74 insertions(+), 2 deletions(-)
> >
> > diff --git a/HOWTO.rst b/HOWTO.rst
> > index 9f55a73bde05..49ebae6275a6 100644
> > --- a/HOWTO.rst
> > +++ b/HOWTO.rst
> > @@ -715,6 +715,13 @@ Time related parameters
> > :option:`runtime` is specified. When the unit is omitted, the value is
> > given in seconds.
> >
> > +.. option:: ramp_size=size
> > +
> > + If set, fio will wait until the workload does given amount of IO before
> > + logging any performance numbers. Similar considerations apply as for
> > + ``ramp_time`` option. When the unit is omitted, the value is given in
> > + megabytes.
>
> Hmm... It may be less confusing/easier to use the regular fio "int" parameter
> type here, which takes all the kilo, mega etc suffixes. This means that the
> default without any suffix would be bytes, not megabytes.
Well, this argument also takes all the kilo, mega, etc. suffixes. Just
without any suffix it will default to MB. I've copied this behavior from the
'size' and 'filesize' options, which behave like this, striving for some
consistency. That being said, I don't really care deeply about the behavior
without units because I think sane people write the units explicitly.
> Other than this, this looks good to me, and indeed super useful when testing
> file systems with buffered I/Os so that the initial super fast buffering spike
> is easily not accounted for.
Thanks for review!
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
* Re: [PATCH 5/5] Add option to specify ramp period by amount of IO
2025-12-18 15:19 ` Vincent Fu
@ 2025-12-18 17:18 ` Jan Kara
2025-12-18 21:42 ` Vincent Fu
0 siblings, 1 reply; 20+ messages in thread
From: Jan Kara @ 2025-12-18 17:18 UTC (permalink / raw)
To: Vincent Fu; +Cc: Jan Kara, fio, Jens Axboe
On Thu 18-12-25 10:19:12, Vincent Fu wrote:
> On Wed, Dec 17, 2025 at 11:17 AM Jan Kara <jack@suse.cz> wrote:
> >
> > In some cases the ramp up period is not easy to define by amount of
> > time. This is for example a case of buffered writes measurement where we
> > want to start measuring only once dirty throttling kicks in. The time
> > until dirty throttling kicks in depends on dirty limit (easy to figure
> > out) and speed of writes to the page cache (difficult to know in
> > advance). Add option ramp_size which determines the ramp up period by
> > the amount of IO written (either by each job or by each group when group
> > reporting is enabled).
> >
> > Signed-off-by: Jan Kara <jack@suse.cz>
> > ---
> > HOWTO.rst | 7 +++++++
> > fio.1 | 6 ++++++
> > options.c | 11 +++++++++++
> > thread_options.h | 2 ++
> > time.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++--
> > 5 files changed, 74 insertions(+), 2 deletions(-)
> >
> > diff --git a/HOWTO.rst b/HOWTO.rst
> > index 9f55a73bde05..49ebae6275a6 100644
> > --- a/HOWTO.rst
> > +++ b/HOWTO.rst
> > @@ -715,6 +715,13 @@ Time related parameters
> > :option:`runtime` is specified. When the unit is omitted, the value is
> > given in seconds.
> >
> > +.. option:: ramp_size=size
> > +
> > + If set, fio will wait until the workload does given amount of IO before
> > + logging any performance numbers. Similar considerations apply as for
> > + ``ramp_time`` option. When the unit is omitted, the value is given in
> > + megabytes.
> > +
>
> At the very least the documentation should explain how this option works with
> reporting groups.
Right, I'll improve that.
> Currently this patch compares the running total of the group's bytes to one
> job's value for ramp_size and then declares ramp time over for all jobs in
> the group if the threshold is exceeded. A little more thought should be put
> into potential use cases here. Should fio allow ramp_size to differ among jobs
> in a reporting group? Consider adding an option to enable/disable connections
> between jobs in the same reporting group.
My intention (somewhat inspired by the steadystate logic) was that the IO of
all jobs in the group gets summed together, and the specified limit is the
amount of IO for the whole group as well. This was motivated by the use case
where you have multiple tasks doing buffered writes and you want ramp up to
terminate once they together write a given amount of data. I didn't even think
of a case where ramp_size would differ between jobs in one group.
Now that I'm checking the steadystate logic, it rejects configs where different
jobs in a group have different configurations, so if I add this consistency
check to ramp_size, would you be fine with it?
I think connections between jobs are a bit of overkill at this point - I
don't have a use case for that...
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
* Re: [PATCH 5/5] Add option to specify ramp period by amount of IO
2025-12-17 22:56 ` Jens Axboe
@ 2025-12-18 17:18 ` Jan Kara
0 siblings, 0 replies; 20+ messages in thread
From: Jan Kara @ 2025-12-18 17:18 UTC (permalink / raw)
To: Jens Axboe; +Cc: Jan Kara, fio, Vincent Fu
On Wed 17-12-25 15:56:00, Jens Axboe wrote:
> On 12/17/25 9:17 AM, Jan Kara wrote:
> > In some cases the ramp up period is not easy to define by amount of
> > time. This is for example a case of buffered writes measurement where we
> > want to start measuring only once dirty throttling kicks in. The time
> > until dirty throttling kicks in depends on dirty limit (easy to figure
> > out) and speed of writes to the page cache (difficult to know in
> > advance). Add option ramp_size which determines the ramp up period by
> > the amount of IO written (either by each job or by each group when group
> > reporting is enabled).
>
> Looks fine, but needs cconv additions and bumping of the server version
> number or this won't work on client/server configurations.
Thanks for review. Fixed.
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
* Re: [PATCH 5/5] Add option to specify ramp period by amount of IO
2025-12-18 17:18 ` Jan Kara
@ 2025-12-18 21:42 ` Vincent Fu
0 siblings, 0 replies; 20+ messages in thread
From: Vincent Fu @ 2025-12-18 21:42 UTC (permalink / raw)
To: Jan Kara; +Cc: fio, Jens Axboe
On Thu, Dec 18, 2025 at 12:18 PM Jan Kara <jack@suse.cz> wrote:
>
> On Thu 18-12-25 10:19:12, Vincent Fu wrote:
> > On Wed, Dec 17, 2025 at 11:17 AM Jan Kara <jack@suse.cz> wrote:
> > >
> > Currently this patch compares the running total of the group's bytes to one
> > job's value for ramp_size and then declares ramp time over for all jobs in
> > the group if the threshold is exceeded. A little more thought should be put
> > into potential use cases here. Should fio allow ramp_size to differ among jobs
> > in a reporting group? Consider adding an option to enable/disable connections
> > between jobs in the same reporting group.
>
> My intention (somewhat inspired by the steadystate logic) was that the IO of
> all jobs in the group gets summed together, and the specified limit is the
> amount of IO for the whole group as well. This was motivated by the use case
> where you have multiple tasks doing buffered writes and you want ramp up to
> terminate once they together write a given amount of data. I didn't even think
> of a case where ramp_size would differ between jobs in one group.
> Now that I'm checking the steadystate logic, it rejects configs where different
> jobs in a group have different configurations, so if I add this consistency
> check to ramp_size, would you be fine with it?
>
> I think connections between jobs are a bit of overkill at this point - I
> don't have a use case for that...
>
Yes, I would be fine with rejecting job files that have differing
ramp_size within a reporting group.
Note that I just merged another pull request that bumped the server version.
Vincent
* Re: [PATCH 1/5] time: rename in_ramp_time() and ramp_time_over()
2025-12-17 16:17 ` [PATCH 1/5] time: rename in_ramp_time() and ramp_time_over() Jan Kara
2025-12-17 22:52 ` Damien Le Moal
@ 2025-12-19 3:53 ` fiotestbot
1 sibling, 0 replies; 20+ messages in thread
From: fiotestbot @ 2025-12-19 3:53 UTC (permalink / raw)
To: fio
The result of fio's continuous integration tests was: failure
For more details see https://github.com/fiotestbot/fio/actions/runs/20357805338
* Re: [PATCH 5/5] Add option to specify ramp period by amount of IO
2025-12-18 16:31 ` Jan Kara
@ 2025-12-19 8:12 ` Damien Le Moal
2025-12-19 13:10 ` Jan Kara
0 siblings, 1 reply; 20+ messages in thread
From: Damien Le Moal @ 2025-12-19 8:12 UTC (permalink / raw)
To: Jan Kara; +Cc: fio, Jens Axboe, Vincent Fu
On 12/19/25 01:31, Jan Kara wrote:
>>> +.. option:: ramp_size=size
>>> +
>>> + If set, fio will wait until the workload does given amount of IO before
>>> + logging any performance numbers. Similar considerations apply as for
>>> + ``ramp_time`` option. When the unit is omitted, the value is given in
>>> + megabytes.
>>
>> Hmm... It may be less confusing/easier to use the regular fio "int" parameter
>> type here, which takes all the kilo, mega etc suffixes. This means that the
>> default without any suffix would be bytes, not megabytes.
>
> Well, this argument also takes all the kilo, mega, etc. suffixes. Just
> without any suffix it will default to MB. I've copied this behavior from the
> 'size' and 'filesize' options, which behave like this, striving for some
> consistency. That being said, I don't really care deeply about the behavior
> without units because I think sane people write the units explicitly.
Same here: I do not really mind one way or the other. I raised this comment so
that the options stay consistent in behavior, same as you aim for.
Checking with some quick runs, if I run something like:
fio --name=write --ioengine=psync --bs=512 --rw=write --directory=/mnt \
--filename_format='test.$jobnum.$filenum' --nrfiles=16 --openfiles=1 \
--create_fsync=0 --filesize=1024 --create_on_open=1 \
--allow_file_create=1 --end_fsync=1
I get files of 1024 B, not 1 GiB. Same with even smaller values like:
--bs=8 --filesize=8
I get files of 8 bytes.
So I doubt that the default is MiB when there is no unit specified.
Checking the code, the .interval field of filesize option (and all other options
in fact) seems to be unused, except by the goptions.c file, which I think is for
the fio GUI.
So it seems the default is bytes when no unit suffix is specified. Your code is
good then, except for the man page and HOWTO files, where you should remove the
text "When the unit is omitted, the value is given in megabytes."
With that,
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
--
Damien Le Moal
Western Digital Research
* Re: [PATCH 5/5] Add option to specify ramp period by amount of IO
2025-12-19 8:12 ` Damien Le Moal
@ 2025-12-19 13:10 ` Jan Kara
0 siblings, 0 replies; 20+ messages in thread
From: Jan Kara @ 2025-12-19 13:10 UTC (permalink / raw)
To: Damien Le Moal; +Cc: Jan Kara, fio, Jens Axboe, Vincent Fu
On Fri 19-12-25 17:12:02, Damien Le Moal wrote:
> On 12/19/25 01:31, Jan Kara wrote:
> >>> +.. option:: ramp_size=size
> >>> +
> >>> + If set, fio will wait until the workload does given amount of IO before
> >>> + logging any performance numbers. Similar considerations apply as for
> >>> + ``ramp_time`` option. When the unit is omitted, the value is given in
> >>> + megabytes.
> >>
> >> Hmm... It may be less confusing/easier to use the regular fio "int" parameter
> >> type here, which takes all the kilo, mega etc suffixes. This means that the
> >> default without any suffix would be bytes, not megabytes.
> >
> > Well, this argument also takes all the kilo, mega, etc. suffixes. Just
> > without any suffix it will default to MB. I've copied this behavior from
> > 'size' and 'filesize' options which behave like this striving for some
> > consistency. That being said I don't really care deeply about the behavior
> > without units because I think sane people write the units explicitly.
>
> Sam here: I do not really mind one way or the other. I raised this comment so
> that the options stay consistent in behavior, same as you aim for.
>
> Checking with some quick runs, if I run something like:
>
> fio --name=write --ioengine=psync --bs=512 --rw=write --directory=/mnt \
> --filename_format='test.$jobnum.$filenum' --nrfiles=16 --openfiles=1 \
> --create_fsync=0 --filesize=1024 --create_on_open=1 \
> --allow_file_create=1 --end_fsync=1
>
> I get files of 1024B, not 1GiB. Same with even smaller values like:
> --bs=8 --filesize=8
> I get files of 8 Bytes.
>
> So I doubt that the default is MiB when there is no unit specified.
>
> Checking the code, the .interval field of filesize option (and all other options
> in fact) seems to be unused, except by the goptions.c file, which I think is for
> the fio GUI.
>
> So the default it seems is Bytes when no unit suffix is specified. Your code is
> good then, except the man page and HOWTO files where you should remove the text
> "When the unit is omitted, the value is given in megabytes."
Indeed. Thanks for checking! My experiments confirm that as well. I've left
in the doc comment that "When the unit is omitted, the value is given in bytes."
just for clarity, and I've removed the .interval argument so as not to add to
the confusion.
Honza
--
Jan Kara <jack@suse.com>
SUSE Labs, CR
end of thread, other threads:[~2025-12-19 13:10 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-12-17 16:17 [PATCH 0/5] Add support for specifying ramp up period by amount of IO Jan Kara
2025-12-17 16:17 ` [PATCH 1/5] time: rename in_ramp_time() and ramp_time_over() Jan Kara
2025-12-17 22:52 ` Damien Le Moal
2025-12-19 3:53 ` fiotestbot
2025-12-17 16:17 ` [PATCH 2/5] td: Initialize ramp_period_over based on options Jan Kara
2025-12-17 22:53 ` Damien Le Moal
2025-12-17 16:17 ` [PATCH 3/5] eta: Use in_ramp_period() instead of opencoding it Jan Kara
2025-12-17 22:55 ` Damien Le Moal
2025-12-17 16:17 ` [PATCH 4/5] time: Evaluate ramp up condition once per second Jan Kara
2025-12-17 22:58 ` Damien Le Moal
2025-12-17 16:17 ` [PATCH 5/5] Add option to specify ramp period by amount of IO Jan Kara
2025-12-17 22:56 ` Jens Axboe
2025-12-18 17:18 ` Jan Kara
2025-12-17 23:03 ` Damien Le Moal
2025-12-18 16:31 ` Jan Kara
2025-12-19 8:12 ` Damien Le Moal
2025-12-19 13:10 ` Jan Kara
2025-12-18 15:19 ` Vincent Fu
2025-12-18 17:18 ` Jan Kara
2025-12-18 21:42 ` Vincent Fu