fio.vger.kernel.org archive mirror
* [PATCH 0/4] add option to interleave blktraces
@ 2018-09-19 18:25 Dennis Zhou
  2018-09-19 18:25 ` [PATCH 1/4] options: rename name string operations for more general use Dennis Zhou
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Dennis Zhou @ 2018-09-19 18:25 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Tejun Heo, Andy Newell, fio, kernel-team, Dennis Zhou

Hi everyone,

Understanding how a workload performs on different devices has been
nice and easy given the infrastructure around blktrace, blkparse, and
fio. Given a blktrace, fio can rerun that workload on a different drive.

Exploring colocation is a little trickier, but doable by adding multiple
jobs in fio. An issue here is that the scheduler can influence the
performance of each run, since the jobs run asynchronously with respect
to each other.

This patchset adds the ability to pass multiple blktrace binary dumps to
"--read_iolog" as colon separated paths; fio then performs a simple
timestamp-based merge of the traces. Two additional parameters are
added, "--merge_blktrace_scalars" and "--merge_blktrace_iters", to allow
for scaling a particular trace and adjusting its number of iterations,
respectively.

As an example, take two 60s blktraces, A and B, and imagine we want to
see how trace A would perform if we slowed it down by 50%. We can
experiment here with --merge_blktrace_scalars="200:100" and
--merge_blktrace_iters="1:2". This stretches the first trace to 200% of
its original duration while leaving the second at 100%, and runs the
first trace for a single iteration and the second for two iterations.
This puts the overall runtime at 120s for each trace.
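
For reference, the invocation for this example would look roughly like
the following (trace-a.bin, trace-b.bin, and merged.bin are placeholder
paths, and the remaining replay job options are elided):

  $ fio --name=replay \
        --read_iolog=trace-a.bin:trace-b.bin \
        --merge_blktrace_file=merged.bin \
        --merge_blktrace_scalars="200:100" \
        --merge_blktrace_iters="1:2" \
        ...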

This patchset includes the following 4 patches:
  0001-options-rename-name-string-operations-for-more-gener.patch
  0002-blktrace-add-support-to-interleave-blktrace-files.patch
  0003-blktrace-add-option-to-scale-a-trace.patch
  0004-blktrace-add-option-to-iterate-over-a-trace-multiple.patch

0001 renames some string parsing functions to be more generic.
0002 adds basic merging support. 0003 adds merge_blktrace_scalars.
0004 adds merge_blktrace_iters.

diffstats below:

Dennis Zhou (4):
  options: rename name string operations for more general use
  blktrace: add support to interleave blktrace files
  blktrace: add option to scale a trace
  blktrace: add option to iterate over a trace multiple times

 blktrace.c       | 210 +++++++++++++++++++++++++++++++++++++++++++++++
 blktrace.h       |  17 ++++
 init.c           |  20 +++++
 options.c        |  47 +++++++++--
 options.h        |   2 +
 thread_options.h |   3 +
 6 files changed, 290 insertions(+), 9 deletions(-)

Thanks,
Dennis


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 1/4] options: rename name string operations for more general use
  2018-09-19 18:25 [PATCH 0/4] add option to interleave blktraces Dennis Zhou
@ 2018-09-19 18:25 ` Dennis Zhou
  2018-09-19 18:25 ` [PATCH 2/4] blktrace: add support to interleave blktrace files Dennis Zhou
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 14+ messages in thread
From: Dennis Zhou @ 2018-09-19 18:25 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Tejun Heo, Andy Newell, fio, kernel-team, Dennis Zhou

get_next_name() and get_max_name_idx() are helpers that iterate over and
split a ':' separated string. Rename them (s/name/str/g) to make them
more generic; they will also be used to parse the file paths of the
blktraces to merge.
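
For context, the calling pattern is unchanged by the rename; a caller
walks a ':' separated list roughly like this (illustrative sketch only,
mirroring str_filename_cb() in the diff below):

        char *p, *str, *fname;

        p = str = strdup(input);
        while ((fname = get_next_str(&str)) != NULL)
                add_file(td, fname, 0, 1);  /* one ':' separated entry per call */
        free(p);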

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 options.c | 18 +++++++++---------
 options.h |  2 ++
 2 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/options.c b/options.c
index 6bd74555..824abee0 100644
--- a/options.c
+++ b/options.c
@@ -1155,7 +1155,7 @@ static int str_steadystate_cb(void *data, const char *str)
  * is escaped with a '\', then that ':' is part of the filename and does not
  * indicate a new file.
  */
-static char *get_next_name(char **ptr)
+char *get_next_str(char **ptr)
 {
 	char *str = *ptr;
 	char *p, *start;
@@ -1197,14 +1197,14 @@ static char *get_next_name(char **ptr)
 }
 
 
-static int get_max_name_idx(char *input)
+int get_max_str_idx(char *input)
 {
 	unsigned int cur_idx;
 	char *str, *p;
 
 	p = str = strdup(input);
 	for (cur_idx = 0; ; cur_idx++)
-		if (get_next_name(&str) == NULL)
+		if (get_next_str(&str) == NULL)
 			break;
 
 	free(p);
@@ -1224,9 +1224,9 @@ int set_name_idx(char *target, size_t tlen, char *input, int index,
 
 	p = str = strdup(input);
 
-	index %= get_max_name_idx(input);
+	index %= get_max_str_idx(input);
 	for (cur_idx = 0; cur_idx <= index; cur_idx++)
-		fname = get_next_name(&str);
+		fname = get_next_str(&str);
 
 	if (client_sockaddr_str[0] && unique_filename) {
 		len = snprintf(target, tlen, "%s/%s.", fname,
@@ -1247,9 +1247,9 @@ char* get_name_by_idx(char *input, int index)
 
 	p = str = strdup(input);
 
-	index %= get_max_name_idx(input);
+	index %= get_max_str_idx(input);
 	for (cur_idx = 0; cur_idx <= index; cur_idx++)
-		fname = get_next_name(&str);
+		fname = get_next_str(&str);
 
 	fname = strdup(fname);
 	free(p);
@@ -1273,7 +1273,7 @@ static int str_filename_cb(void *data, const char *input)
 	if (!td->files_index)
 		td->o.nr_files = 0;
 
-	while ((fname = get_next_name(&str)) != NULL) {
+	while ((fname = get_next_str(&str)) != NULL) {
 		if (!strlen(fname))
 			break;
 		add_file(td, fname, 0, 1);
@@ -1294,7 +1294,7 @@ static int str_directory_cb(void *data, const char fio_unused *unused)
 		return 0;
 
 	p = str = strdup(td->o.directory);
-	while ((dirname = get_next_name(&str)) != NULL) {
+	while ((dirname = get_next_str(&str)) != NULL) {
 		if (lstat(dirname, &sb) < 0) {
 			ret = errno;
 
diff --git a/options.h b/options.h
index 8fdd1363..5276f31e 100644
--- a/options.h
+++ b/options.h
@@ -16,6 +16,8 @@ void add_opt_posval(const char *, const char *, const char *);
 void del_opt_posval(const char *, const char *);
 struct thread_data;
 void fio_options_free(struct thread_data *);
+char *get_next_str(char **ptr);
+int get_max_str_idx(char *input);
 char* get_name_by_idx(char *input, int index);
 int set_name_idx(char *, size_t, char *, int, bool);
 
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 2/4] blktrace: add support to interleave blktrace files
  2018-09-19 18:25 [PATCH 0/4] add option to interleave blktraces Dennis Zhou
  2018-09-19 18:25 ` [PATCH 1/4] options: rename name string operations for more general use Dennis Zhou
@ 2018-09-19 18:25 ` Dennis Zhou
  2018-09-19 18:47   ` Jens Axboe
  2018-09-19 18:25 ` [PATCH 3/4] blktrace: add option to scale a trace Dennis Zhou
  2018-09-19 18:25 ` [PATCH 4/4] blktrace: add option to iterate over a trace multiple times Dennis Zhou
  3 siblings, 1 reply; 14+ messages in thread
From: Dennis Zhou @ 2018-09-19 18:25 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Tejun Heo, Andy Newell, fio, kernel-team, Dennis Zhou

Running concurrent workloads via multiple jobs can lead to
nondeterministic results as we are at the scheduler's mercy. While the
actual performance of the workload may vary regardless, merging the
traces makes the order of events consistent.

This patch introduces two options: --merge_blktrace_file=<file> and
--merge-blktrace-only. When the first is specified, files that are ':'
separated in --read_iolog are passed to a merge function which then
produces a single time-sorted trace file. This file is then used in
place of the ':' separated list, or fio exits if --merge-blktrace-only
is specified.

During merging, events are filtered down to those fio cares about, and
the pdu is discarded as well.
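
As a rough sketch (paths are placeholders), the merge can also be done
ahead of time with --merge-blktrace-only and the result replayed later:

  $ fio --name=merge --read_iolog=trace-a.bin:trace-b.bin \
        --merge_blktrace_file=merged.bin --merge-blktrace-only

  $ fio --name=replay --read_iolog=merged.bin <remaining replay options>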

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 blktrace.c       | 156 +++++++++++++++++++++++++++++++++++++++++++++++
 blktrace.h       |  11 ++++
 init.c           |  20 ++++++
 options.c        |   9 +++
 thread_options.h |   1 +
 5 files changed, 197 insertions(+)

diff --git a/blktrace.c b/blktrace.c
index 36a71809..9cdbd3ca 100644
--- a/blktrace.c
+++ b/blktrace.c
@@ -613,3 +613,159 @@ err:
 	fifo_free(fifo);
 	return false;
 }
+
+static int find_earliest_io(struct blktrace_cursor *bcs, int nr_logs)
+{
+	__u64 time = ~(__u64)0;
+	int idx = 0, i;
+
+	for (i = 0; i < nr_logs; i++) {
+		if (bcs[i].t.time < time) {
+			time = bcs[i].t.time;
+			idx = i;
+		}
+	}
+
+	return idx;
+}
+
+static void merge_finish_file(struct blktrace_cursor *bcs, int i, int *nr_logs)
+{
+	*nr_logs -= 1;
+
+	/* close file */
+	fifo_free(bcs[i].fifo);
+	close(bcs[i].fd);
+
+	/* keep active files contiguous */
+	memmove(&bcs[i], &bcs[*nr_logs], sizeof(bcs[i]));
+}
+
+static int read_trace(struct thread_data *td, struct blktrace_cursor *bc)
+{
+	int ret = 0;
+	struct blk_io_trace *t = &bc->t;
+
+read_skip:
+	/* read an io trace */
+	ret = trace_fifo_get(td, bc->fifo, bc->fd, t, sizeof(*t));
+	if (ret <= 0) {
+		return ret;
+	} else if (ret < (int) sizeof(*t)) {
+		log_err("fio: short fifo get\n");
+		return -1;
+	}
+
+	if (bc->swap)
+		byteswap_trace(t);
+
+	/* skip over actions that fio does not care about */
+	if ((t->action & 0xffff) != __BLK_TA_QUEUE ||
+	    t_get_ddir(t) == DDIR_INVAL) {
+		ret = discard_pdu(td, bc->fifo, bc->fd, t);
+		if (ret < 0) {
+			td_verror(td, ret, "blktrace lseek");
+			return ret;
+		} else if (t->pdu_len != ret) {
+			log_err("fio: discarded %d of %d\n", ret,
+				t->pdu_len);
+			return -1;
+		}
+		goto read_skip;
+	}
+
+	return ret;
+}
+
+static int write_trace(FILE *fp, struct blk_io_trace *t)
+{
+	/* pdu is not used so just write out only the io trace */
+	t->pdu_len = 0;
+	return fwrite((void *)t, sizeof(*t), 1, fp);
+}
+
+int merge_blktrace_iologs(struct thread_data *td)
+{
+	int nr_logs = get_max_str_idx(td->o.read_iolog_file);
+	struct blktrace_cursor *bcs = malloc(sizeof(struct blktrace_cursor) *
+					     nr_logs);
+	struct blktrace_cursor *bc;
+	FILE *merge_fp;
+	char *str, *ptr, *name, *merge_buf;
+	int i, ret;
+
+	/* setup output file */
+	merge_fp = fopen(td->o.merge_blktrace_file, "w");
+	merge_buf = malloc(128 * 1024);
+	ret = setvbuf(merge_fp, merge_buf, _IOFBF, 128 * 1024);
+	if (ret)
+		goto err_out_file;
+
+	/* setup input files */
+	str = ptr = strdup(td->o.read_iolog_file);
+	nr_logs = 0;
+	for (i = 0; (name = get_next_str(&ptr)) != NULL; i++) {
+		bcs[i].fd = open(name, O_RDONLY);
+		if (bcs[i].fd < 0) {
+			log_err("fio: could not open file: %s\n", name);
+			ret = bcs[i].fd;
+			goto err_file;
+		}
+		bcs[i].fifo = fifo_alloc(TRACE_FIFO_SIZE);
+		nr_logs++;
+
+		if (!is_blktrace(name, &bcs[i].swap)) {
+			log_err("fio: file is not a blktrace: %s\n", name);
+			goto err_file;
+		}
+
+		ret = read_trace(td, &bcs[i]);
+		if (ret < 0) {
+			goto err_file;
+		} else if (!ret) {
+			merge_finish_file(bcs, i, &nr_logs);
+			i--;
+		}
+	}
+	free(str);
+
+	/* merge files */
+	while (nr_logs) {
+		i = find_earliest_io(bcs, nr_logs);
+		bc = &bcs[i];
+		/* skip over the pdu */
+		ret = discard_pdu(td, bc->fifo, bc->fd, &bc->t);
+		if (ret < 0) {
+			td_verror(td, ret, "blktrace lseek");
+			goto err_file;
+		} else if (bc->t.pdu_len != ret) {
+			log_err("fio: discarded %d of %d\n", ret,
+				bc->t.pdu_len);
+			goto err_file;
+		}
+
+		ret = write_trace(merge_fp, &bc->t);
+		ret = read_trace(td, bc);
+		if (ret < 0)
+			goto err_file;
+		else if (!ret)
+			merge_finish_file(bcs, i, &nr_logs);
+	}
+
+	/* set iolog file to read from the newly merged file */
+	td->o.read_iolog_file = td->o.merge_blktrace_file;
+	ret = 0;
+
+err_file:
+	/* cleanup */
+	for (i = 0; i < nr_logs; i++) {
+		fifo_free(bcs[i].fifo);
+		close(bcs[i].fd);
+	}
+err_out_file:
+	fflush(merge_fp);
+	fclose(merge_fp);
+	free(bcs);
+
+	return ret;
+}
diff --git a/blktrace.h b/blktrace.h
index 096993ed..1b2bb76b 100644
--- a/blktrace.h
+++ b/blktrace.h
@@ -1,10 +1,21 @@
 #ifndef FIO_BLKTRACE_H
 #define FIO_BLKTRACE_H
 
+
 #ifdef FIO_HAVE_BLKTRACE
 
+#include "blktrace_api.h"
+
+struct blktrace_cursor {
+	struct fifo		*fifo;	// fifo queue for reading
+	int			fd;	// blktrace file
+	struct blk_io_trace	t;	// current io trace
+	int			swap;	// bitwise reverse required
+};
+
 bool is_blktrace(const char *, int *);
 bool load_blktrace(struct thread_data *, const char *, int);
+int merge_blktrace_iologs(struct thread_data *td);
 
 #else
 
diff --git a/init.c b/init.c
index c235b05e..1297726d 100644
--- a/init.c
+++ b/init.c
@@ -30,6 +30,7 @@
 #include "idletime.h"
 #include "filelock.h"
 #include "steadystate.h"
+#include "blktrace.h"
 
 #include "oslib/getopt.h"
 #include "oslib/strcasestr.h"
@@ -46,6 +47,7 @@ static char **ini_file;
 static int max_jobs = FIO_MAX_JOBS;
 static int dump_cmdline;
 static int parse_only;
+static int merge_blktrace_only;
 
 static struct thread_data def_thread;
 struct thread_data *threads = NULL;
@@ -286,6 +288,11 @@ static struct option l_opts[FIO_NR_OPTIONS] = {
 		.has_arg	= required_argument,
 		.val		= 'K',
 	},
+	{
+		.name		= (char *) "merge-blktrace-only",
+		.has_arg	= no_argument,
+		.val		= 'A' | FIO_CLIENT_FLAG,
+	},
 	{
 		.name		= NULL,
 	},
@@ -1724,6 +1731,14 @@ static int add_job(struct thread_data *td, const char *jobname, int job_add_num,
 	if (td_steadystate_init(td))
 		goto err;
 
+	if (o->merge_blktrace_file && !merge_blktrace_iologs(td))
+		goto err;
+
+	if (merge_blktrace_only) {
+		put_job(td);
+		return 0;
+	}
+
 	/*
 	 * recurse add identical jobs, clear numjobs and stonewall options
 	 * as they don't apply to sub-jobs
@@ -2889,6 +2904,11 @@ int parse_cmd_line(int argc, char *argv[], int client_type)
 			}
 			trigger_timeout /= 1000000;
 			break;
+
+		case 'A':
+			did_arg = true;
+			merge_blktrace_only = 1;
+			break;
 		case '?':
 			log_err("%s: unrecognized option '%s'\n", argv[0],
 							argv[optind - 1]);
diff --git a/options.c b/options.c
index 824abee0..c0deffcb 100644
--- a/options.c
+++ b/options.c
@@ -3198,6 +3198,15 @@ struct fio_option fio_options[FIO_MAX_OPTS] = {
 		.category = FIO_OPT_C_IO,
 		.group	= FIO_OPT_G_IOLOG,
 	},
+	{
+		.name	= "merge_blktrace_file",
+		.lname	= "Merged blktrace output filename",
+		.type	= FIO_OPT_STR_STORE,
+		.off1	= offsetof(struct thread_options, merge_blktrace_file),
+		.help	= "Merged blktrace output filename",
+		.category = FIO_OPT_C_IO,
+		.group = FIO_OPT_G_IOLOG,
+	},
 	{
 		.name	= "exec_prerun",
 		.lname	= "Pre-execute runnable",
diff --git a/thread_options.h b/thread_options.h
index 39315834..99552326 100644
--- a/thread_options.h
+++ b/thread_options.h
@@ -258,6 +258,7 @@ struct thread_options {
 	char *read_iolog_file;
 	bool read_iolog_chunked;
 	char *write_iolog_file;
+	char *merge_blktrace_file;
 
 	unsigned int write_bw_log;
 	unsigned int write_lat_log;
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 3/4] blktrace: add option to scale a trace
  2018-09-19 18:25 [PATCH 0/4] add option to interleave blktraces Dennis Zhou
  2018-09-19 18:25 ` [PATCH 1/4] options: rename name string operations for more general use Dennis Zhou
  2018-09-19 18:25 ` [PATCH 2/4] blktrace: add support to interleave blktrace files Dennis Zhou
@ 2018-09-19 18:25 ` Dennis Zhou
  2018-09-19 18:49   ` Jens Axboe
  2018-09-19 18:25 ` [PATCH 4/4] blktrace: add option to iterate over a trace multiple times Dennis Zhou
  3 siblings, 1 reply; 14+ messages in thread
From: Dennis Zhou @ 2018-09-19 18:25 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Tejun Heo, Andy Newell, fio, kernel-team, Dennis Zhou

As we explore stacking traces, it is nice to be able to scale a trace to
understand how the traces end up interacting.

This patch adds scaling by letting the user pass in percentages to scale
a trace by. When passed '--merge_blktrace_scalars="100"', the trace is
run at 100% speed. If passed 50%, this will halve the trace timestamps.
The new option takes in a colon separated list that index-wise pairs
with the passed files in "--read_iolog".
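
For instance, with placeholder trace paths:

  --read_iolog=trace-a.bin:trace-b.bin --merge_blktrace_scalars="50:200"

halves trace A's timestamps (replaying it at roughly 2x speed) and
doubles trace B's (replaying it at roughly half speed) before the merge.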

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 blktrace.c       | 35 +++++++++++++++++++++++++++++++++++
 blktrace.h       |  1 +
 options.c        | 10 ++++++++++
 thread_options.h |  1 +
 4 files changed, 47 insertions(+)

diff --git a/blktrace.c b/blktrace.c
index 9cdbd3ca..14acc699 100644
--- a/blktrace.c
+++ b/blktrace.c
@@ -4,6 +4,7 @@
 #include <stdio.h>
 #include <stdlib.h>
 #include <sys/ioctl.h>
+#include <unistd.h>
 #include <linux/fs.h>
 
 #include "flist.h"
@@ -614,6 +615,28 @@ err:
 	return false;
 }
 
+static int init_merge_param_list(fio_fp64_t *vals, struct blktrace_cursor *bcs,
+				 int nr_logs, int def, size_t off)
+{
+	int i = 0, len = 0;
+
+	while (len < FIO_IO_U_LIST_MAX_LEN && vals[len].u.f != 0.0)
+		len++;
+
+	if (len && len != nr_logs)
+		return len;
+
+	for (i = 0; i < nr_logs; i++) {
+		int *val = (int *)((char *)&bcs[i] + off);
+		*val = def;
+		if (len)
+			*val = (int)vals[i].u.f;
+	}
+
+	return 0;
+
+}
+
 static int find_earliest_io(struct blktrace_cursor *bcs, int nr_logs)
 {
 	__u64 time = ~(__u64)0;
@@ -674,6 +697,8 @@ read_skip:
 		goto read_skip;
 	}
 
+	t->time = t->time * bc->scalar / 100;
+
 	return ret;
 }
 
@@ -694,6 +719,15 @@ int merge_blktrace_iologs(struct thread_data *td)
 	char *str, *ptr, *name, *merge_buf;
 	int i, ret;
 
+	ret = init_merge_param_list(td->o.merge_blktrace_scalars, bcs, nr_logs,
+				    100, offsetof(struct blktrace_cursor,
+						  scalar));
+	if (ret) {
+		log_err("fio: merge_blktrace_scalars(%d) != nr_logs(%d)\n",
+			ret, nr_logs);
+		goto err_param;
+	}
+
 	/* setup output file */
 	merge_fp = fopen(td->o.merge_blktrace_file, "w");
 	merge_buf = malloc(128 * 1024);
@@ -765,6 +799,7 @@ err_file:
 err_out_file:
 	fflush(merge_fp);
 	fclose(merge_fp);
+err_param:
 	free(bcs);
 
 	return ret;
diff --git a/blktrace.h b/blktrace.h
index 1b2bb76b..cebd54d6 100644
--- a/blktrace.h
+++ b/blktrace.h
@@ -11,6 +11,7 @@ struct blktrace_cursor {
 	int			fd;	// blktrace file
 	struct blk_io_trace	t;	// current io trace
 	int			swap;	// bitwise reverse required
+	int			scalar;	// scale percentage
 };
 
 bool is_blktrace(const char *, int *);
diff --git a/options.c b/options.c
index c0deffcb..706f98fd 100644
--- a/options.c
+++ b/options.c
@@ -3207,6 +3207,16 @@ struct fio_option fio_options[FIO_MAX_OPTS] = {
 		.category = FIO_OPT_C_IO,
 		.group = FIO_OPT_G_IOLOG,
 	},
+	{
+		.name	= "merge_blktrace_scalars",
+		.lname	= "Percentage to scale each trace",
+		.type	= FIO_OPT_FLOAT_LIST,
+		.off1	= offsetof(struct thread_options, merge_blktrace_scalars),
+		.maxlen	= FIO_IO_U_LIST_MAX_LEN,
+		.help	= "Percentage to scale each trace",
+		.category = FIO_OPT_C_IO,
+		.group	= FIO_OPT_G_IOLOG,
+	},
 	{
 		.name	= "exec_prerun",
 		.lname	= "Pre-execute runnable",
diff --git a/thread_options.h b/thread_options.h
index 99552326..9eb6d53e 100644
--- a/thread_options.h
+++ b/thread_options.h
@@ -259,6 +259,7 @@ struct thread_options {
 	bool read_iolog_chunked;
 	char *write_iolog_file;
 	char *merge_blktrace_file;
+	fio_fp64_t merge_blktrace_scalars[FIO_IO_U_LIST_MAX_LEN];
 
 	unsigned int write_bw_log;
 	unsigned int write_lat_log;
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 4/4] blktrace: add option to iterate over a trace multiple times
  2018-09-19 18:25 [PATCH 0/4] add option to interleave blktraces Dennis Zhou
                   ` (2 preceding siblings ...)
  2018-09-19 18:25 ` [PATCH 3/4] blktrace: add option to scale a trace Dennis Zhou
@ 2018-09-19 18:25 ` Dennis Zhou
  3 siblings, 0 replies; 14+ messages in thread
From: Dennis Zhou @ 2018-09-19 18:25 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Tejun Heo, Andy Newell, fio, kernel-team, Dennis Zhou

Scaling a particular trace may result in different runtimes among the
traces being merged. If the user knows the approximate length of each
trace, the overall runtimes can be tuned to line up by letting certain
traces loop multiple times.

First, the last timestamp of a trace is recorded at the end of the first
iteration to denote the length of a trace. This value is then used to
offset subsequent iterations of a trace.

Next, the "--merge_blktrace_iters" option is introduced to let the user
specify the number of times to loop over each specific trace. This is
done by passing a comma separated list that index-wise pairs with the
passed files in "--read_iolog". Iteration counts are introduced as well
as keeping track of the length of each trace.

As an example, take two traces, A and B, each 60s long. If we want to
see the impact of trace A issuing IOs twice as fast, we can set
--merge_blktrace_scalars="50:100" and --merge_blktrace_iters="2:1".
This runs trace A at 2x speed twice, for approximately the same runtime
as a single run of trace B.
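
Concretely, an event's merged timestamp is computed roughly as:

  merged_time = (time + iter * trace_length) * scalar / 100

where trace_length is the last timestamp recorded at the end of the
first pass and iter counts from 0. For a 60s trace replayed twice at
100%, an event at t=30s in the second pass (iter == 1) lands at
30s + 60s = 90s in the merged trace.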

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 blktrace.c       | 23 +++++++++++++++++++++--
 blktrace.h       |  5 +++++
 options.c        | 10 ++++++++++
 thread_options.h |  1 +
 4 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/blktrace.c b/blktrace.c
index 14acc699..1d33c6a4 100644
--- a/blktrace.c
+++ b/blktrace.c
@@ -654,6 +654,12 @@ static int find_earliest_io(struct blktrace_cursor *bcs, int nr_logs)
 
 static void merge_finish_file(struct blktrace_cursor *bcs, int i, int *nr_logs)
 {
+	bcs[i].iter++;
+	if (bcs[i].iter < bcs[i].nr_iter) {
+		lseek(bcs[i].fd, 0, SEEK_SET);
+		return;
+	}
+
 	*nr_logs -= 1;
 
 	/* close file */
@@ -672,7 +678,11 @@ static int read_trace(struct thread_data *td, struct blktrace_cursor *bc)
 read_skip:
 	/* read an io trace */
 	ret = trace_fifo_get(td, bc->fifo, bc->fd, t, sizeof(*t));
-	if (ret <= 0) {
+	if (ret < 0) {
+		return ret;
+	} else if (!ret) {
+		if (!bc->length)
+			bc->length = bc->t.time;
 		return ret;
 	} else if (ret < (int) sizeof(*t)) {
 		log_err("fio: short fifo get\n");
@@ -697,7 +707,7 @@ read_skip:
 		goto read_skip;
 	}
 
-	t->time = t->time * bc->scalar / 100;
+	t->time = (t->time + bc->iter * bc->length) * bc->scalar / 100;
 
 	return ret;
 }
@@ -728,6 +738,15 @@ int merge_blktrace_iologs(struct thread_data *td)
 		goto err_param;
 	}
 
+	ret = init_merge_param_list(td->o.merge_blktrace_iters, bcs, nr_logs,
+				    1, offsetof(struct blktrace_cursor,
+						nr_iter));
+	if (ret) {
+		log_err("fio: merge_blktrace_iters(%d) != nr_logs(%d)\n",
+			ret, nr_logs);
+		goto err_param;
+	}
+
 	/* setup output file */
 	merge_fp = fopen(td->o.merge_blktrace_file, "w");
 	merge_buf = malloc(128 * 1024);
diff --git a/blktrace.h b/blktrace.h
index cebd54d6..72d74cf8 100644
--- a/blktrace.h
+++ b/blktrace.h
@@ -4,14 +4,19 @@
 
 #ifdef FIO_HAVE_BLKTRACE
 
+#include <asm/types.h>
+
 #include "blktrace_api.h"
 
 struct blktrace_cursor {
 	struct fifo		*fifo;	// fifo queue for reading
 	int			fd;	// blktrace file
+	__u64			length; // length of trace
 	struct blk_io_trace	t;	// current io trace
 	int			swap;	// bitwise reverse required
 	int			scalar;	// scale percentage
+	int			iter;	// current iteration
+	int			nr_iter; // number of iterations to run
 };
 
 bool is_blktrace(const char *, int *);
diff --git a/options.c b/options.c
index 706f98fd..9b277309 100644
--- a/options.c
+++ b/options.c
@@ -3217,6 +3217,16 @@ struct fio_option fio_options[FIO_MAX_OPTS] = {
 		.category = FIO_OPT_C_IO,
 		.group	= FIO_OPT_G_IOLOG,
 	},
+	{
+		.name	= "merge_blktrace_iters",
+		.lname	= "Number of iterations to run per trace",
+		.type	= FIO_OPT_FLOAT_LIST,
+		.off1	= offsetof(struct thread_options, merge_blktrace_iters),
+		.maxlen	= FIO_IO_U_LIST_MAX_LEN,
+		.help	= "Number of iterations to run per trace",
+		.category = FIO_OPT_C_IO,
+		.group	= FIO_OPT_G_IOLOG,
+	},
 	{
 		.name	= "exec_prerun",
 		.lname	= "Pre-execute runnable",
diff --git a/thread_options.h b/thread_options.h
index 9eb6d53e..b3c1e06c 100644
--- a/thread_options.h
+++ b/thread_options.h
@@ -260,6 +260,7 @@ struct thread_options {
 	char *write_iolog_file;
 	char *merge_blktrace_file;
 	fio_fp64_t merge_blktrace_scalars[FIO_IO_U_LIST_MAX_LEN];
+	fio_fp64_t merge_blktrace_iters[FIO_IO_U_LIST_MAX_LEN];
 
 	unsigned int write_bw_log;
 	unsigned int write_lat_log;
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/4] blktrace: add support to interleave blktrace files
  2018-09-19 18:25 ` [PATCH 2/4] blktrace: add support to interleave blktrace files Dennis Zhou
@ 2018-09-19 18:47   ` Jens Axboe
  2018-09-19 19:29     ` Dennis Zhou
  0 siblings, 1 reply; 14+ messages in thread
From: Jens Axboe @ 2018-09-19 18:47 UTC (permalink / raw)
  To: Dennis Zhou; +Cc: Tejun Heo, Andy Newell, fio, kernel-team

On 9/19/18 12:25 PM, Dennis Zhou wrote:
> Running concurrent workloads via multiple jobs can lead to
> nondeterministic results as we are at the scheduler's mercy. While the
> actual performance of the workload may vary regardless, merging the
> traces makes the order of events consistent.
>
> This patch introduces two options: --merge_blktrace_file=<file> and
> --merge-blktrace-only. When the first is specified, files that are ':'
> separated in --read_iolog are passed to a merge function which then
> produces a single time-sorted trace file. This file is then used in
> place of the ':' separated list, or fio exits if --merge-blktrace-only
> is specified.
>
> During merging, events are filtered down to those fio cares about, and
> the pdu is discarded as well.

This looks fine to me, but it's missing documentation (fio.1 and HOWTO)
for added options. For the feature as a whole as well.

> diff --git a/thread_options.h b/thread_options.h
> index 39315834..99552326 100644
> --- a/thread_options.h
> +++ b/thread_options.h
> @@ -258,6 +258,7 @@ struct thread_options {
>  	char *read_iolog_file;
>  	bool read_iolog_chunked;
>  	char *write_iolog_file;
> +	char *merge_blktrace_file;
>  
>  	unsigned int write_bw_log;
>  	unsigned int write_lat_log;

This needs an accompanying change to struct thread_options_pack, and a
cconv.c addition to ensure it's converted properly over the wire for
client/server runs of fio.

Basically both comments here apply to the rest of the series as well :-)

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/4] blktrace: add option to scale a trace
  2018-09-19 18:25 ` [PATCH 3/4] blktrace: add option to scale a trace Dennis Zhou
@ 2018-09-19 18:49   ` Jens Axboe
  2018-09-19 19:56     ` Dennis Zhou
  0 siblings, 1 reply; 14+ messages in thread
From: Jens Axboe @ 2018-09-19 18:49 UTC (permalink / raw)
  To: Dennis Zhou; +Cc: Tejun Heo, Andy Newell, fio, kernel-team

On 9/19/18 12:25 PM, Dennis Zhou wrote:
> As we explore stacking traces, it is nice to be able to scale a trace to
> understand how the traces end up interacting.
> 
> This patch adds scaling by letting the user pass in percentages to scale
> a trace by. When passed '--merge_blktrace_scalars="100"', the trace is
> run at 100% speed. If passed 50%, this will halve the trace timestamps.
> The new option takes in a colon separated list that index-wise pairs
> with the passed files in "--read_iolog".

How is this different than replay_time_scale?

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/4] blktrace: add support to interleave blktrace files
  2018-09-19 18:47   ` Jens Axboe
@ 2018-09-19 19:29     ` Dennis Zhou
  2018-09-19 19:34       ` Jens Axboe
  0 siblings, 1 reply; 14+ messages in thread
From: Dennis Zhou @ 2018-09-19 19:29 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Dennis Zhou, Tejun Heo, Andy Newell, fio, kernel-team

Hi Jens,

On Wed, Sep 19, 2018 at 12:47:37PM -0600, Jens Axboe wrote:
> On 9/19/18 12:25 PM, Dennis Zhou wrote:
> > Running concurrent workloads via multiple jobs can lead to
> > nondeterministic results as we are at the scheduler's mercy. While the
> > actual performance of the workload may vary regardless, merging the
> > traces makes the order of events consistent.
> >
> > This patch introduces two options: --merge_blktrace_file=<file> and
> > --merge-blktrace-only. When the first is specified, files that are ':'
> > separated in --read_iolog are passed to a merge function which then
> > produces a single time-sorted trace file. This file is then used in
> > place of the ':' separated list, or fio exits if --merge-blktrace-only
> > is specified.
> >
> > During merging, events are filtered down to those fio cares about, and
> > the pdu is discarded as well.
> 
> This looks fine to me, but it's missing documentation (fio.1 and HOWTO)
> for added options. For the feature as a whole as well.
> 
> > diff --git a/thread_options.h b/thread_options.h
> > index 39315834..99552326 100644
> > --- a/thread_options.h
> > +++ b/thread_options.h
> > @@ -258,6 +258,7 @@ struct thread_options {
> >  	char *read_iolog_file;
> >  	bool read_iolog_chunked;
> >  	char *write_iolog_file;
> > +	char *merge_blktrace_file;
> >  
> >  	unsigned int write_bw_log;
> >  	unsigned int write_lat_log;
> 
> This needs an accompanying change to struct thread_options_pack, and a
> cconv.c addition to ensure it's converted properly over the wire for
> > client/server runs of fio.
> 
> Basically both comments here apply to the rest of the series as well :-)
> 

Ah that makes sense. I wasn't quite sure what the _pack was for. Now I
get it. I'll add the corresponding documentation and send out a v2!
Thanks for looking at it so quickly.

Thanks,
Dennis


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/4] blktrace: add support to interleave blktrace files
  2018-09-19 19:29     ` Dennis Zhou
@ 2018-09-19 19:34       ` Jens Axboe
  0 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2018-09-19 19:34 UTC (permalink / raw)
  To: Dennis Zhou; +Cc: Tejun Heo, Andy Newell, fio, kernel-team

On 9/19/18 1:29 PM, Dennis Zhou wrote:
> Hi Jens,
> 
> On Wed, Sep 19, 2018 at 12:47:37PM -0600, Jens Axboe wrote:
>> On 9/19/18 12:25 PM, Dennis Zhou wrote:
>>> Running concurrent workloads via multiple jobs can lead to
>>> nondeterministic results as we are at the scheduler's mercy. While the
>>> actual performance of the workload may vary regardless, merging the
>>> traces makes the order of events consistent.
>>>
>>> This patch introduces two options: --merge_blktrace_file=<file> and
>>> --merge-blktrace-only. When the first is specified, files that are ':'
>>> separated in --read_iolog are passed to a merge function which then
>>> produces a single time-sorted trace file. This file is then used in
>>> place of the ':' separated list, or fio exits if --merge-blktrace-only
>>> is specified.
>>>
>>> During merging, events are filtered down to those fio cares about, and
>>> the pdu is discarded as well.
>>
>> This looks fine to me, but it's missing documentation (fio.1 and HOWTO)
>> for added options. For the feature as a whole as well.
>>
>>> diff --git a/thread_options.h b/thread_options.h
>>> index 39315834..99552326 100644
>>> --- a/thread_options.h
>>> +++ b/thread_options.h
>>> @@ -258,6 +258,7 @@ struct thread_options {
>>>  	char *read_iolog_file;
>>>  	bool read_iolog_chunked;
>>>  	char *write_iolog_file;
>>> +	char *merge_blktrace_file;
>>>  
>>>  	unsigned int write_bw_log;
>>>  	unsigned int write_lat_log;
>>
>> This needs an accompanying change to struct thread_options_pack, and a
>> cconv.c addition to ensure it's converted properly over the wire for
>> client/server runs of fio.
>>
>> Basically both comments here apply to the rest of the series as well :-)
>>
> 
> Ah that makes sense. I wasn't quite sure what the _pack was for. Now I
> get it. I'll add the corresponding documentation and send out a v2!
> Thanks for looking at it so quickly.

No problem - also forgot to mention that you want to bump FIO_SERVER_VER
as well, if you modify thread_options_pack.

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/4] blktrace: add option to scale a trace
  2018-09-19 18:49   ` Jens Axboe
@ 2018-09-19 19:56     ` Dennis Zhou
  2018-09-19 19:59       ` Jens Axboe
  0 siblings, 1 reply; 14+ messages in thread
From: Dennis Zhou @ 2018-09-19 19:56 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Tejun Heo, Andy Newell, fio, kernel-team

On Wed, Sep 19, 2018 at 12:49:15PM -0600, Jens Axboe wrote:
> On 9/19/18 12:25 PM, Dennis Zhou wrote:
> > As we explore stacking traces, it is nice to be able to scale a trace to
> > understand how the traces end up interacting.
> > 
> > This patch adds scaling by letting the user pass in percentages to scale
> > a trace by. When passed '--merge_blktrace_scalars="100"', the trace is
> > run at 100% speed. If passed 50%, this will halve the trace timestamps.
> > The new option takes in a colon separated list that index-wise pairs
> > with the passed files in "--read_iolog".
> 
> How is this different than replay_time_scale?
> 

I think merge_blktrace_scalars is a trace building parameter whereas
replay_time_scale is a runtime parameter. merge_blktrace_scalars is an
index-paired list with the logs passed to --read_iolog allowing for each
trace to be independently scaled. replay_time_scale happens at runtime
and scales the entire trace uniformly. And because replay_time_scale
happens at runtime, I'm not sure repurposing the numbers would be super
intuitive.

Thanks,
Dennis


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/4] blktrace: add option to scale a trace
  2018-09-19 19:56     ` Dennis Zhou
@ 2018-09-19 19:59       ` Jens Axboe
  2018-09-19 21:06         ` Dennis Zhou
  0 siblings, 1 reply; 14+ messages in thread
From: Jens Axboe @ 2018-09-19 19:59 UTC (permalink / raw)
  To: Dennis Zhou; +Cc: Tejun Heo, Andy Newell, fio, kernel-team

On 9/19/18 1:56 PM, Dennis Zhou wrote:
> On Wed, Sep 19, 2018 at 12:49:15PM -0600, Jens Axboe wrote:
>> On 9/19/18 12:25 PM, Dennis Zhou wrote:
>>> As we explore stacking traces, it is nice to be able to scale a trace to
>>> understand how the traces end up interacting.
>>>
>>> This patch adds scaling by letting the user pass in percentages to scale
>>> a trace by. When passed '--merge_blktrace_scalars="100"', the trace is
>>> run at 100% speed. If passed 50%, this will halve the trace timestamps.
>>> The new option takes in a colon separated list that index-wise pairs
>>> with the passed files in "--read_iolog".
>>
>> How is this different than replay_time_scale?
>>
> 
> I think merge_blktrace_scalars is a trace building parameter whereas
> replay_time scale is a runtime parameter. merge_blktrace_scalars is an
> index-paired list with the logs passed to --read_iolog allowing for each
> trace to be independently scaled. replay_time_scale happens at runtime
> and scales the entire trace uniformly. And because replay_time_scale
> happens at runtime, I'm not sure repurposing the numbers would be super
> intuitive.

Not sure I see the difference, if you just allow replay_time_scale to
take multiple values (one for each trace)?

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/4] blktrace: add option to scale a trace
  2018-09-19 19:59       ` Jens Axboe
@ 2018-09-19 21:06         ` Dennis Zhou
  2018-09-19 21:19           ` Jens Axboe
  0 siblings, 1 reply; 14+ messages in thread
From: Dennis Zhou @ 2018-09-19 21:06 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Dennis Zhou, Tejun Heo, Andy Newell, fio, kernel-team

On Wed, Sep 19, 2018 at 01:59:53PM -0600, Jens Axboe wrote:
> On 9/19/18 1:56 PM, Dennis Zhou wrote:
> > On Wed, Sep 19, 2018 at 12:49:15PM -0600, Jens Axboe wrote:
> >> On 9/19/18 12:25 PM, Dennis Zhou wrote:
> >>> As we explore stacking traces, it is nice to be able to scale a trace to
> >>> understand how the traces end up interacting.
> >>>
> >>> This patch adds scaling by letting the user pass in percentages to scale
> >>> a trace by. When passed '--merge_blktrace_scalars="100"', the trace is
> >>> run at 100% speed. If passed 50%, this will halve the trace timestamps.
> >>> The new option takes in a colon separated list that index-wise pairs
> >>> with the passed files in "--read_iolog".
> >>
> >> How is this different than replay_time_scale?
> >>
> > 
> > I think merge_blktrace_scalars is a trace building parameter whereas
> > replay_time_scale is a runtime parameter. merge_blktrace_scalars is an
> > index-paired list with the logs passed to --read_iolog allowing for each
> > trace to be independently scaled. replay_time_scale happens at runtime
> > and scales the entire trace uniformly. And because replay_time_scale
> > happens at runtime, I'm not sure repurposing the numbers would be super
> > intuitive.
> 
> Not sure I see the difference, if you just allow replay_time_scale to
> take multiple values (one for each trace)?
> 

I'm imagining if I reused replay_time_scale, I could use those numbers
for merging, but then I'd have to reset it so that it doesn't affect the
trace a second time during runtime. I feel like this gets a little weird
as we're saying if you are merging, replay_time_scale gets applied
during the merge, otherwise it gets applied during runtime. Reusing also
makes it become a many (merging) to one (runtime) parameter change too
as merging works on multiple files (main thread) while runtime runs a
single file (worker thread).

Additionally, I don't think it's unreasonable to want to store the
merged trace in realtime and then run the merged trace at a different
pace, which would require the merge time and runtime knobs to be
different.

Thanks,
Dennis


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 3/4] blktrace: add option to scale a trace
  2018-09-19 21:06         ` Dennis Zhou
@ 2018-09-19 21:19           ` Jens Axboe
  0 siblings, 0 replies; 14+ messages in thread
From: Jens Axboe @ 2018-09-19 21:19 UTC (permalink / raw)
  To: Dennis Zhou; +Cc: Tejun Heo, Andy Newell, fio, kernel-team

On 9/19/18 3:06 PM, Dennis Zhou wrote:
> On Wed, Sep 19, 2018 at 01:59:53PM -0600, Jens Axboe wrote:
>> On 9/19/18 1:56 PM, Dennis Zhou wrote:
>>> On Wed, Sep 19, 2018 at 12:49:15PM -0600, Jens Axboe wrote:
>>>> On 9/19/18 12:25 PM, Dennis Zhou wrote:
>>>>> As we explore stacking traces, it is nice to be able to scale a trace to
>>>>> understand how the traces end up interacting.
>>>>>
>>>>> This patch adds scaling by letting the user pass in percentages to scale
>>>>> a trace by. When passed '--merge_blktrace_scalars="100"', the trace is
>>>>> run at 100% speed. If passed 50%, this will halve the trace timestamps.
>>>>> The new option takes in a colon separated list that index-wise pairs
>>>>> with the passed files in "--read_iolog".
>>>>
>>>> How is this different than replay_time_scale?
>>>>
>>>
>>> I think merge_blktrace_scalars is a trace building parameter whereas
>>> replay_time_scale is a runtime parameter. merge_blktrace_scalars is an
>>> index-paired list with the logs passed to --read_iolog allowing for each
>>> trace to be independently scaled. replay_time_scale happens at runtime
>>> and scales the entire trace uniformly. And because replay_time_scale
>>> happens at runtime, I'm not sure repurposing the numbers would be super
>>> intuitive.
>>
>> Not sure I see the difference, if you just allow replay_time_scale to
>> take multiple values (one for each trace)?
>>
> 
> I'm imagining if I reused replay_time_scale, I could use those numbers
> for merging, but then I'd have to reset it so that it doesn't affect the
> trace a second time during runtime. I feel like this gets a little weird
> as we're saying if you are merging, replay_time_scale gets applied
> during the merge, otherwise it gets applied during runtime. Reusing also
> makes it become a many (merging) to one (runtime) parameter change too
> as merging works on multiple files (main thread) while runtime runs a
> single file (worker thread).
> 
> Additionally, I don't think it's unreasonable to want to store the
> merged trace in realtime and then run the merged trace at a different
> pace, which would require the merge time and runtime knobs to be
> different.

I guess that makes sense, especially as a sub-option to merging. Just
ensure it's all properly documented :-)

-- 
Jens Axboe



^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 1/4] options: rename name string operations for more general use
  2018-09-20 18:08 [PATCH v2 0/4] add option to interleave blktraces Dennis Zhou
@ 2018-09-20 18:08 ` Dennis Zhou
  0 siblings, 0 replies; 14+ messages in thread
From: Dennis Zhou @ 2018-09-20 18:08 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Tejun Heo, Andy Newell, fio, kernel-team, Dennis Zhou

get_next_name() and get_max_name_idx() are helpers that iterate over and
split a ':' separated string. Rename them (s/name/str/g) to make them
more generic; they will also be used to parse the file paths of the
blktraces to merge.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 options.c | 18 +++++++++---------
 options.h |  2 ++
 2 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/options.c b/options.c
index 6bd74555..824abee0 100644
--- a/options.c
+++ b/options.c
@@ -1155,7 +1155,7 @@ static int str_steadystate_cb(void *data, const char *str)
  * is escaped with a '\', then that ':' is part of the filename and does not
  * indicate a new file.
  */
-static char *get_next_name(char **ptr)
+char *get_next_str(char **ptr)
 {
 	char *str = *ptr;
 	char *p, *start;
@@ -1197,14 +1197,14 @@ static char *get_next_name(char **ptr)
 }
 
 
-static int get_max_name_idx(char *input)
+int get_max_str_idx(char *input)
 {
 	unsigned int cur_idx;
 	char *str, *p;
 
 	p = str = strdup(input);
 	for (cur_idx = 0; ; cur_idx++)
-		if (get_next_name(&str) == NULL)
+		if (get_next_str(&str) == NULL)
 			break;
 
 	free(p);
@@ -1224,9 +1224,9 @@ int set_name_idx(char *target, size_t tlen, char *input, int index,
 
 	p = str = strdup(input);
 
-	index %= get_max_name_idx(input);
+	index %= get_max_str_idx(input);
 	for (cur_idx = 0; cur_idx <= index; cur_idx++)
-		fname = get_next_name(&str);
+		fname = get_next_str(&str);
 
 	if (client_sockaddr_str[0] && unique_filename) {
 		len = snprintf(target, tlen, "%s/%s.", fname,
@@ -1247,9 +1247,9 @@ char* get_name_by_idx(char *input, int index)
 
 	p = str = strdup(input);
 
-	index %= get_max_name_idx(input);
+	index %= get_max_str_idx(input);
 	for (cur_idx = 0; cur_idx <= index; cur_idx++)
-		fname = get_next_name(&str);
+		fname = get_next_str(&str);
 
 	fname = strdup(fname);
 	free(p);
@@ -1273,7 +1273,7 @@ static int str_filename_cb(void *data, const char *input)
 	if (!td->files_index)
 		td->o.nr_files = 0;
 
-	while ((fname = get_next_name(&str)) != NULL) {
+	while ((fname = get_next_str(&str)) != NULL) {
 		if (!strlen(fname))
 			break;
 		add_file(td, fname, 0, 1);
@@ -1294,7 +1294,7 @@ static int str_directory_cb(void *data, const char fio_unused *unused)
 		return 0;
 
 	p = str = strdup(td->o.directory);
-	while ((dirname = get_next_name(&str)) != NULL) {
+	while ((dirname = get_next_str(&str)) != NULL) {
 		if (lstat(dirname, &sb) < 0) {
 			ret = errno;
 
diff --git a/options.h b/options.h
index 8fdd1363..5276f31e 100644
--- a/options.h
+++ b/options.h
@@ -16,6 +16,8 @@ void add_opt_posval(const char *, const char *, const char *);
 void del_opt_posval(const char *, const char *);
 struct thread_data;
 void fio_options_free(struct thread_data *);
+char *get_next_str(char **ptr);
+int get_max_str_idx(char *input);
 char* get_name_by_idx(char *input, int index);
 int set_name_idx(char *, size_t, char *, int, bool);
 
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread


Thread overview: 14+ messages
2018-09-19 18:25 [PATCH 0/4] add option to interleave blktraces Dennis Zhou
2018-09-19 18:25 ` [PATCH 1/4] options: rename name string operations for more general use Dennis Zhou
2018-09-19 18:25 ` [PATCH 2/4] blktrace: add support to interleave blktrace files Dennis Zhou
2018-09-19 18:47   ` Jens Axboe
2018-09-19 19:29     ` Dennis Zhou
2018-09-19 19:34       ` Jens Axboe
2018-09-19 18:25 ` [PATCH 3/4] blktrace: add option to scale a trace Dennis Zhou
2018-09-19 18:49   ` Jens Axboe
2018-09-19 19:56     ` Dennis Zhou
2018-09-19 19:59       ` Jens Axboe
2018-09-19 21:06         ` Dennis Zhou
2018-09-19 21:19           ` Jens Axboe
2018-09-19 18:25 ` [PATCH 4/4] blktrace: add option to iterate over a trace multiple times Dennis Zhou
  -- strict thread matches above, loose matches on Subject: below --
2018-09-20 18:08 [PATCH v2 0/4] add option to interleave blktraces Dennis Zhou
2018-09-20 18:08 ` [PATCH 1/4] options: rename name string operations for more general use Dennis Zhou
