From: "jon ernst" <jonernst07@gmx.com>
To: unlisted-recipients:; (no To-header on input)
Cc: linux-ext4@vger.kernel.org
Subject: Re: xfstest ext4 'utility required' run stop
Date: Thu, 13 Jun 2013 01:17:30 -0400
Message-ID: <20130613051731.207470@gmx.com>
Hi all,

Regarding xfstests: I installed fio, but I still get an error when I run ext4 test 301. Could anyone enlighten me as to what is wrong with my configuration?

I am running the latest ext4 dev-branch code with the latest xfstests, and my fio version is 2.1.1.

Thank you!
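In case it is relevant, these are the sanity checks I plan to run on my fio
binary before digging further (the commands are standard; /usr/bin/fio is the
path that appears in the log below):

    /usr/bin/fio --version          # confirm which fio binary xfstests is using
    ldconfig -p | grep libaio       # is the libaio runtime library installed at all?
    ldd /usr/bin/fio | grep -i aio  # was this fio binary linked against libaio?

This is the whole log; I have also added a couple of notes on the two failures
I can see at the very end of it: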
fio: failed parsing ioengine=ioe_e4defrag
fio: job global dropped
fio valid values: sync Use read/write
: psync Use pread/pwrite
: vsync Use readv/writev
: libaio Linux native asynchronous IO
: posixaio POSIX asynchronous IO
: mmap Memory mapped IO
: splice splice/vmsplice based IO
: netsplice splice/vmsplice to/from the network
: sg SCSI generic v3 IO
: null Testing engine (no data transfer)
: net Network IO
: syslet-rw syslet enabled async pread/pwrite IO
: cpuio CPU cycle burner engine
: binject binject direct inject block engine
: rdma RDMA IO engine
: external Load external engine (append name)
fio --ioengine=ioe_e4defrag --iodepth=1 --directory=/device8 --filesize=3424725990 --size=999G --buffered=0 --fadvise_hint=0 --name=defrag-4k --ioengine=e4defrag --iodepth=1 --bs=128k --donorname=test1.def --inplace=0 --rw=write --numjobs=4 --runtime=30*1 --time_based --filename=test1 --name=aio-dio-verifier --ioengine=libaio --iodepth=128*1 --numjobs=1 --verify=crc32c-intel --verify_fatal=1 --verify_dump=1 --verify_backlog=1024 --verify_async=1 --verifysort=1 --direct=1 --bs=64k --rw=randwrite --runtime=30*1 --time_based --filename=test1
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
629552 inodes, 2518180 blocks
125909 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2579496960
77 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables:
# Common e4defrag regression tests
[global]
ioengine=ioe_e4defrag
iodepth=1
directory=/device8
filesize=3424725990
size=999G
buffered=0
fadvise_hint=0
#################################
# Test1
# Defragment file while other task does direct io
# Continious sequential defrag activity
[defrag-4k]
ioengine=e4defrag
iodepth=1
bs=128k
donorname=test1.def
filename=test1
inplace=0
rw=write
numjobs=4
runtime=30*1
time_based
# Verifier
[aio-dio-verifier]
ioengine=libaio
iodepth=128*1
numjobs=1
verify=crc32c-intel
verify_fatal=1
verify_dump=1
verify_backlog=1024
verify_async=1
verifysort=1
direct=1
bs=64k
rw=randwrite
filename=test1
runtime=30*1
time_based
# /usr/bin/fio /tmp/5884.fio
fio: engine libaio not loadable
fio: failed to load engine libaio
defrag-4k: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=e4defrag, iodepth=1
...
defrag-4k: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=e4defrag, iodepth=1
fio: file:ioengines.c:99, func=dlopen, error=libaio: cannot open shared object file: No such file or directory
failed: '/usr/bin/fio /tmp/5884.fio'
fio --ioengine=ioe_e4defrag --iodepth=1 --directory=/device8 --filesize=3424725990 --size=999G --buffered=0 --fadvise_hint=0 --name=defrag-4k --ioengine=e4defrag --iodepth=1 --bs=128k --donorname=test1.def --inplace=0 --rw=write --numjobs=4 --runtime=30*1 --time_based --filename=test1 --name=aio-dio-verifier --ioengine=libaio --iodepth=128*1 --numjobs=1 --verify=crc32c-intel --verify_fatal=1 --verify_dump=1 --verify_backlog=1024 --verify_async=1 --verifysort=1 --direct=1 --bs=64k --rw=randwrite --runtime=30*1 --time_based --filename=test1
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
629552 inodes, 2518180 blocks
125909 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2579496960
77 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables:
# Common e4defrag regression tests
[global]
ioengine=ioe_e4defrag
iodepth=1
directory=/device8
filesize=3424725990
size=999G
buffered=0
fadvise_hint=0
#################################
# Test1
# Defragment file while other task does direct io
# Continious sequential defrag activity
[defrag-4k]
ioengine=e4defrag
iodepth=1
bs=128k
donorname=test1.def
filename=test1
inplace=0
rw=write
numjobs=4
runtime=30*1
time_based
# Verifier
[aio-dio-verifier]
ioengine=libaio
iodepth=128*1
numjobs=1
verify=crc32c-intel
verify_fatal=1
verify_dump=1
verify_backlog=1024
verify_async=1
verifysort=1
direct=1
bs=64k
rw=randwrite
filename=test1
runtime=30*1
time_based
# /usr/bin/fio /tmp/3999.fio
fio: engine libaio not loadable
fio: failed to load engine libaio
defrag-4k: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=e4defrag, iodepth=1
...
defrag-4k: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=e4defrag, iodepth=1
fio: file:ioengines.c:99, func=dlopen, error=libaio: cannot open shared object file: No such file or directory
failed: '/usr/bin/fio /tmp/3999.fio'
fio --ioengine=ioe_e4defrag --iodepth=1 --directory=/device8 --filesize=3424725990 --size=999G --buffered=0 --fadvise_hint=0 --name=defrag-4k --ioengine=e4defrag --iodepth=1 --bs=128k --donorname=test1.def --inplace=0 --rw=write --numjobs=4 --runtime=30*1 --time_based --filename=test1 --name=aio-dio-verifier --ioengine=libaio --iodepth=128*1 --numjobs=1 --verify=crc32c-intel --verify_fatal=1 --verify_dump=1 --verify_backlog=1024 --verify_async=1 --verifysort=1 --direct=1 --bs=64k --rw=randwrite --runtime=30*1 --time_based --filename=test1
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
629552 inodes, 2518180 blocks
125909 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2579496960
77 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables:
# Common e4defrag regression tests
[global]
ioengine=ioe_e4defrag
iodepth=1
directory=/device8
filesize=3424725990
size=999G
buffered=0
fadvise_hint=0
#################################
# Test1
# Defragment file while other task does direct io
# Continious sequential defrag activity
[defrag-4k]
ioengine=e4defrag
iodepth=1
bs=128k
donorname=test1.def
filename=test1
inplace=0
rw=write
numjobs=4
runtime=30*1
time_based
# Verifier
[aio-dio-verifier]
ioengine=libaio
iodepth=128*1
numjobs=1
verify=crc32c-intel
verify_fatal=1
verify_dump=1
verify_backlog=1024
verify_async=1
verifysort=1
direct=1
bs=64k
rw=randwrite
filename=test1
runtime=30*1
time_based
# /usr/bin/fio /tmp/12231.fio
fio: engine libaio not loadable
fio: failed to load engine libaio
defrag-4k: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=e4defrag, iodepth=1
...
defrag-4k: (g=0): rw=write, bs=128K-128K/128K-128K/128K-128K, ioengine=e4defrag, iodepth=1
fio: file:ioengines.c:99, func=dlopen, error=libaio: cannot open shared object file: No such file or directory
failed: '/usr/bin/fio /tmp/12231.fio'
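A couple of notes on what I can see in the log above (my own reading of it,
so please correct me if I am off base):

1) The first run fails with "failed parsing ioengine=ioe_e4defrag" for the
   [global] section, and neither "ioe_e4defrag" nor "e4defrag" shows up in
   the list of valid engines this fio binary prints.

2) Every run then fails with "engine libaio not loadable" followed by a
   dlopen error ("libaio: cannot open shared object file: No such file or
   directory"), so either libaio is not installed here or this fio build
   has no native libaio support and falls back to dlopen.

If the answer is simply to install libaio and rebuild fio so that both
engines are available, I assume it would look roughly like this (the package
names are my guess for a Debian-based system; the repository is the usual
fio git tree):

    # install the libaio runtime and development headers (assumed package names)
    apt-get install libaio1 libaio-dev

    # rebuild fio from source so configure can pick up libaio (and, if the
    # ext4 move-extent headers are available, the e4defrag engine)
    git clone git://git.kernel.dk/fio.git
    cd fio
    git checkout fio-2.1.1
    ./configure            # should now report libaio support
    make
    make install

Does that sound right, or is something else going on with ext4/301?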