* [PATCH] xfs/013: allow non-write fsstress operations in background workload
@ 2014-06-03 18:28 Brian Foster
2014-06-17 23:55 ` Dave Chinner
0 siblings, 1 reply; 3+ messages in thread
From: Brian Foster @ 2014-06-03 18:28 UTC (permalink / raw)
To: fstests; +Cc: xfs
It has been reported that test xfs/013 probably uses more space than
necessary, exhausting space when run against a several-GB ramdisk.
xfs/013 primarily creates, links and removes inodes. Most of the space
consumption occurs via the background fsstress workload.
Remove the fsstress -w option that suppresses non-write operations. This
slightly reduces the storage footprint while still providing a
background workload for the test.
Signed-off-by: Brian Foster <bfoster@redhat.com>
---
Dave,
I was able to squeeze an xfs/013 run into a 3GB ramdisk on my VM with
this tweak. Let me know if this works for you. If not, we could probably
start turning off some of the heavier-allocating fsstress ops, so long as
the cost isn't too high. I'm measuring the effectiveness of this test via
the fibt (free inode btree) stats exported to /proc.
Brian
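(Not part of the patch, but a rough sketch of that measurement: diffing
the fibt2 counters across a run. The sample lines and field positions
below are made-up placeholders, not real output; the actual fibt2 layout
is defined in fs/xfs/xfs_stats.c.)

```shell
#!/bin/sh
# Compare "fibt2" (free inode btree) counter snapshots from
# /proc/fs/xfs/stat taken before and after a test run. The values here
# are illustrative; on a live system you would capture them with:
#   grep '^fibt2' /proc/fs/xfs/stat
before="fibt2 10 20 5 1"
after="fibt2 250 600 180 40"

set -- $before; shift; bvals="$*"   # drop the "fibt2" tag
set -- $after;  shift; avals="$*"

# Print the per-field delta so btree activity during the run is visible.
i=1
for bval in $bvals; do
    aval=$(echo "$avals" | cut -d' ' -f"$i")
    echo "field $i delta: $((aval - bval))"
    i=$((i + 1))
done
```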
tests/xfs/013 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tests/xfs/013 b/tests/xfs/013
index e95d027..d47bf53 100755
--- a/tests/xfs/013
+++ b/tests/xfs/013
@@ -121,7 +121,7 @@ _create $SCRATCH_MNT/dir1 $COUNT
_cleaner $SCRATCH_MNT $LOOPS $MINDIRS &
# start a background stress workload on the fs
-$FSSTRESS_PROG -d $SCRATCH_MNT/fsstress -w -n 9999999 -p 2 -S t \
+$FSSTRESS_PROG -d $SCRATCH_MNT/fsstress -n 9999999 -p 2 -S t \
>> $seqres.full 2>&1 &
# Each cycle clones the current directory and makes a random file replacement
--
1.8.3.1
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: [PATCH] xfs/013: allow non-write fsstress operations in background workload
2014-06-03 18:28 [PATCH] xfs/013: allow non-write fsstress operations in background workload Brian Foster
@ 2014-06-17 23:55 ` Dave Chinner
2014-06-18 1:22 ` Dave Chinner
0 siblings, 1 reply; 3+ messages in thread
From: Dave Chinner @ 2014-06-17 23:55 UTC (permalink / raw)
To: Brian Foster; +Cc: fstests, xfs
On Tue, Jun 03, 2014 at 02:28:49PM -0400, Brian Foster wrote:
> It has been reported that test xfs/013 probably uses more space than
> necessary, exhausting space when run against a several-GB ramdisk.
> xfs/013 primarily creates, links and removes inodes. Most of the space
> consumption occurs via the background fsstress workload.
>
> Remove the fsstress -w option that suppresses non-write operations. This
> slightly reduces the storage footprint while still providing a
> background workload for the test.
>
> Signed-off-by: Brian Foster <bfoster@redhat.com>
This change makes the runtime blow out on a ramdisk from 4s to over
ten minutes on my test machine. Non-ramdisk machines seem to be
completely unaffected.
I was going to say "no, bad change", but I noticed that my
spinning disk VMs weren't affected at all. Looking more closely,
xfs/013 is now pegging all 16 CPUs on the VM. The profile:
- 60.73% [kernel] [k] do_raw_spin_lock
- do_raw_spin_lock
- 99.98% _raw_spin_lock
- 99.83% sync_inodes_sb
sync_inodes_one_sb
iterate_supers
sys_sync
tracesys
sync
- 32.76% [kernel] [k] delay_tsc
- delay_tsc
- 98.43% __delay
do_raw_spin_lock
- _raw_spin_lock
- 99.99% sync_inodes_sb
sync_inodes_one_sb
iterate_supers
sys_sync
tracesys
sync
OK, that's a kernel problem, not a problem with the change in the
test...
/me goes and dusts off his "concurrent sync scalability" patches.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: [PATCH] xfs/013: allow non-write fsstress operations in background workload
2014-06-17 23:55 ` Dave Chinner
@ 2014-06-18 1:22 ` Dave Chinner
0 siblings, 0 replies; 3+ messages in thread
From: Dave Chinner @ 2014-06-18 1:22 UTC (permalink / raw)
To: Brian Foster; +Cc: fstests, xfs
On Wed, Jun 18, 2014 at 09:55:46AM +1000, Dave Chinner wrote:
> On Tue, Jun 03, 2014 at 02:28:49PM -0400, Brian Foster wrote:
> > It has been reported that test xfs/013 probably uses more space than
> > necessary, exhausting space when run against a several-GB ramdisk.
> > xfs/013 primarily creates, links and removes inodes. Most of the space
> > consumption occurs via the background fsstress workload.
> >
> > Remove the fsstress -w option that suppresses non-write operations. This
> > slightly reduces the storage footprint while still providing a
> > background workload for the test.
> >
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
>
> This change makes the runtime blow out on a ramdisk from 4s to over
> ten minutes on my test machine. Non-ramdisk machines seem to be
> completely unaffected.
>
> I was going to say "no, bad change", but I noticed that my
> spinning disk VMs weren't affected at all. Looking more closely,
> xfs/013 is now pegging all 16 CPUs on the VM. The profile:
>
> - 60.73% [kernel] [k] do_raw_spin_lock
> - do_raw_spin_lock
> - 99.98% _raw_spin_lock
> - 99.83% sync_inodes_sb
> sync_inodes_one_sb
> iterate_supers
> sys_sync
> tracesys
> sync
> - 32.76% [kernel] [k] delay_tsc
> - delay_tsc
> - 98.43% __delay
> do_raw_spin_lock
> - _raw_spin_lock
> - 99.99% sync_inodes_sb
> sync_inodes_one_sb
> iterate_supers
> sys_sync
> tracesys
> sync
>
> OK, that's a kernel problem, not a problem with the change in the
> test...
>
> /me goes and dusts off his "concurrent sync scalability" patches.
Turns out the reason for this problem suddenly showing up was that I
had another (500TB) XFS filesystem mounted that had several million
clean cached inodes on it from other testing I was doing before the
xfstests run. Even so, having sync go off the deep end when there's
lots of clean cached inodes seems like a Bad Thing to me. :/
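(A quick way to spot that state before a run is to look at the VFS
inode cache counters. This is a sketch; /proc/sys/fs/inode-nr reports
two numbers, and the interpretation in the comments is the documented
sysctl semantics, not anything specific to this thread.)

```shell
#!/bin/sh
# /proc/sys/fs/inode-nr holds two numbers: nr_inodes (inodes allocated
# system-wide) and nr_unused (inodes with no users, i.e. the clean
# cached inodes that a sb-wide sync may still have to iterate over).
read nr_inodes nr_unused < /proc/sys/fs/inode-nr
echo "allocated inodes: $nr_inodes, unused (cached): $nr_unused"
```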
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com