public inbox for linux-xfs@vger.kernel.org
* xfstests: 226: have xfs_io use bigger buffers
@ 2010-05-19 22:44 Alex Elder
  2010-05-20  4:39 ` Eric Sandeen
From: Alex Elder @ 2010-05-19 22:44 UTC (permalink / raw)
  To: xfs

By default xfs_io uses a buffer size of 4096 bytes.  On test 226
the result is that it runs at least an order of magnitude slower
than it needs to.

Add a flag to the "pwrite" command sent to xfs_io so it uses
larger buffers, thereby speeding things up considerably.

Signed-off-by: Alex Elder <aelder@sgi.com>

---
 226 |    9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
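
The patch below builds the buffer-size argument with `expr`.  As a
reference for the computation, here is a minimal sketch using POSIX
shell arithmetic instead of forking `expr`; the commented xfs_io
invocation mirrors the one in the patch and assumes xfs_io and a
scratch mount are available on a test system:

```shell
# Equivalent of the patch's  buffer="-b $(expr 512 \* 1024)",
# written with shell arithmetic expansion (no external command).
bufsize=$((512 * 1024))      # 524288 bytes, i.e. a 512 KiB buffer
buffer="-b $bufsize"
echo "$buffer"

# The resulting call then looks like (paths as in the patch):
#   xfs_io -F -f -c "pwrite $buffer 0 64m" $SCRATCH_MNT/test
```

The larger buffer means each pwrite loop issues far fewer syscalls
for the same 64m of data, which is where the speedup comes from.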

Index: b/226
===================================================================
--- a/226
+++ b/226
@@ -49,10 +49,14 @@ _scratch_mount
 
 loops=16
 
+# Buffer size argument supplied to xfs_io "pwrite" command
+buffer="-b $(expr 512 \* 1024)"
+
 echo "--> $loops buffered 64m writes in a loop"
 for I in `seq 1 $loops`; do
 	echo -n "$I "
-	xfs_io -F -f -c 'pwrite 0 64m' $SCRATCH_MNT/test >> $seq.full
+	xfs_io -F -f \
+		-c "pwrite ${buffer} 0 64m" $SCRATCH_MNT/test >> $seq.full
 	rm -f $SCRATCH_MNT/test
 done
 
@@ -63,7 +67,8 @@ _scratch_mount
 echo "--> $loops direct 64m writes in a loop"
 for I in `seq 1 $loops`; do
 	echo -n "$I "
-	xfs_io -F -f -d -c 'pwrite 0 64m' $SCRATCH_MNT/test >> $seq.full
+	xfs_io -F -f -d \
+		-c "pwrite ${buffer} 0 64m" $SCRATCH_MNT/test >> $seq.full
 	rm -f $SCRATCH_MNT/test 
 done
 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: xfstests: 226: have xfs_io use bigger buffers
  2010-05-19 22:44 xfstests: 226: have xfs_io use bigger buffers Alex Elder
@ 2010-05-20  4:39 ` Eric Sandeen
From: Eric Sandeen @ 2010-05-20  4:39 UTC (permalink / raw)
  To: Alex Elder; +Cc: xfs

Alex Elder wrote:
> By default xfs_io uses a buffer size of 4096 bytes.  On test 226
> the result is that it runs at least an order of magnitude slower
> than it needs to.
> 
> Add a flag to the "pwrite" command sent to xfs_io so it uses
> larger buffers, thereby speeding things up considerably.
> 
> Signed-off-by: Alex Elder <aelder@sgi.com>

Reviewed-by: Eric Sandeen <sandeen@sandeen.net>



