fstests.vger.kernel.org archive mirror
* [PATCH v2 1/3] xfstests: dedupe a single big file and verify integrity
@ 2018-06-20  8:41 Zorro Lang
  2018-06-20  8:41 ` [PATCH v2 2/3] xfstests: iterate dedupe integrity test Zorro Lang
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Zorro Lang @ 2018-06-20  8:41 UTC (permalink / raw)
  To: fstests; +Cc: linux-xfs

Duperemove is a tool for finding duplicated extents and submitting
them for deduplication, and it supports XFS. This case tries to
verify the integrity of XFS after running duperemove.
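Stripped of the xfstests harness, the check this case performs boils down
to the sketch below (illustrative only; the ':' is a placeholder for the
real `$DUPEREMOVE_PROG -dr --dedupe-options=same` pass, and the tiny temp
file stands in for the 2G file of 0x55 bytes):

```shell
#!/bin/bash
# Snapshot-then-verify sketch: record checksums before the dedupe pass,
# then re-check them afterwards. Paths are throwaway temp files.
dir=$(mktemp -d)
printf '%0.s\x55' {1..4096} > "$dir/big.file"  # small stand-in for the 2G 0x55 file
md5sum "$dir/big.file" > "$dir/md5.sum"        # record checksums before dedupe
:                                              # dedupe pass would run here
md5sum -c --quiet "$dir/md5.sum" && echo "integrity OK"
rm -rf "$dir"
```

The point is that dedupe must only change the on-disk layout, never the
data, so the checksums must verify unchanged.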

Signed-off-by: Zorro Lang <zlang@redhat.com>
---

Thanks for Eryu's review.

V2 changed $TEST_DIR/${seq}md5.sum to $tmp.md5sum.

I didn't move this case to generic, because the duperemove tool only
supports Btrfs and XFS for now.

Thanks,
Zorro


 common/config        |  1 +
 tests/shared/008     | 79 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 tests/shared/008.out |  3 ++
 tests/shared/group   |  1 +
 4 files changed, 84 insertions(+)
 create mode 100755 tests/shared/008
 create mode 100644 tests/shared/008.out

diff --git a/common/config b/common/config
index 09e7ffee..d02d6ed5 100644
--- a/common/config
+++ b/common/config
@@ -192,6 +192,7 @@ export SETCAP_PROG="$(type -P setcap)"
 export GETCAP_PROG="$(type -P getcap)"
 export CHECKBASHISMS_PROG="$(type -P checkbashisms)"
 export XFS_INFO_PROG="$(type -P xfs_info)"
+export DUPEREMOVE_PROG="$(type -P duperemove)"
 
 # use 'udevadm settle' or 'udevsettle' to wait for lv to be settled.
 # newer systems have udevadm command but older systems like RHEL5 don't.
diff --git a/tests/shared/008 b/tests/shared/008
new file mode 100755
index 00000000..a28f5cc1
--- /dev/null
+++ b/tests/shared/008
@@ -0,0 +1,79 @@
+#! /bin/bash
+# FS QA Test 008
+#
+# Dedupe a single big file and verify integrity
+#
+#-----------------------------------------------------------------------
+# Copyright (c) 2018 Red Hat Inc.  All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it would be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write the Free Software Foundation,
+# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+#-----------------------------------------------------------------------
+#
+
+seq=`basename $0`
+seqres=$RESULT_DIR/$seq
+echo "QA output created by $seq"
+
+here=`pwd`
+tmp=/tmp/$$
+status=1	# failure is the default!
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+_cleanup()
+{
+	cd /
+	rm -f $tmp.*
+}
+
+# get standard environment, filters and checks
+. ./common/rc
+. ./common/filter
+. ./common/reflink
+
+# remove previous $seqres.full before test
+rm -f $seqres.full
+
+# duperemove only supports btrfs and xfs (with reflink feature).
+# Add other filesystems here if duperemove supports more later.
+_supported_fs xfs btrfs
+_supported_os Linux
+_require_scratch_dedupe
+_require_command "$DUPEREMOVE_PROG" duperemove
+
+fssize=$((2 * 1024 * 1024 * 1024))
+_scratch_mkfs_sized $fssize > $seqres.full 2>&1
+_scratch_mount >> $seqres.full 2>&1
+
+# fill the fs with a big file that has uniform contents
+$XFS_IO_PROG -f -c "pwrite -S 0x55 0 $fssize" $SCRATCH_MNT/${seq}.file \
+	>> $seqres.full 2>&1
+md5sum $SCRATCH_MNT/${seq}.file > ${tmp}.md5sum
+
+echo "= before cycle mount ="
+# Dedupe with 1M blocksize
+$DUPEREMOVE_PROG -dr --dedupe-options=same -b 1048576 $SCRATCH_MNT/ >>$seqres.full 2>&1
+# Verify integrity
+md5sum -c --quiet ${tmp}.md5sum
+# Dedupe with 64k blocksize
+$DUPEREMOVE_PROG -dr --dedupe-options=same -b 65536 $SCRATCH_MNT/ >>$seqres.full 2>&1
+# Verify integrity again
+md5sum -c --quiet ${tmp}.md5sum
+
+# umount and mount again, verify pagecache contents don't mutate
+_scratch_cycle_mount
+echo "= after cycle mount ="
+md5sum -c --quiet ${tmp}.md5sum
+
+status=0
+exit
diff --git a/tests/shared/008.out b/tests/shared/008.out
new file mode 100644
index 00000000..f29d478f
--- /dev/null
+++ b/tests/shared/008.out
@@ -0,0 +1,3 @@
+QA output created by 008
+= before cycle mount =
+= after cycle mount =
diff --git a/tests/shared/group b/tests/shared/group
index b3663a03..49ffa8dd 100644
--- a/tests/shared/group
+++ b/tests/shared/group
@@ -10,6 +10,7 @@
 005 dangerous_fuzzers
 006 auto enospc
 007 dangerous_fuzzers
+008 auto stress dedupe
 032 mkfs auto quick
 272 auto enospc rw
 289 auto quick
-- 
2.14.4



* [PATCH v2 2/3] xfstests: iterate dedupe integrity test
  2018-06-20  8:41 [PATCH v2 1/3] xfstests: dedupe a single big file and verify integrity Zorro Lang
@ 2018-06-20  8:41 ` Zorro Lang
  2018-06-20 16:26   ` Darrick J. Wong
  2018-06-20  8:41 ` [PATCH v2 3/3] xfstests: dedupe with random io race test Zorro Lang
  2018-06-20 16:21 ` [PATCH v2 1/3] xfstests: dedupe a single big file and verify integrity Darrick J. Wong
  2 siblings, 1 reply; 8+ messages in thread
From: Zorro Lang @ 2018-06-20  8:41 UTC (permalink / raw)
  To: fstests; +Cc: linux-xfs

This case dedupes a dir, then copies the dir to a second dir, dedupes
the second dir, copies that dir on to a third, dedupes again, and so
on. At the end, it verifies the data in the last dir is still the same
as in the first one.
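Reduced to a self-contained sketch (three rounds on temp dirs; the ':'
marks where the real test runs the fsstress noise and the duperemove
pass), the flow is:

```shell
#!/bin/bash
# Copy d(N-1) -> dN, checksum dN, run the dedupe pass, verify dN --
# then check the last copy still matches the first.
base=$(mktemp -d)
mkdir "$base/d0"
echo "original data" > "$base/d0/file"

prev=d0
for i in 1 2 3; do
	cp -a "$base/$prev" "$base/d$i"
	(cd "$base" && find "d$i" -type f -exec md5sum {} \; > "md5.$i")
	:                                  # dedupe pass would run here
	(cd "$base" && md5sum -c --quiet "md5.$i") || exit 1
	prev="d$i"
done
cmp -s "$base/d0/file" "$base/d3/file" && echo "last dir matches first"
rm -rf "$base"
```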

Signed-off-by: Zorro Lang <zlang@redhat.com>
---

V2 made the changes below:
1) Added more description at the beginning of the case
2) Changed $TEST_DIR/${seq}md5.sum to $tmp.md5sum
3) Changed the for ...; do format
4) Removed the "-f mknod=0" fsstress option
5) Added some noise (by fsstress) in each test round.

Thanks,
Zorro

 tests/shared/009     | 119 +++++++++++++++++++++++++++++++++++++++++++++++++++
 tests/shared/009.out |   4 ++
 tests/shared/group   |   1 +
 3 files changed, 124 insertions(+)
 create mode 100755 tests/shared/009
 create mode 100644 tests/shared/009.out

diff --git a/tests/shared/009 b/tests/shared/009
new file mode 100755
index 00000000..5ed9faee
--- /dev/null
+++ b/tests/shared/009
@@ -0,0 +1,119 @@
+#! /bin/bash
+# FS QA Test 009
+#
+# Iterate dedupe integrity test. Copy the original data0 several
+# times (d0 -> d1, d1 -> d2, ... dn-1 -> dn), deduping dataN each time
+# before the copy. At the end, verify dataN is identical to data0.
+#
+#-----------------------------------------------------------------------
+# Copyright (c) 2018 Red Hat Inc.  All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it would be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write the Free Software Foundation,
+# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+#-----------------------------------------------------------------------
+#
+
+seq=`basename $0`
+seqres=$RESULT_DIR/$seq
+echo "QA output created by $seq"
+
+here=`pwd`
+tmp=/tmp/$$
+status=1	# failure is the default!
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+_cleanup()
+{
+	cd /
+	rm -f $tmp.*
+}
+
+# get standard environment, filters and checks
+. ./common/rc
+. ./common/filter
+. ./common/reflink
+
+# remove previous $seqres.full before test
+rm -f $seqres.full
+
+# real QA test starts here
+
+# duperemove only supports btrfs and xfs (with reflink feature).
+# Add other filesystems if it supports more later.
+_supported_fs xfs btrfs
+_supported_os Linux
+_require_scratch_dedupe
+_require_command "$DUPEREMOVE_PROG" duperemove
+
+_scratch_mkfs > $seqres.full 2>&1
+_scratch_mount >> $seqres.full 2>&1
+
+function iterate_dedup_verify()
+{
+	local src=$srcdir
+	local dest=$dupdir/1
+
+	for ((index = 1; index <= times; index++)); do
+		cp -a $src $dest
+		find $dest -type f -exec md5sum {} \; \
+			> $md5file$index
+		# Make some noise
+		$FSSTRESS_PROG $fsstress_opts -d $noisedir \
+			       -n 200 -p $((5 * LOAD_FACTOR)) >/dev/null 2>&1
+		# Too many output, so only save error output
+		$DUPEREMOVE_PROG -dr --dedupe-options=same $dupdir \
+			>/dev/null 2>$seqres.full
+		md5sum -c --quiet $md5file$index
+		src=$dest
+		dest=$dupdir/$((index + 1))
+	done
+}
+
+srcdir=$SCRATCH_MNT/src
+dupdir=$SCRATCH_MNT/dup
+noisedir=$dupdir/noise
+mkdir $srcdir $dupdir
+mkdir $noisedir
+
+md5file=${tmp}.md5sum
+
+fsstress_opts="-w -r"
+# Create some files to be original data
+$FSSTRESS_PROG $fsstress_opts -d $srcdir \
+	       -n 500 -p $((5 * LOAD_FACTOR)) >/dev/null 2>&1
+
+# Calculate how many test cycles will be run
+src_size=`du -ks $srcdir | awk '{print $1}'`
+free_size=`df -kP $SCRATCH_MNT | grep -v Filesystem | awk '{print $4}'`
+times=$((free_size / src_size))
+if [ $times -gt $((4 * TIME_FACTOR)) ]; then
+	times=$((4 * TIME_FACTOR))
+fi
+
+echo "= Do dedup and verify ="
+iterate_dedup_verify
+
+# Use the last checksum file to verify the original data
+sed -e s#dup/$times#src#g $md5file$times > $md5file
+echo "= Backwords verify ="
+md5sum -c --quiet $md5file
+
+# Cycle mount so a fresh read from the disk also doesn't show mutations.
+_scratch_cycle_mount
+echo "= Verify after cycle mount ="
+for ((index = 1; index <= times; index++)); do
+	md5sum -c --quiet $md5file$index
+done
+
+status=0
+exit
diff --git a/tests/shared/009.out b/tests/shared/009.out
new file mode 100644
index 00000000..44a78ba3
--- /dev/null
+++ b/tests/shared/009.out
@@ -0,0 +1,4 @@
+QA output created by 009
+= Do dedup and verify =
+= Backwords verify =
+= Verify after cycle mount =
diff --git a/tests/shared/group b/tests/shared/group
index 49ffa8dd..9c484794 100644
--- a/tests/shared/group
+++ b/tests/shared/group
@@ -11,6 +11,7 @@
 006 auto enospc
 007 dangerous_fuzzers
 008 auto stress dedupe
+009 auto stress dedupe
 032 mkfs auto quick
 272 auto enospc rw
 289 auto quick
-- 
2.14.4



* [PATCH v2 3/3] xfstests: dedupe with random io race test
  2018-06-20  8:41 [PATCH v2 1/3] xfstests: dedupe a single big file and verify integrity Zorro Lang
  2018-06-20  8:41 ` [PATCH v2 2/3] xfstests: iterate dedupe integrity test Zorro Lang
@ 2018-06-20  8:41 ` Zorro Lang
  2018-06-20 16:30   ` Darrick J. Wong
  2018-06-20 16:21 ` [PATCH v2 1/3] xfstests: dedupe a single big file and verify integrity Darrick J. Wong
  2 siblings, 1 reply; 8+ messages in thread
From: Zorro Lang @ 2018-06-20  8:41 UTC (permalink / raw)
  To: fstests; +Cc: linux-xfs

Run several duperemove processes and fsstress on the same directory at
the same time, and make sure the race doesn't break the fs or kernel.
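The shape of the race, as a self-contained sketch (echo/cat loops stand
in for fsstress and duperemove, and the 1-second sleep stands in for the
real 50 * TIME_FACTOR timeout):

```shell
#!/bin/bash
# One writer loop races several scanner loops on the same directory,
# then everything is killed after a timeout.
dir=$(mktemp -d)
(while :; do echo data >> "$dir/f"; done) &              # "fsstress"
pids=$!
for i in 1 2; do
	(while :; do cat "$dir/f" >/dev/null 2>&1; done) &   # "duperemove"
	pids="$pids $!"
done
sleep 1          # let them race
kill $pids
wait $pids 2>/dev/null
echo "race survived"
rm -rf "$dir"
```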

Signed-off-by: Zorro Lang <zlang@redhat.com>
---

V2 made the changes below:
1) Sleep 1 after killing processes
2) Changed SLEEP_TIME to sleep_time
3) Added the case to the stress group

Thanks,
Zorro

 tests/shared/010     | 111 +++++++++++++++++++++++++++++++++++++++++++++++++++
 tests/shared/010.out |   2 +
 tests/shared/group   |   1 +
 3 files changed, 114 insertions(+)
 create mode 100755 tests/shared/010
 create mode 100644 tests/shared/010.out

diff --git a/tests/shared/010 b/tests/shared/010
new file mode 100755
index 00000000..c449c247
--- /dev/null
+++ b/tests/shared/010
@@ -0,0 +1,111 @@
+#! /bin/bash
+# FS QA Test 010
+#
+# Dedupe & random I/O race test: run multi-threaded fsstress and dedupe on
+# the same directory/files
+#
+#-----------------------------------------------------------------------
+# Copyright (c) 2018 Red Hat Inc.  All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it would be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write the Free Software Foundation,
+# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
+#-----------------------------------------------------------------------
+#
+
+seq=`basename $0`
+seqres=$RESULT_DIR/$seq
+echo "QA output created by $seq"
+
+here=`pwd`
+tmp=/tmp/$$
+status=1	# failure is the default!
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+_cleanup()
+{
+	cd /
+	rm -f $tmp.*
+	kill_all_stress
+}
+
+# get standard environment, filters and checks
+. ./common/rc
+. ./common/filter
+. ./common/reflink
+
+# remove previous $seqres.full before test
+rm -f $seqres.full
+
+# real QA test starts here
+
+# duperemove only supports btrfs and xfs (with reflink feature).
+# Add other filesystems if it supports more later.
+_supported_fs xfs btrfs
+_supported_os Linux
+_require_scratch_dedupe
+_require_command "$DUPEREMOVE_PROG" duperemove
+_require_command "$KILLALL_PROG" killall
+
+_scratch_mkfs > $seqres.full 2>&1
+_scratch_mount >> $seqres.full 2>&1
+
+function kill_all_stress()
+{
+	local f=1
+	local d=1
+
+	# kill the bash processes that loop running duperemove
+	if [ -n "$loop_dedup_pid" ]; then
+		kill $loop_dedup_pid > /dev/null 2>&1
+		wait $loop_dedup_pid > /dev/null 2>&1
+		loop_dedup_pid=""
+	fi
+
+	# Make sure all fsstress and duperemove processes get killed
+	while [ $((f + d)) -ne 0 ]; do
+		$KILLALL_PROG -q $FSSTRESS_PROG > /dev/null 2>&1
+		$KILLALL_PROG -q $DUPEREMOVE_PROG > /dev/null 2>&1
+		sleep 1
+		f=`ps -eLf | grep $FSSTRESS_PROG | grep -v "grep" | wc -l`
+		d=`ps -eLf | grep $DUPEREMOVE_PROG | grep -v "grep" | wc -l`
+	done
+}
+
+sleep_time=$((50 * TIME_FACTOR))
+
+# Start fsstress
+fsstress_opts="-r -n 1000 -p $((5 * LOAD_FACTOR))"
+$FSSTRESS_PROG $fsstress_opts -d $SCRATCH_MNT -l 0 >> $seqres.full 2>&1 &
+loop_dedup_pid=""
+# Start several dedupe processes on same directory
+for ((i = 0; i < $((2 * LOAD_FACTOR)); i++)); do
+	while true; do
+		$DUPEREMOVE_PROG -dr --dedupe-options=same $SCRATCH_MNT/ \
+			>>$seqres.full 2>&1
+	done &
+	loop_dedup_pid="$! $loop_dedup_pid"
+done
+
+# End the test after $sleep_time seconds
+sleep $sleep_time
+kill_all_stress
+
+# umount and mount again, verify pagecache contents don't mutate and a fresh
+# read from the disk also doesn't show mutations.
+find $SCRATCH_MNT -type f -exec md5sum {} \; > $tmp.md5sum
+_scratch_cycle_mount
+md5sum -c --quiet $tmp.md5sum
+
+echo "Silence is golden"
+status=0
+exit
diff --git a/tests/shared/010.out b/tests/shared/010.out
new file mode 100644
index 00000000..1d83a8d6
--- /dev/null
+++ b/tests/shared/010.out
@@ -0,0 +1,2 @@
+QA output created by 010
+Silence is golden
diff --git a/tests/shared/group b/tests/shared/group
index 9c484794..094da27d 100644
--- a/tests/shared/group
+++ b/tests/shared/group
@@ -12,6 +12,7 @@
 007 dangerous_fuzzers
 008 auto stress dedupe
 009 auto stress dedupe
+010 auto stress dedupe
 032 mkfs auto quick
 272 auto enospc rw
 289 auto quick
-- 
2.14.4



* Re: [PATCH v2 1/3] xfstests: dedupe a single big file and verify integrity
  2018-06-20  8:41 [PATCH v2 1/3] xfstests: dedupe a single big file and verify integrity Zorro Lang
  2018-06-20  8:41 ` [PATCH v2 2/3] xfstests: iterate dedupe integrity test Zorro Lang
  2018-06-20  8:41 ` [PATCH v2 3/3] xfstests: dedupe with random io race test Zorro Lang
@ 2018-06-20 16:21 ` Darrick J. Wong
  2 siblings, 0 replies; 8+ messages in thread
From: Darrick J. Wong @ 2018-06-20 16:21 UTC (permalink / raw)
  To: Zorro Lang; +Cc: fstests, linux-xfs

On Wed, Jun 20, 2018 at 04:41:12PM +0800, Zorro Lang wrote:
> Duperemove is a tool for finding duplicated extents and submitting
> them for deduplication, and it supports XFS. This case trys to
> verify the integrity of XFS after running duperemove.
> 
> Signed-off-by: Zorro Lang <zlang@redhat.com>
> [...]
> +#-----------------------------------------------------------------------
> +# Copyright (c) 2018 Red Hat Inc.  All Rights Reserved.
> +#
> +# This program is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU General Public License as
> +# published by the Free Software Foundation.
> +#
> +# This program is distributed in the hope that it would be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write the Free Software Foundation,
> +# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA

This probably could be converted to SPDX, though Eryu will be the
decider <cough> about if/when that goes in.

Otherwise looks decent.

Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>

--D

> [...]
> --
> To unsubscribe from this list: send the line "unsubscribe fstests" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: [PATCH v2 2/3] xfstests: iterate dedupe integrity test
  2018-06-20  8:41 ` [PATCH v2 2/3] xfstests: iterate dedupe integrity test Zorro Lang
@ 2018-06-20 16:26   ` Darrick J. Wong
  0 siblings, 0 replies; 8+ messages in thread
From: Darrick J. Wong @ 2018-06-20 16:26 UTC (permalink / raw)
  To: Zorro Lang; +Cc: fstests, linux-xfs

On Wed, Jun 20, 2018 at 04:41:13PM +0800, Zorro Lang wrote:
> This case does dedupe on a dir, then copy the dir to next dir. Dedupe
> the next dir again, then copy this dir to next again, and dedupe
> again ... At the end, verify the data in the last dir is still same
> with the first one.
> 
> Signed-off-by: Zorro Lang <zlang@redhat.com>

Same SPDX comment as before but otherwise looks ok,
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>

--D

> [...]


* Re: [PATCH v2 3/3] xfstests: dedupe with random io race test
  2018-06-20  8:41 ` [PATCH v2 3/3] xfstests: dedupe with random io race test Zorro Lang
@ 2018-06-20 16:30   ` Darrick J. Wong
  2018-06-21  1:58     ` Eryu Guan
  0 siblings, 1 reply; 8+ messages in thread
From: Darrick J. Wong @ 2018-06-20 16:30 UTC (permalink / raw)
  To: Zorro Lang; +Cc: fstests, linux-xfs

On Wed, Jun 20, 2018 at 04:41:14PM +0800, Zorro Lang wrote:
> Run several duperemove processes with fsstress on same directory at
> same time. Make sure the race won't break the fs or kernel.
> 
> Signed-off-by: Zorro Lang <zlang@redhat.com>
> [...]
> +# Start fsstress
> +fsstress_opts="-r -n 1000 -p $((5 * LOAD_FACTOR))"
> +$FSSTRESS_PROG $fsstress_opts -d $SCRATCH_MNT -l 0 >> $seqres.full 2>&1 &
> +loop_dedup_pid=""
> +# Start several dedupe processes on same directory
> +for ((i = 0; i < $((2 * LOAD_FACTOR)); i++)); do
> +	while true; do
> +		$DUPEREMOVE_PROG -dr --dedupe-options=same $SCRATCH_MNT/ \
> +			>>$seqres.full 2>&1

/me wonders why not just touch $TEST_DIR/run, have this loop do:

while test -e $TEST_DIR/run; do
	duperemove...
done

and then rm -f $TEST_DIR/run to end the loop?  Then you don't need the
complex machinery to shut down the bash loop and kill the fsstress and
duperemove processes.
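Roughly, as an untested sketch (names made up; the ':' stands in for the
duperemove invocation):

```shell
#!/bin/bash
# Control-file loop: runs while the file exists; removing the file
# ends the loop cleanly, no kill/killall machinery required.
ctl=$(mktemp)                 # stands in for $TEST_DIR/run
while test -e "$ctl"; do
	:                     # duperemove invocation would run here
	sleep 0.1
done &
loop_pid=$!
sleep 0.5                     # the test body runs for its allotted time
rm -f "$ctl"                  # ...then removing the file stops the loop
wait $loop_pid && echo "loop exited cleanly"
```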

Otherwise looks decent,

--D

> +	done &
> +	loop_dedup_pid="$! $loop_dedup_pid"
> +done
> +
> +# End the test after $sleep_time seconds
> +sleep $sleep_time
> +kill_all_stress
> +
> +# Unmount and mount again to verify that the pagecache contents don't mutate
> +# and that a fresh read from disk doesn't show mutations either.
> +find $SCRATCH_MNT -type f -exec md5sum {} \; > $TEST_DIR/${seq}md5.sum
> +_scratch_cycle_mount
> +md5sum -c --quiet $TEST_DIR/${seq}md5.sum
> +
> +echo "Silence is golden"
> +status=0
> +exit
> diff --git a/tests/shared/010.out b/tests/shared/010.out
> new file mode 100644
> index 00000000..1d83a8d6
> --- /dev/null
> +++ b/tests/shared/010.out
> @@ -0,0 +1,2 @@
> +QA output created by 010
> +Silence is golden
> diff --git a/tests/shared/group b/tests/shared/group
> index 9c484794..094da27d 100644
> --- a/tests/shared/group
> +++ b/tests/shared/group
> @@ -12,6 +12,7 @@
>  007 dangerous_fuzzers
>  008 auto stress dedupe
>  009 auto stress dedupe
> +010 auto stress dedupe
>  032 mkfs auto quick
>  272 auto enospc rw
>  289 auto quick
> -- 
> 2.14.4
> 
> --
> To unsubscribe from this list: send the line "unsubscribe fstests" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
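
The quoted test's final integrity check (checksum every file, cycle the mount,
then re-verify) is a general record-then-verify pattern. A minimal standalone
sketch, with illustrative paths and the remount step elided:

```shell
#!/bin/bash
# Sketch only: record md5sums of every file, then verify them later.
# In the real test, a umount/mount cycle sits between the two steps.
workdir=$(mktemp -d)
echo "hello" > "$workdir/a"
echo "world" > "$workdir/b"

sums=$(mktemp)
find "$workdir" -type f -exec md5sum {} \; > "$sums"

# ... _scratch_cycle_mount would happen here in the real test ...

# --quiet prints nothing on success, so any output indicates corruption
md5sum -c --quiet "$sums" && echo "contents verified"
```

Storing the sum file outside the filesystem under test (here, under /tmp)
matters: the verification data must survive the remount being verified.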


* Re: [PATCH v2 3/3] xfstests: dedupe with random io race test
  2018-06-20 16:30   ` Darrick J. Wong
@ 2018-06-21  1:58     ` Eryu Guan
  2018-06-21  2:20       ` Zorro Lang
  0 siblings, 1 reply; 8+ messages in thread
From: Eryu Guan @ 2018-06-21  1:58 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: Zorro Lang, fstests, linux-xfs

On Wed, Jun 20, 2018 at 09:30:01AM -0700, Darrick J. Wong wrote:
> On Wed, Jun 20, 2018 at 04:41:14PM +0800, Zorro Lang wrote:
> > Run several duperemove processes together with fsstress on the same
> > directory at the same time. Make sure the race doesn't break the fs or
> > kernel.
> > 
> > Signed-off-by: Zorro Lang <zlang@redhat.com>
> > ---
> > 
> > V2 did below changes:
> > 1) do sleep 1 after kill processes
> > 2) change SLEEP_TIME to sleep_time
> > 3) add the case to stress group
> > 
> > Thanks,
> > Zorro
> > 
> >  tests/shared/010     | 111 +++++++++++++++++++++++++++++++++++++++++++++++++++
> >  tests/shared/010.out |   2 +
> >  tests/shared/group   |   1 +
> >  3 files changed, 114 insertions(+)
> >  create mode 100755 tests/shared/010
> >  create mode 100644 tests/shared/010.out
> > 
> > diff --git a/tests/shared/010 b/tests/shared/010
> > new file mode 100755
> > index 00000000..c449c247
> > --- /dev/null
> > +++ b/tests/shared/010
> > @@ -0,0 +1,111 @@
> > +#! /bin/bash
> > +# FS QA Test 010
> > +#
> > +# Dedupe & random I/O race test: run multi-threaded fsstress and dedupe
> > +# on the same directories/files
> > +#
> > +#-----------------------------------------------------------------------
> > +# Copyright (c) 2018 Red Hat Inc.  All Rights Reserved.
> > +#
> > +# This program is free software; you can redistribute it and/or
> > +# modify it under the terms of the GNU General Public License as
> > +# published by the Free Software Foundation.
> > +#
> > +# This program is distributed in the hope that it would be useful,
> > +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> > +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > +# GNU General Public License for more details.
> > +#
> > +# You should have received a copy of the GNU General Public License
> > +# along with this program; if not, write the Free Software Foundation,
> > +# Inc.,  51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
> > +#-----------------------------------------------------------------------
> > +#
> > +
> > +seq=`basename $0`
> > +seqres=$RESULT_DIR/$seq
> > +echo "QA output created by $seq"
> > +
> > +here=`pwd`
> > +tmp=/tmp/$$
> > +status=1	# failure is the default!
> > +trap "_cleanup; exit \$status" 0 1 2 3 15
> > +
> > +_cleanup()
> > +{
> > +	cd /
> > +	rm -f $tmp.*
> > +	kill_all_stress
> > +}
> > +
> > +# get standard environment, filters and checks
> > +. ./common/rc
> > +. ./common/filter
> > +. ./common/reflink
> > +
> > +# remove previous $seqres.full before test
> > +rm -f $seqres.full
> > +
> > +# real QA test starts here
> > +
> > +# duperemove only supports btrfs and xfs (with reflink feature).
> > +# Add other filesystems here once duperemove supports them.
> > +_supported_fs xfs btrfs
> > +_supported_os Linux
> > +_require_scratch_dedupe
> > +_require_command "$DUPEREMOVE_PROG" duperemove
> > +_require_command "$KILLALL_PROG" killall
> > +
> > +_scratch_mkfs > $seqres.full 2>&1
> > +_scratch_mount >> $seqres.full 2>&1
> > +
> > +function kill_all_stress()
> > +{
> > +	local f=1
> > +	local d=1
> > +
> > +	# Kill the bash processes that run duperemove in a loop
> > +	if [ -n "$loop_dedup_pid" ]; then
> > +		kill $loop_dedup_pid > /dev/null 2>&1
> > +		wait $loop_dedup_pid > /dev/null 2>&1
> > +		loop_dedup_pid=""
> > +	fi
> > +
> > +	# Make sure all fsstress and duperemove processes get killed
> > +	while [ $((f + d)) -ne 0 ]; do
> > +		$KILLALL_PROG -q $FSSTRESS_PROG > /dev/null 2>&1
> > +		$KILLALL_PROG -q $DUPEREMOVE_PROG > /dev/null 2>&1
> > +		sleep 1
> > +		f=`ps -eLf | grep $FSSTRESS_PROG | grep -v "grep" | wc -l`
> > +		d=`ps -eLf | grep $DUPEREMOVE_PROG | grep -v "grep" | wc -l`
> > +	done
> > +}
> > +
> > +sleep_time=$((50 * TIME_FACTOR))
> > +
> > +# Start fsstress
> > +fsstress_opts="-r -n 1000 -p $((5 * LOAD_FACTOR))"
> > +$FSSTRESS_PROG $fsstress_opts -d $SCRATCH_MNT -l 0 >> $seqres.full 2>&1 &
> > +loop_dedup_pid=""
> > +# Start several dedupe processes on the same directory
> > +for ((i = 0; i < $((2 * LOAD_FACTOR)); i++)); do
> > +	while true; do
> > +		$DUPEREMOVE_PROG -dr --dedupe-options=same $SCRATCH_MNT/ \
> > +			>>$seqres.full 2>&1
> 
> /me wonders why not just touch $TEST_DIR/run, have this loop do:
> 
> while test -e $TEST_DIR/run; do
> 	duperemove...
> done
> 
> and then rm -f $TEST_DIR/run to end the loop?  Then you don't need the
> complex machinery to shut down the bash loop and kill the fsstress and
> duperemove processes.

Yeah, this looks cleaner to me.

Zorro, could you please take a look and update the test as Darrick
> suggested? Also, you could update the fstests repo and use the new 'new'
> script to generate a test template with the correct SPDX tag :)

> 
> Otherwise looks decent,

Thanks for reviewing!

Eryu



* Re: [PATCH v2 3/3] xfstests: dedupe with random io race test
  2018-06-21  1:58     ` Eryu Guan
@ 2018-06-21  2:20       ` Zorro Lang
  0 siblings, 0 replies; 8+ messages in thread
From: Zorro Lang @ 2018-06-21  2:20 UTC (permalink / raw)
  To: Eryu Guan; +Cc: Darrick J. Wong, fstests, linux-xfs

On Thu, Jun 21, 2018 at 09:58:43AM +0800, Eryu Guan wrote:
> On Wed, Jun 20, 2018 at 09:30:01AM -0700, Darrick J. Wong wrote:
> > On Wed, Jun 20, 2018 at 04:41:14PM +0800, Zorro Lang wrote:
> > > [snip]
> > 
> > /me wonders why not just touch $TEST_DIR/run, have this loop do:
> > 
> > while test -e $TEST_DIR/run; do
> > 	duperemove...
> > done

Thanks for your suggestion :)

> > 
> > and then rm -f $TEST_DIR/run to end the loop?  Then you don't need the
> > complex machinery to shut down the bash loop and kill the fsstress and
> > duperemove processes.
> 
> Yeah, this looks cleaner to me.
> 
> Zorro, could you please take a look and update the test as Darrick
> suggested? Also, you could update the fstests repo and use new 'new'
> script to generate new test template which has correct SPDX tag :)

Sure, I'll do that. /me is going to google what SPDX is ...

Thanks,
Zorro



Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-20  8:41 [PATCH v2 1/3] xfstests: dedupe a single big file and verify integrity Zorro Lang
2018-06-20  8:41 ` [PATCH v2 2/3] xfstests: iterate dedupe integrity test Zorro Lang
2018-06-20 16:26   ` Darrick J. Wong
2018-06-20  8:41 ` [PATCH v2 3/3] xfstests: dedupe with random io race test Zorro Lang
2018-06-20 16:30   ` Darrick J. Wong
2018-06-21  1:58     ` Eryu Guan
2018-06-21  2:20       ` Zorro Lang
2018-06-20 16:21 ` [PATCH v2 1/3] xfstests: dedupe a single big file and verify integrity Darrick J. Wong
