From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, stefanha@redhat.com, qemu-devel@nongnu.org
Subject: [Qemu-devel] [PULL 53/58] iotests: Improve _filter_qemu_img_map
Date: Thu, 11 May 2017 16:32:56 +0200	[thread overview]
Message-ID: <1494513181-7900-54-git-send-email-kwolf@redhat.com> (raw)
In-Reply-To: <1494513181-7900-1-git-send-email-kwolf@redhat.com>

From: Eric Blake <eblake@redhat.com>

Although _filter_qemu_img_map documents that it scrubs offsets, it
was only doing so for human-mode output.  Of the existing tests using
the filter (97, 122, 150, 154, 176), two are affected, but requiring
no particular mapping does not hurt their validity (another test, 66,
uses offsets but intentionally does not pass through
_filter_qemu_img_map, because it checks that offsets are unchanged
before and after an operation).

Another justification for this patch is that it will allow a future
patch to use 'qemu-img map --output=json' to check the status of
preallocated zero clusters without regard to the mapping (since the
qcow2 mapping can be very sensitive to the chosen cluster size when
preallocation is not in use).

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20170507000552.20847-9-eblake@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 tests/qemu-iotests/122.out       | 16 ++++++++--------
 tests/qemu-iotests/154.out       | 30 +++++++++++++++---------------
 tests/qemu-iotests/common.filter |  4 +++-
 3 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/tests/qemu-iotests/122.out b/tests/qemu-iotests/122.out
index 9317d80..47d8656 100644
--- a/tests/qemu-iotests/122.out
+++ b/tests/qemu-iotests/122.out
@@ -112,7 +112,7 @@ read 3145728/3145728 bytes at offset 0
 3 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 read 63963136/63963136 bytes at offset 3145728
 61 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-[{ "start": 0, "length": 67108864, "depth": 0, "zero": false, "data": true, "offset": 327680}]
+[{ "start": 0, "length": 67108864, "depth": 0, "zero": false, "data": true, "offset": OFFSET}]
 
 convert -c -S 0:
 read 3145728/3145728 bytes at offset 0
@@ -134,7 +134,7 @@ read 30408704/30408704 bytes at offset 3145728
 29 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 read 33554432/33554432 bytes at offset 33554432
 32 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-[{ "start": 0, "length": 67108864, "depth": 0, "zero": false, "data": true, "offset": 327680}]
+[{ "start": 0, "length": 67108864, "depth": 0, "zero": false, "data": true, "offset": OFFSET}]
 
 convert -c -S 0 with source backing file:
 read 3145728/3145728 bytes at offset 0
@@ -152,7 +152,7 @@ read 30408704/30408704 bytes at offset 3145728
 29 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 read 33554432/33554432 bytes at offset 33554432
 32 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-[{ "start": 0, "length": 67108864, "depth": 0, "zero": false, "data": true, "offset": 327680}]
+[{ "start": 0, "length": 67108864, "depth": 0, "zero": false, "data": true, "offset": OFFSET}]
 
 convert -c -S 0 -B ...
 read 3145728/3145728 bytes at offset 0
@@ -176,11 +176,11 @@ wrote 1024/1024 bytes at offset 17408
 1 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 
 convert -S 4k
-[{ "start": 0, "length": 1024, "depth": 0, "zero": false, "data": true, "offset": 8192},
+[{ "start": 0, "length": 1024, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 1024, "length": 7168, "depth": 0, "zero": true, "data": false},
-{ "start": 8192, "length": 1024, "depth": 0, "zero": false, "data": true, "offset": 9216},
+{ "start": 8192, "length": 1024, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 9216, "length": 8192, "depth": 0, "zero": true, "data": false},
-{ "start": 17408, "length": 1024, "depth": 0, "zero": false, "data": true, "offset": 10240},
+{ "start": 17408, "length": 1024, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 18432, "length": 67090432, "depth": 0, "zero": true, "data": false}]
 
 convert -c -S 4k
@@ -192,9 +192,9 @@ convert -c -S 4k
 { "start": 18432, "length": 67090432, "depth": 0, "zero": true, "data": false}]
 
 convert -S 8k
-[{ "start": 0, "length": 9216, "depth": 0, "zero": false, "data": true, "offset": 8192},
+[{ "start": 0, "length": 9216, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 9216, "length": 8192, "depth": 0, "zero": true, "data": false},
-{ "start": 17408, "length": 1024, "depth": 0, "zero": false, "data": true, "offset": 17408},
+{ "start": 17408, "length": 1024, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 18432, "length": 67090432, "depth": 0, "zero": true, "data": false}]
 
 convert -c -S 8k
diff --git a/tests/qemu-iotests/154.out b/tests/qemu-iotests/154.out
index da9eabd..d3b68e7 100644
--- a/tests/qemu-iotests/154.out
+++ b/tests/qemu-iotests/154.out
@@ -42,9 +42,9 @@ read 1024/1024 bytes at offset 65536
 read 2048/2048 bytes at offset 67584
 2 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 [{ "start": 0, "length": 32768, "depth": 1, "zero": true, "data": false},
-{ "start": 32768, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 20480},
+{ "start": 32768, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 36864, "length": 28672, "depth": 1, "zero": true, "data": false},
-{ "start": 65536, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 24576},
+{ "start": 65536, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 69632, "length": 134148096, "depth": 1, "zero": true, "data": false}]
 
 == backing file contains non-zero data after write_zeroes ==
@@ -69,9 +69,9 @@ read 1024/1024 bytes at offset 44032
 read 3072/3072 bytes at offset 40960
 3 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 [{ "start": 0, "length": 32768, "depth": 1, "zero": true, "data": false},
-{ "start": 32768, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 20480},
+{ "start": 32768, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 36864, "length": 4096, "depth": 1, "zero": true, "data": false},
-{ "start": 40960, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 24576},
+{ "start": 40960, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 45056, "length": 134172672, "depth": 1, "zero": true, "data": false}]
 
 == write_zeroes covers non-zero data ==
@@ -143,13 +143,13 @@ read 1024/1024 bytes at offset 67584
 read 5120/5120 bytes at offset 68608
 5 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 [{ "start": 0, "length": 32768, "depth": 1, "zero": true, "data": false},
-{ "start": 32768, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 20480},
+{ "start": 32768, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 36864, "length": 4096, "depth": 0, "zero": true, "data": false},
 { "start": 40960, "length": 8192, "depth": 1, "zero": true, "data": false},
-{ "start": 49152, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 24576},
+{ "start": 49152, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 53248, "length": 4096, "depth": 0, "zero": true, "data": false},
 { "start": 57344, "length": 8192, "depth": 1, "zero": true, "data": false},
-{ "start": 65536, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 28672},
+{ "start": 65536, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 69632, "length": 4096, "depth": 0, "zero": true, "data": false},
 { "start": 73728, "length": 134144000, "depth": 1, "zero": true, "data": false}]
 
@@ -186,13 +186,13 @@ read 1024/1024 bytes at offset 72704
 1 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 [{ "start": 0, "length": 32768, "depth": 1, "zero": true, "data": false},
 { "start": 32768, "length": 4096, "depth": 0, "zero": true, "data": false},
-{ "start": 36864, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 20480},
+{ "start": 36864, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 40960, "length": 8192, "depth": 1, "zero": true, "data": false},
 { "start": 49152, "length": 4096, "depth": 0, "zero": true, "data": false},
-{ "start": 53248, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 24576},
+{ "start": 53248, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 57344, "length": 8192, "depth": 1, "zero": true, "data": false},
 { "start": 65536, "length": 4096, "depth": 0, "zero": true, "data": false},
-{ "start": 69632, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 28672},
+{ "start": 69632, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 73728, "length": 134144000, "depth": 1, "zero": true, "data": false}]
 
 == spanning two clusters, partially overwriting backing file ==
@@ -212,7 +212,7 @@ read 1024/1024 bytes at offset 5120
 1 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 read 2048/2048 bytes at offset 6144
 2 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-[{ "start": 0, "length": 8192, "depth": 0, "zero": false, "data": true, "offset": 20480},
+[{ "start": 0, "length": 8192, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 8192, "length": 134209536, "depth": 1, "zero": true, "data": false}]
 
 == spanning multiple clusters, non-zero in first cluster ==
@@ -227,7 +227,7 @@ read 2048/2048 bytes at offset 65536
 read 10240/10240 bytes at offset 67584
 10 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 [{ "start": 0, "length": 65536, "depth": 1, "zero": true, "data": false},
-{ "start": 65536, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 20480},
+{ "start": 65536, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 69632, "length": 8192, "depth": 0, "zero": true, "data": false},
 { "start": 77824, "length": 134139904, "depth": 1, "zero": true, "data": false}]
 
@@ -257,7 +257,7 @@ read 2048/2048 bytes at offset 75776
 2 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 [{ "start": 0, "length": 65536, "depth": 1, "zero": true, "data": false},
 { "start": 65536, "length": 8192, "depth": 0, "zero": true, "data": false},
-{ "start": 73728, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 20480},
+{ "start": 73728, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 77824, "length": 134139904, "depth": 1, "zero": true, "data": false}]
 
 == spanning multiple clusters, partially overwriting backing file ==
@@ -278,8 +278,8 @@ read 2048/2048 bytes at offset 74752
 read 1024/1024 bytes at offset 76800
 1 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
 [{ "start": 0, "length": 65536, "depth": 1, "zero": true, "data": false},
-{ "start": 65536, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 20480},
+{ "start": 65536, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 69632, "length": 4096, "depth": 0, "zero": true, "data": false},
-{ "start": 73728, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": 24576},
+{ "start": 73728, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
 { "start": 77824, "length": 134139904, "depth": 1, "zero": true, "data": false}]
 *** done
diff --git a/tests/qemu-iotests/common.filter b/tests/qemu-iotests/common.filter
index f58548d..2f595b2 100644
--- a/tests/qemu-iotests/common.filter
+++ b/tests/qemu-iotests/common.filter
@@ -152,10 +152,12 @@ _filter_img_info()
         -e "/log_size: [0-9]\\+/d"
 }
 
-# filter out offsets and file names from qemu-img map
+# filter out offsets and file names from qemu-img map; good for both
+# human and json output
 _filter_qemu_img_map()
 {
     sed -e 's/\([0-9a-fx]* *[0-9a-fx]* *\)[0-9a-fx]* */\1/g' \
+        -e 's/"offset": [0-9]\+/"offset": OFFSET/g' \
         -e 's/Mapped to *//' | _filter_testdir | _filter_imgfmt
 }
 
-- 
1.8.3.1
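
For context, the pre-existing human-mode part of the filter (the
first sed expression, which this patch leaves untouched) can be
sketched on an invented human-mode map line; it keeps the first two
numeric columns and drops the third (the host offset):

```shell
# Invented human-mode 'qemu-img map' line (columns: Offset, Length,
# Mapped to, File).  The first sed expression of _filter_qemu_img_map
# captures the leading two hex columns and deletes the third, so the
# host offset never reaches the test's reference output.
printf '0x0             0x20000         0x50000         disk.img\n' |
sed -e 's/\([0-9a-fx]* *[0-9a-fx]* *\)[0-9a-fx]* */\1/g'
```

The guest offset and length survive; only the "Mapped to" offset,
which depends on allocation order, is scrubbed.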
