linux-raid.vger.kernel.org archive mirror
* [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2
@ 2024-03-01  9:56 Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 1/9] md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume Yu Kuai
                   ` (10 more replies)
  0 siblings, 11 replies; 19+ messages in thread
From: Yu Kuai @ 2024-03-01  9:56 UTC (permalink / raw)
  To: zkabelac, xni, agk, snitzer, mpatocka, dm-devel, song, yukuai3,
	heinzm, neilb, jbrassow
  Cc: linux-kernel, linux-raid, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

link to part1: https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/

part1 contains fixes for deadlocks when stopping the sync_thread.

This set contains fixes for:
 - reshape starting unexpectedly, causing data corruption: patches 1, 5, 6;
 - a deadlock when reshape runs concurrently with IO: patch 8;
 - a lockdep warning: patch 9;

I've been running the lvm2 tests for a few rounds now with the following script:

for t in `ls test/shell`; do
        if cat test/shell/$t | grep raid &> /dev/null; then
                make check T=shell/$t
        fi
done

There are no deadlocks and no fs corruption now; however, there are still
four failed tests:

###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
###       failed: [ndev-vanilla] shell/lvextend-raid.sh

And the failure reason is the same for all of them:

## ERROR: The test started dmeventd (147856) unexpectedly

I have no clue yet, and it seems other folks don't have this issue.

Yu Kuai (9):
  md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
  md: export helpers to stop sync_thread
  md: export helper md_is_rdwr()
  md: add a new helper reshape_interrupted()
  dm-raid: really freeze sync_thread during suspend
  md/dm-raid: don't call md_reap_sync_thread() directly
  dm-raid: add a new helper prepare_suspend() in md_personality
  dm-raid456, md/raid456: fix a deadlock for dm-raid456 while IO is
    concurrent with reshape
  dm-raid: fix lockdep warning in "pers->hot_add_disk"

 drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
 drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
 drivers/md/md.h      | 38 +++++++++++++++++-
 drivers/md/raid5.c   | 32 ++++++++++++++-
 4 files changed, 196 insertions(+), 40 deletions(-)

-- 
2.39.2


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH -next 1/9] md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
  2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
@ 2024-03-01  9:56 ` Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 2/9] md: export helpers to stop sync_thread Yu Kuai
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Yu Kuai @ 2024-03-01  9:56 UTC (permalink / raw)
  To: zkabelac, xni, agk, snitzer, mpatocka, dm-devel, song, yukuai3,
	heinzm, neilb, jbrassow
  Cc: linux-kernel, linux-raid, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

After commit 9dbd1aa3a81c ("dm raid: add reshaping support to the
target"), raid_ctr() sets MD_RECOVERY_FROZEN before md_run() and expects
the array to stay frozen until resume. However, md_run() clears the flag
by setting mddev->recovery to 0.

Before commit 1baae052cccd ("md: Don't ignore suspended array in
md_check_recovery()"), dm-raid actually relied on the array being
suspended to prevent a new sync_thread from starting.

Fix this problem by keeping 'MD_RECOVERY_FROZEN' for dm-raid in
md_run().
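
To illustrate (a sketch, not part of the patch): mddev->recovery is a
bitmap of MD_RECOVERY_* flags, so the old unconditional assignment in
md_run() wiped the bit that raid_ctr() had just set:

	/* raid_ctr(), before calling md_run(): */
	set_bit(MD_RECOVERY_FROZEN, &rs->md.recovery);

	/* old md_run(): zeroing the bitmap drops MD_RECOVERY_FROZEN too */
	mddev->recovery = 0;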

Fixes: 1baae052cccd ("md: Don't ignore suspended array in md_check_recovery()")
Fixes: 9dbd1aa3a81c ("dm raid: add reshaping support to the target")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 3bd42d76e95f..7156e765d027 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -6032,7 +6032,10 @@ int md_run(struct mddev *mddev)
 			pr_warn("True protection against single-disk failure might be compromised.\n");
 	}
 
-	mddev->recovery = 0;
+	/* dm-raid expects sync_thread to be frozen until resume */
+	if (mddev->gendisk)
+		mddev->recovery = 0;
+
 	/* may be over-ridden by personality */
 	mddev->resync_max_sectors = mddev->dev_sectors;
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH -next 2/9] md: export helpers to stop sync_thread
  2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 1/9] md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume Yu Kuai
@ 2024-03-01  9:56 ` Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 3/9] md: export helper md_is_rdwr() Yu Kuai
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Yu Kuai @ 2024-03-01  9:56 UTC (permalink / raw)
  To: zkabelac, xni, agk, snitzer, mpatocka, dm-devel, song, yukuai3,
	heinzm, neilb, jbrassow
  Cc: linux-kernel, linux-raid, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

The new helpers will be used by dm-raid in later patches to fix
regressions and to avoid calling md_reap_sync_thread() directly.
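
A sketch of the intended dm-raid usage (see patches 5 and 6; not part
of this patch):

	mddev_lock_nointr(mddev);
	/* set MD_RECOVERY_FROZEN and stop a running sync_thread */
	md_frozen_sync_thread(mddev);
	...
	/* clear MD_RECOVERY_FROZEN and let the daemon thread restart it */
	md_unfrozen_sync_thread(mddev);
	mddev_unlock(mddev);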

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 29 +++++++++++++++++++++++++++++
 drivers/md/md.h |  3 +++
 2 files changed, 32 insertions(+)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 7156e765d027..5f6496cf43f5 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -4912,6 +4912,35 @@ static void stop_sync_thread(struct mddev *mddev, bool locked, bool check_seq)
 		mddev_lock_nointr(mddev);
 }
 
+void md_idle_sync_thread(struct mddev *mddev)
+{
+	lockdep_assert_held(&mddev->reconfig_mutex);
+
+	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+	stop_sync_thread(mddev, true, true);
+}
+EXPORT_SYMBOL_GPL(md_idle_sync_thread);
+
+void md_frozen_sync_thread(struct mddev *mddev)
+{
+	lockdep_assert_held(&mddev->reconfig_mutex);
+
+	set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+	stop_sync_thread(mddev, true, false);
+}
+EXPORT_SYMBOL_GPL(md_frozen_sync_thread);
+
+void md_unfrozen_sync_thread(struct mddev *mddev)
+{
+	lockdep_assert_held(&mddev->reconfig_mutex);
+
+	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+	md_wakeup_thread(mddev->thread);
+	sysfs_notify_dirent_safe(mddev->sysfs_action);
+}
+EXPORT_SYMBOL_GPL(md_unfrozen_sync_thread);
+
 static void idle_sync_thread(struct mddev *mddev)
 {
 	mutex_lock(&mddev->sync_mutex);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index a079ee9b6190..a6d33f10b107 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -781,6 +781,9 @@ extern void md_rdev_clear(struct md_rdev *rdev);
 extern void md_handle_request(struct mddev *mddev, struct bio *bio);
 extern int mddev_suspend(struct mddev *mddev, bool interruptible);
 extern void mddev_resume(struct mddev *mddev);
+extern void md_idle_sync_thread(struct mddev *mddev);
+extern void md_frozen_sync_thread(struct mddev *mddev);
+extern void md_unfrozen_sync_thread(struct mddev *mddev);
 
 extern void md_reload_sb(struct mddev *mddev, int raid_disk);
 extern void md_update_sb(struct mddev *mddev, int force);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH -next 3/9] md: export helper md_is_rdwr()
  2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 1/9] md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 2/9] md: export helpers to stop sync_thread Yu Kuai
@ 2024-03-01  9:56 ` Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 4/9] md: add a new helper reshape_interrupted() Yu Kuai
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Yu Kuai @ 2024-03-01  9:56 UTC (permalink / raw)
  To: zkabelac, xni, agk, snitzer, mpatocka, dm-devel, song, yukuai3,
	heinzm, neilb, jbrassow
  Cc: linux-kernel, linux-raid, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

There are no functional changes for now; this prepares for fixing a
deadlock in dm-raid456.
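
For reference (sketch): the helper just compares the tri-state
mddev->ro against MD_RDWR, so code outside md.c can now write:

	if (!md_is_rdwr(mddev))
		return;	/* not read-write: MD_RDONLY or MD_AUTO_READ */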

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.c | 12 ------------
 drivers/md/md.h | 12 ++++++++++++
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 5f6496cf43f5..2fe8b937998b 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -99,18 +99,6 @@ static void mddev_detach(struct mddev *mddev);
 static void export_rdev(struct md_rdev *rdev, struct mddev *mddev);
 static void md_wakeup_thread_directly(struct md_thread __rcu *thread);
 
-enum md_ro_state {
-	MD_RDWR,
-	MD_RDONLY,
-	MD_AUTO_READ,
-	MD_MAX_STATE
-};
-
-static bool md_is_rdwr(struct mddev *mddev)
-{
-	return (mddev->ro == MD_RDWR);
-}
-
 /*
  * Default number of read corrections we'll attempt on an rdev
  * before ejecting it from the array. We divide the read error
diff --git a/drivers/md/md.h b/drivers/md/md.h
index a6d33f10b107..09368517cc6c 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -558,6 +558,18 @@ enum recovery_flags {
 	MD_RESYNCING_REMOTE,	/* remote node is running resync thread */
 };
 
+enum md_ro_state {
+	MD_RDWR,
+	MD_RDONLY,
+	MD_AUTO_READ,
+	MD_MAX_STATE
+};
+
+static inline bool md_is_rdwr(struct mddev *mddev)
+{
+	return (mddev->ro == MD_RDWR);
+}
+
 static inline int __must_check mddev_lock(struct mddev *mddev)
 {
 	return mutex_lock_interruptible(&mddev->reconfig_mutex);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH -next 4/9] md: add a new helper reshape_interrupted()
  2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
                   ` (2 preceding siblings ...)
  2024-03-01  9:56 ` [PATCH -next 3/9] md: export helper md_is_rdwr() Yu Kuai
@ 2024-03-01  9:56 ` Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 5/9] dm-raid: really freeze sync_thread during suspend Yu Kuai
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Yu Kuai @ 2024-03-01  9:56 UTC (permalink / raw)
  To: zkabelac, xni, agk, snitzer, mpatocka, dm-devel, song, yukuai3,
	heinzm, neilb, jbrassow
  Cc: linux-kernel, linux-raid, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

The helper will be used by dm-raid456 later to detect the case where
reshape can't make progress.
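
A sketch of the intended call site (this is what patch 7 adds to
dm-raid's presuspend path):

	if (!reshape_interrupted(mddev))
		return;

	if (mddev->pers && mddev->pers->prepare_suspend)
		mddev->pers->prepare_suspend(mddev);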

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/drivers/md/md.h b/drivers/md/md.h
index 09368517cc6c..b961c1b4ead7 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -570,6 +570,25 @@ static inline bool md_is_rdwr(struct mddev *mddev)
 	return (mddev->ro == MD_RDWR);
 }
 
+static inline bool reshape_interrupted(struct mddev *mddev)
+{
+	/* reshape never start */
+	/* reshape never started */
+		return false;
+
+	/* interrupted */
+	if (!test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
+		return true;
+
+	/* running reshape will be interrupted soon. */
+	if (test_bit(MD_RECOVERY_WAIT, &mddev->recovery) ||
+	    test_bit(MD_RECOVERY_INTR, &mddev->recovery) ||
+	    test_bit(MD_RECOVERY_FROZEN, &mddev->recovery))
+		return true;
+
+	return false;
+}
+
 static inline int __must_check mddev_lock(struct mddev *mddev)
 {
 	return mutex_lock_interruptible(&mddev->reconfig_mutex);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH -next 5/9] dm-raid: really freeze sync_thread during suspend
  2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
                   ` (3 preceding siblings ...)
  2024-03-01  9:56 ` [PATCH -next 4/9] md: add a new helper reshape_interrupted() Yu Kuai
@ 2024-03-01  9:56 ` Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 6/9] md/dm-raid: don't call md_reap_sync_thread() directly Yu Kuai
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Yu Kuai @ 2024-03-01  9:56 UTC (permalink / raw)
  To: zkabelac, xni, agk, snitzer, mpatocka, dm-devel, song, yukuai3,
	heinzm, neilb, jbrassow
  Cc: linux-kernel, linux-raid, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

1) Commit f52f5c71f3d4 ("md: fix stopping sync thread") removed
   MD_RECOVERY_FROZEN from __md_stop_writes() without realizing that
   dm-raid relies on __md_stop_writes() to freeze the sync_thread
   indirectly. Fix this problem by setting MD_RECOVERY_FROZEN in
   md_stop_writes(), and since stop_sync_thread() is only used for
   dm-raid in this case, also move stop_sync_thread() into
   md_stop_writes().
2) The flag MD_RECOVERY_FROZEN doesn't mean that the sync_thread is
   frozen; it only prevents a new sync_thread from starting and can't
   stop one that is already running. In order to freeze the
   sync_thread, stop_sync_thread() should be called after setting the
   flag.
3) The flag MD_RECOVERY_FROZEN doesn't mean that writes are stopped, so
   using it as the condition for md_stop_writes() in raid_postsuspend()
   doesn't look correct. Since a reentrant stop_sync_thread() does
   nothing, always call md_stop_writes() in raid_postsuspend().
4) raid_message() can set/clear the flag MD_RECOVERY_FROZEN at any
   time, and if MD_RECOVERY_FROZEN is cleared while the array is
   suspended, a new sync_thread can start unexpectedly. Fix this by
   disallowing raid_message() to change the sync_thread status during
   suspend.

Note that after commit f52f5c71f3d4 ("md: fix stopping sync thread"),
the test shell/lvconvert-raid-reshape.sh started to hang in
stop_sync_thread(). With the previous fixes the test no longer hangs
there; however, it still fails and complains that ext4 is corrupted.
With this patch, the test no longer hangs in stop_sync_thread() or
fails due to ext4 corruption. However, there is still a deadlock
related to dm-raid456 that will be fixed in the following patches.
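
Point 2) illustrated (a sketch of the md_stop_writes() hunk below): to
really freeze the sync_thread, set the flag and then stop the thread:

	set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
	stop_sync_thread(mddev, true, false);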

Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Closes: https://lore.kernel.org/all/e5e8afe2-e9a8-49a2-5ab0-958d4065c55e@redhat.com/
Fixes: 1af2048a3e87 ("dm raid: fix deadlock caused by premature md_stop_writes()")
Fixes: 9dbd1aa3a81c ("dm raid: add reshaping support to the target")
Fixes: f52f5c71f3d4 ("md: fix stopping sync thread")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/dm-raid.c | 25 +++++++++++++++----------
 drivers/md/md.c      |  3 ++-
 2 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 6bb1765be1e5..ac8b37fcf76f 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -3240,11 +3240,12 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	rs->md.ro = 1;
 	rs->md.in_sync = 1;
 
-	/* Keep array frozen until resume. */
-	set_bit(MD_RECOVERY_FROZEN, &rs->md.recovery);
-
 	/* Has to be held on running the array */
 	mddev_suspend_and_lock_nointr(&rs->md);
+
+	/* Keep array frozen until resume. */
+	md_frozen_sync_thread(&rs->md);
+
 	r = md_run(&rs->md);
 	rs->md.in_sync = 0; /* Assume already marked dirty */
 	if (r) {
@@ -3722,6 +3723,9 @@ static int raid_message(struct dm_target *ti, unsigned int argc, char **argv,
 	if (!mddev->pers || !mddev->pers->sync_request)
 		return -EINVAL;
 
+	if (test_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags))
+		return -EBUSY;
+
 	if (!strcasecmp(argv[0], "frozen"))
 		set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 	else
@@ -3796,10 +3800,11 @@ static void raid_postsuspend(struct dm_target *ti)
 	struct raid_set *rs = ti->private;
 
 	if (!test_and_set_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags)) {
-		/* Writes have to be stopped before suspending to avoid deadlocks. */
-		if (!test_bit(MD_RECOVERY_FROZEN, &rs->md.recovery))
-			md_stop_writes(&rs->md);
-
+		/*
+		 * sync_thread must be stopped during suspend, and writes have
+		 * to be stopped before suspending to avoid deadlocks.
+		 */
+		md_stop_writes(&rs->md);
 		mddev_suspend(&rs->md, false);
 	}
 }
@@ -4012,8 +4017,6 @@ static int raid_preresume(struct dm_target *ti)
 	}
 
 	/* Check for any resize/reshape on @rs and adjust/initiate */
-	/* Be prepared for mddev_resume() in raid_resume() */
-	set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 	if (mddev->recovery_cp && mddev->recovery_cp < MaxSector) {
 		set_bit(MD_RECOVERY_REQUESTED, &mddev->recovery);
 		mddev->resync_min = mddev->recovery_cp;
@@ -4055,10 +4058,12 @@ static void raid_resume(struct dm_target *ti)
 		if (mddev->delta_disks < 0)
 			rs_set_capacity(rs);
 
+		WARN_ON_ONCE(!test_bit(MD_RECOVERY_FROZEN, &mddev->recovery));
+		WARN_ON_ONCE(test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
 		mddev_lock_nointr(mddev);
-		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 		mddev->ro = 0;
 		mddev->in_sync = 0;
+		md_unfrozen_sync_thread(mddev);
 		mddev_unlock_and_resume(mddev);
 	}
 }
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 2fe8b937998b..94dff077fbf4 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -6326,7 +6326,6 @@ static void md_clean(struct mddev *mddev)
 
 static void __md_stop_writes(struct mddev *mddev)
 {
-	stop_sync_thread(mddev, true, false);
 	del_timer_sync(&mddev->safemode_timer);
 
 	if (mddev->pers && mddev->pers->quiesce) {
@@ -6351,6 +6350,8 @@ static void __md_stop_writes(struct mddev *mddev)
 void md_stop_writes(struct mddev *mddev)
 {
 	mddev_lock_nointr(mddev);
+	set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+	stop_sync_thread(mddev, true, false);
 	__md_stop_writes(mddev);
 	mddev_unlock(mddev);
 }
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH -next 6/9] md/dm-raid: don't call md_reap_sync_thread() directly
  2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
                   ` (4 preceding siblings ...)
  2024-03-01  9:56 ` [PATCH -next 5/9] dm-raid: really freeze sync_thread during suspend Yu Kuai
@ 2024-03-01  9:56 ` Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 7/9] dm-raid: add a new helper prepare_suspend() in md_personality Yu Kuai
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Yu Kuai @ 2024-03-01  9:56 UTC (permalink / raw)
  To: zkabelac, xni, agk, snitzer, mpatocka, dm-devel, song, yukuai3,
	heinzm, neilb, jbrassow
  Cc: linux-kernel, linux-raid, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

Currently md_reap_sync_thread() is called from raid_message() directly
without holding 'reconfig_mutex'. This is definitely unsafe because
md_reap_sync_thread() can change many fields that are protected by
'reconfig_mutex'.

However, holding 'reconfig_mutex' here is still problematic because it
can cause a deadlock; see, for example, commit 130443d60b1b ("md:
refactor idle/frozen_sync_thread() to fix deadlock").

Fix this problem by using stop_sync_thread() to unregister the
sync_thread, like md/raid does.
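
The resulting pattern (a sketch of the raid_message() hunk below): take
'reconfig_mutex' and use the helpers from patch 2 instead of reaping
the thread directly:

	ret = mddev_lock(mddev);
	if (ret)
		return ret;

	md_frozen_sync_thread(mddev);	/* or md_idle_sync_thread() */
	mddev_unlock(mddev);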

Fixes: be83651f0050 ("DM RAID: Add message/status support for changing sync action")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/dm-raid.c | 28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index ac8b37fcf76f..766a0334460e 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -3719,6 +3719,7 @@ static int raid_message(struct dm_target *ti, unsigned int argc, char **argv,
 {
 	struct raid_set *rs = ti->private;
 	struct mddev *mddev = &rs->md;
+	int ret = 0;
 
 	if (!mddev->pers || !mddev->pers->sync_request)
 		return -EINVAL;
@@ -3726,17 +3727,24 @@ static int raid_message(struct dm_target *ti, unsigned int argc, char **argv,
 	if (test_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags))
 		return -EBUSY;
 
-	if (!strcasecmp(argv[0], "frozen"))
-		set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
-	else
-		clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+	if (!strcasecmp(argv[0], "frozen")) {
+		ret = mddev_lock(mddev);
+		if (ret)
+			return ret;
 
-	if (!strcasecmp(argv[0], "idle") || !strcasecmp(argv[0], "frozen")) {
-		if (mddev->sync_thread) {
-			set_bit(MD_RECOVERY_INTR, &mddev->recovery);
-			md_reap_sync_thread(mddev);
-		}
-	} else if (decipher_sync_action(mddev, mddev->recovery) != st_idle)
+		md_frozen_sync_thread(mddev);
+		mddev_unlock(mddev);
+	} else if (!strcasecmp(argv[0], "idle")) {
+		ret = mddev_lock(mddev);
+		if (ret)
+			return ret;
+
+		md_idle_sync_thread(mddev);
+		mddev_unlock(mddev);
+	}
+
+	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+	if (decipher_sync_action(mddev, mddev->recovery) != st_idle)
 		return -EBUSY;
 	else if (!strcasecmp(argv[0], "resync"))
 		; /* MD_RECOVERY_NEEDED set below */
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH -next 7/9] dm-raid: add a new helper prepare_suspend() in md_personality
  2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
                   ` (5 preceding siblings ...)
  2024-03-01  9:56 ` [PATCH -next 6/9] md/dm-raid: don't call md_reap_sync_thread() directly Yu Kuai
@ 2024-03-01  9:56 ` Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 8/9] dm-raid456, md/raid456: fix a deadlock for dm-raid456 while IO is concurrent with reshape Yu Kuai
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Yu Kuai @ 2024-03-01  9:56 UTC (permalink / raw)
  To: zkabelac, xni, agk, snitzer, mpatocka, dm-devel, song, yukuai3,
	heinzm, neilb, jbrassow
  Cc: linux-kernel, linux-raid, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

There are no functional changes for now; this prepares for fixing a
deadlock in dm-raid456.
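
For context (a sketch; the real implementation lands in patch 8): a
personality wires the hook up like raid456 does, waking up waiters that
would otherwise block forever on an interrupted reshape:

	static void raid5_prepare_suspend(struct mddev *mddev)
	{
		struct r5conf *conf = mddev->private;

		wake_up(&conf->wait_for_overlap);
	}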

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/dm-raid.c | 18 ++++++++++++++++++
 drivers/md/md.h      |  1 +
 2 files changed, 19 insertions(+)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 766a0334460e..002dcac20403 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -3803,6 +3803,23 @@ static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs));
 }
 
+static void raid_presuspend(struct dm_target *ti)
+{
+	struct raid_set *rs = ti->private;
+	struct mddev *mddev = &rs->md;
+
+	if (!reshape_interrupted(mddev))
+		return;
+
+	/*
+	 * For raid456, if reshape is interrupted, IO across reshape position
+	 * will never make progress, while the caller waits for IO to be done.
+	 * Inform raid456 to handle those IO to prevent deadlock.
+	 */
+	if (mddev->pers && mddev->pers->prepare_suspend)
+		mddev->pers->prepare_suspend(mddev);
+}
+
 static void raid_postsuspend(struct dm_target *ti)
 {
 	struct raid_set *rs = ti->private;
@@ -4087,6 +4104,7 @@ static struct target_type raid_target = {
 	.message = raid_message,
 	.iterate_devices = raid_iterate_devices,
 	.io_hints = raid_io_hints,
+	.presuspend = raid_presuspend,
 	.postsuspend = raid_postsuspend,
 	.preresume = raid_preresume,
 	.resume = raid_resume,
diff --git a/drivers/md/md.h b/drivers/md/md.h
index b961c1b4ead7..23080727e75b 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -648,6 +648,7 @@ struct md_personality
 	int (*start_reshape) (struct mddev *mddev);
 	void (*finish_reshape) (struct mddev *mddev);
 	void (*update_reshape_pos) (struct mddev *mddev);
+	void (*prepare_suspend) (struct mddev *mddev);
 	/* quiesce suspends or resumes internal processing.
 	 * 1 - stop new actions and wait for action io to complete
 	 * 0 - return to normal behaviour
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH -next 8/9] dm-raid456, md/raid456: fix a deadlock for dm-raid456 while IO is concurrent with reshape
  2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
                   ` (6 preceding siblings ...)
  2024-03-01  9:56 ` [PATCH -next 7/9] dm-raid: add a new helper prepare_suspend() in md_personality Yu Kuai
@ 2024-03-01  9:56 ` Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 9/9] dm-raid: fix lockdep warning in "pers->hot_add_disk" Yu Kuai
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Yu Kuai @ 2024-03-01  9:56 UTC (permalink / raw)
  To: zkabelac, xni, agk, snitzer, mpatocka, dm-devel, song, yukuai3,
	heinzm, neilb, jbrassow
  Cc: linux-kernel, linux-raid, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

For raid456, if reshape is still in progress, then IO across the reshape
position will wait for reshape to make progress. However, for dm-raid,
reshape will never make progress in the following cases, hence the IO
will hang:

1) the array is read-only;
2) MD_RECOVERY_WAIT is set;
3) MD_RECOVERY_FROZEN is set;

After commit c467e97f079f ("md/raid6: use valid sector values to
determine if an I/O should wait on the reshape") fixed the problem that
IO across the reshape position doesn't wait for reshape, the dm-raid
test shell/lvconvert-raid-reshape.sh started to hang:

[root@fedora ~]# cat /proc/979/stack
[<0>] wait_woken+0x7d/0x90
[<0>] raid5_make_request+0x929/0x1d70 [raid456]
[<0>] md_handle_request+0xc2/0x3b0 [md_mod]
[<0>] raid_map+0x2c/0x50 [dm_raid]
[<0>] __map_bio+0x251/0x380 [dm_mod]
[<0>] dm_submit_bio+0x1f0/0x760 [dm_mod]
[<0>] __submit_bio+0xc2/0x1c0
[<0>] submit_bio_noacct_nocheck+0x17f/0x450
[<0>] submit_bio_noacct+0x2bc/0x780
[<0>] submit_bio+0x70/0xc0
[<0>] mpage_readahead+0x169/0x1f0
[<0>] blkdev_readahead+0x18/0x30
[<0>] read_pages+0x7c/0x3b0
[<0>] page_cache_ra_unbounded+0x1ab/0x280
[<0>] force_page_cache_ra+0x9e/0x130
[<0>] page_cache_sync_ra+0x3b/0x110
[<0>] filemap_get_pages+0x143/0xa30
[<0>] filemap_read+0xdc/0x4b0
[<0>] blkdev_read_iter+0x75/0x200
[<0>] vfs_read+0x272/0x460
[<0>] ksys_read+0x7a/0x170
[<0>] __x64_sys_read+0x1c/0x30
[<0>] do_syscall_64+0xc6/0x230
[<0>] entry_SYSCALL_64_after_hwframe+0x6c/0x74

This is because reshape can't make progress.

For md/raid, the problem doesn't exist because registering a new
sync_thread doesn't rely on the IO being done anymore:

1) If the array is read-only, it can be switched to read-write via
   ioctl/sysfs;
2) md/raid never sets MD_RECOVERY_WAIT;
3) If MD_RECOVERY_FROZEN is set, mddev_suspend() doesn't hold
   'reconfig_mutex', hence the flag can be cleared and reshape can
   continue via the sysfs interface 'sync_action'.

However, I'm not sure yet how to avoid the problem in dm-raid. On the
one hand, this patch makes sure that raid_message() can't change the
sync_thread after presuspend(); on the other hand, it detects the above
three cases before waiting for IO to be done in dm_suspend(), and lets
dm-raid requeue those IO.
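
The resulting flow (a sketch summarizing the hunks below): raid456
fails the bio with BLK_STS_RESOURCE and returns false from
make_request, md_handle_request() propagates the failure instead of
retrying, and raid_map() requeues the bio:

	/* raid_map() in dm-raid.c */
	if (unlikely(!md_handle_request(mddev, bio)))
		return DM_MAPIO_REQUEUE;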

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/dm-raid.c | 22 ++++++++++++++++++++--
 drivers/md/md.c      | 24 ++++++++++++++++++++++--
 drivers/md/md.h      |  3 ++-
 drivers/md/raid5.c   | 32 ++++++++++++++++++++++++++++++--
 4 files changed, 74 insertions(+), 7 deletions(-)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 002dcac20403..64d381123ce3 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -213,6 +213,7 @@ struct raid_dev {
 #define RT_FLAG_RS_IN_SYNC		6
 #define RT_FLAG_RS_RESYNCING		7
 #define RT_FLAG_RS_GROW			8
+#define RT_FLAG_RS_FROZEN		9
 
 /* Array elements of 64 bit needed for rebuild/failed disk bits */
 #define DISKS_ARRAY_ELEMS ((MAX_RAID_DEVICES + (sizeof(uint64_t) * 8 - 1)) / sizeof(uint64_t) / 8)
@@ -3340,7 +3341,8 @@ static int raid_map(struct dm_target *ti, struct bio *bio)
 	if (unlikely(bio_end_sector(bio) > mddev->array_sectors))
 		return DM_MAPIO_REQUEUE;
 
-	md_handle_request(mddev, bio);
+	if (unlikely(!md_handle_request(mddev, bio)))
+		return DM_MAPIO_REQUEUE;
 
 	return DM_MAPIO_SUBMITTED;
 }
@@ -3724,7 +3726,8 @@ static int raid_message(struct dm_target *ti, unsigned int argc, char **argv,
 	if (!mddev->pers || !mddev->pers->sync_request)
 		return -EINVAL;
 
-	if (test_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags))
+	if (test_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags) ||
+	    test_bit(RT_FLAG_RS_FROZEN, &rs->runtime_flags))
 		return -EBUSY;
 
 	if (!strcasecmp(argv[0], "frozen")) {
@@ -3808,6 +3811,12 @@ static void raid_presuspend(struct dm_target *ti)
 	struct raid_set *rs = ti->private;
 	struct mddev *mddev = &rs->md;
 
+	/*
+	 * From now on, disallow raid_message() to change sync_thread until
+	 * resume; raid_postsuspend() is too late.
+	 */
+	set_bit(RT_FLAG_RS_FROZEN, &rs->runtime_flags);
+
 	if (!reshape_interrupted(mddev))
 		return;
 
@@ -3820,6 +3829,13 @@ static void raid_presuspend(struct dm_target *ti)
 		mddev->pers->prepare_suspend(mddev);
 }
 
+static void raid_presuspend_undo(struct dm_target *ti)
+{
+	struct raid_set *rs = ti->private;
+
+	clear_bit(RT_FLAG_RS_FROZEN, &rs->runtime_flags);
+}
+
 static void raid_postsuspend(struct dm_target *ti)
 {
 	struct raid_set *rs = ti->private;
@@ -4085,6 +4101,7 @@ static void raid_resume(struct dm_target *ti)
 
 		WARN_ON_ONCE(!test_bit(MD_RECOVERY_FROZEN, &mddev->recovery));
 		WARN_ON_ONCE(test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
+		clear_bit(RT_FLAG_RS_FROZEN, &rs->runtime_flags);
 		mddev_lock_nointr(mddev);
 		mddev->ro = 0;
 		mddev->in_sync = 0;
@@ -4105,6 +4122,7 @@ static struct target_type raid_target = {
 	.iterate_devices = raid_iterate_devices,
 	.io_hints = raid_io_hints,
 	.presuspend = raid_presuspend,
+	.presuspend_undo = raid_presuspend_undo,
 	.postsuspend = raid_postsuspend,
 	.preresume = raid_preresume,
 	.resume = raid_resume,
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 94dff077fbf4..f37903786fc5 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -366,7 +366,7 @@ static bool is_suspended(struct mddev *mddev, struct bio *bio)
 	return true;
 }
 
-void md_handle_request(struct mddev *mddev, struct bio *bio)
+bool md_handle_request(struct mddev *mddev, struct bio *bio)
 {
 check_suspended:
 	if (is_suspended(mddev, bio)) {
@@ -374,7 +374,7 @@ void md_handle_request(struct mddev *mddev, struct bio *bio)
 		/* Bail out if REQ_NOWAIT is set for the bio */
 		if (bio->bi_opf & REQ_NOWAIT) {
 			bio_wouldblock_error(bio);
-			return;
+			return true;
 		}
 		for (;;) {
 			prepare_to_wait(&mddev->sb_wait, &__wait,
@@ -390,10 +390,13 @@ void md_handle_request(struct mddev *mddev, struct bio *bio)
 
 	if (!mddev->pers->make_request(mddev, bio)) {
 		percpu_ref_put(&mddev->active_io);
+		if (!mddev->gendisk && mddev->pers->prepare_suspend)
+			return false;
 		goto check_suspended;
 	}
 
 	percpu_ref_put(&mddev->active_io);
+	return true;
 }
 EXPORT_SYMBOL(md_handle_request);
 
@@ -8738,6 +8741,23 @@ void md_account_bio(struct mddev *mddev, struct bio **bio)
 }
 EXPORT_SYMBOL_GPL(md_account_bio);
 
+void md_free_cloned_bio(struct bio *bio)
+{
+	struct md_io_clone *md_io_clone = bio->bi_private;
+	struct bio *orig_bio = md_io_clone->orig_bio;
+	struct mddev *mddev = md_io_clone->mddev;
+
+	if (bio->bi_status && !orig_bio->bi_status)
+		orig_bio->bi_status = bio->bi_status;
+
+	if (md_io_clone->start_time)
+		bio_end_io_acct(orig_bio, md_io_clone->start_time);
+
+	bio_put(bio);
+	percpu_ref_put(&mddev->active_io);
+}
+EXPORT_SYMBOL_GPL(md_free_cloned_bio);
+
 /* md_allow_write(mddev)
  * Calling this ensures that the array is marked 'active' so that writes
  * may proceed without blocking.  It is important to call this before
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 23080727e75b..588381fa25de 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -782,6 +782,7 @@ extern void md_finish_reshape(struct mddev *mddev);
 void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
 			struct bio *bio, sector_t start, sector_t size);
 void md_account_bio(struct mddev *mddev, struct bio **bio);
+void md_free_cloned_bio(struct bio *bio);
 
 extern bool __must_check md_flush_request(struct mddev *mddev, struct bio *bio);
 extern void md_super_write(struct mddev *mddev, struct md_rdev *rdev,
@@ -810,7 +811,7 @@ extern void md_stop_writes(struct mddev *mddev);
 extern int md_rdev_init(struct md_rdev *rdev);
 extern void md_rdev_clear(struct md_rdev *rdev);
 
-extern void md_handle_request(struct mddev *mddev, struct bio *bio);
+extern bool md_handle_request(struct mddev *mddev, struct bio *bio);
 extern int mddev_suspend(struct mddev *mddev, bool interruptible);
 extern void mddev_resume(struct mddev *mddev);
 extern void md_idle_sync_thread(struct mddev *mddev);
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index edf368fdc733..f1c41fd0f636 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -760,6 +760,7 @@ enum stripe_result {
 	STRIPE_RETRY,
 	STRIPE_SCHEDULE_AND_RETRY,
 	STRIPE_FAIL,
+	STRIPE_WAIT_RESHAPE,
 };
 
 struct stripe_request_ctx {
@@ -5946,7 +5947,8 @@ static enum stripe_result make_stripe_request(struct mddev *mddev,
 			if (ahead_of_reshape(mddev, logical_sector,
 					     conf->reshape_safe)) {
 				spin_unlock_irq(&conf->device_lock);
-				return STRIPE_SCHEDULE_AND_RETRY;
+				ret = STRIPE_SCHEDULE_AND_RETRY;
+				goto out;
 			}
 		}
 		spin_unlock_irq(&conf->device_lock);
@@ -6025,6 +6027,12 @@ static enum stripe_result make_stripe_request(struct mddev *mddev,
 
 out_release:
 	raid5_release_stripe(sh);
+out:
+	if (ret == STRIPE_SCHEDULE_AND_RETRY && reshape_interrupted(mddev)) {
+		bi->bi_status = BLK_STS_RESOURCE;
+		ret = STRIPE_WAIT_RESHAPE;
+		pr_err_ratelimited("dm-raid456: io across reshape position while reshape can't make progress");
+	}
 	return ret;
 }
 
@@ -6146,7 +6154,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
 	while (1) {
 		res = make_stripe_request(mddev, conf, &ctx, logical_sector,
 					  bi);
-		if (res == STRIPE_FAIL)
+		if (res == STRIPE_FAIL || res == STRIPE_WAIT_RESHAPE)
 			break;
 
 		if (res == STRIPE_RETRY)
@@ -6184,6 +6192,11 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
 
 	if (rw == WRITE)
 		md_write_end(mddev);
+	if (res == STRIPE_WAIT_RESHAPE) {
+		md_free_cloned_bio(bi);
+		return false;
+	}
+
 	bio_endio(bi);
 	return true;
 }
@@ -8907,6 +8920,18 @@ static int raid5_start(struct mddev *mddev)
 	return r5l_start(conf->log);
 }
 
+/*
+ * This is only used for dm-raid456, where the caller has already frozen the
+ * sync_thread. Hence if reshape is still in progress, IO that is waiting for
+ * the reshape can never be done now, so wake up and handle that IO.
+ */
+static void raid5_prepare_suspend(struct mddev *mddev)
+{
+	struct r5conf *conf = mddev->private;
+
+	wake_up(&conf->wait_for_overlap);
+}
+
 static struct md_personality raid6_personality =
 {
 	.name		= "raid6",
@@ -8930,6 +8955,7 @@ static struct md_personality raid6_personality =
 	.quiesce	= raid5_quiesce,
 	.takeover	= raid6_takeover,
 	.change_consistency_policy = raid5_change_consistency_policy,
+	.prepare_suspend = raid5_prepare_suspend,
 };
 static struct md_personality raid5_personality =
 {
@@ -8954,6 +8980,7 @@ static struct md_personality raid5_personality =
 	.quiesce	= raid5_quiesce,
 	.takeover	= raid5_takeover,
 	.change_consistency_policy = raid5_change_consistency_policy,
+	.prepare_suspend = raid5_prepare_suspend,
 };
 
 static struct md_personality raid4_personality =
@@ -8979,6 +9006,7 @@ static struct md_personality raid4_personality =
 	.quiesce	= raid5_quiesce,
 	.takeover	= raid4_takeover,
 	.change_consistency_policy = raid5_change_consistency_policy,
+	.prepare_suspend = raid5_prepare_suspend,
 };
 
 static int __init raid5_init(void)
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH -next 9/9] dm-raid: fix lockdep warning in "pers->hot_add_disk"
  2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
                   ` (7 preceding siblings ...)
  2024-03-01  9:56 ` [PATCH -next 8/9] dm-raid456, md/raid456: fix a deadlock for dm-raid456 while IO is concurrent with reshape Yu Kuai
@ 2024-03-01  9:56 ` Yu Kuai
  2024-03-01 22:36 ` [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Song Liu
  2024-03-03 13:16 ` Xiao Ni
  10 siblings, 0 replies; 19+ messages in thread
From: Yu Kuai @ 2024-03-01  9:56 UTC (permalink / raw)
  To: zkabelac, xni, agk, snitzer, mpatocka, dm-devel, song, yukuai3,
	heinzm, neilb, jbrassow
  Cc: linux-kernel, linux-raid, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

The lockdep assert was added by commit a448af25becf ("md/raid10: remove
rcu protection to access rdev from conf") in print_conf(), and I didn't
notice that dm-raid is calling "pers->hot_add_disk" without holding
'reconfig_mutex'.

"pers->hot_add_disk" reads and writes many fields that are protected by
'reconfig_mutex', and raid_resume() already grabs the lock in other
contexts. Hence fix this problem by protecting "pers->hot_add_disk"
with the lock.
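
For reference, a sketch of the assertion that fires (added in
print_conf() by a448af25becf; the exact spelling of the lock expression
may differ):

	lockdep_assert_held(&mddev->reconfig_mutex);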

Fixes: 9092c02d9435 ("DM RAID: Add ability to restore transiently failed devices on resume")
Fixes: a448af25becf ("md/raid10: remove rcu protection to access rdev from conf")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/dm-raid.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 64d381123ce3..97ad4a8582c4 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -4091,7 +4091,9 @@ static void raid_resume(struct dm_target *ti)
 		 * Take this opportunity to check whether any failed
 		 * devices are reachable again.
 		 */
+		mddev_lock_nointr(mddev);
 		attempt_restore_of_faulty_devices(rs);
+		mddev_unlock(mddev);
 	}
 
 	if (test_and_clear_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags)) {
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2
  2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
                   ` (8 preceding siblings ...)
  2024-03-01  9:56 ` [PATCH -next 9/9] dm-raid: fix lockdep warning in "pers->hot_add_disk" Yu Kuai
@ 2024-03-01 22:36 ` Song Liu
  2024-03-02 15:56   ` Mike Snitzer
  2024-03-03 13:16 ` Xiao Ni
  10 siblings, 1 reply; 19+ messages in thread
From: Song Liu @ 2024-03-01 22:36 UTC (permalink / raw)
  To: Yu Kuai, Jens Axboe
  Cc: zkabelac, xni, agk, snitzer, mpatocka, dm-devel, yukuai3, heinzm,
	neilb, jbrassow, linux-kernel, linux-raid, yi.zhang, yangerkun

On Fri, Mar 1, 2024 at 2:03 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> link to part1: https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/
>
> part1 contains fixes for deadlocks when stopping the sync_thread.
>
> This set contains fixes for:
>  - reshape starting unexpectedly, causing data corruption: patches 1, 5, 6;
>  - a deadlock when reshape runs concurrently with IO: patch 8;
>  - a lockdep warning: patch 9;
>
> I've been running the lvm2 tests for a few rounds now with the following script:
>
> for t in `ls test/shell`; do
>         if cat test/shell/$t | grep raid &> /dev/null; then
>                 make check T=shell/$t
>         fi
> done
>
> There are no deadlocks and no fs corruption now; however, there are still
> four failed tests:
>
> ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
> ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
> ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
> ###       failed: [ndev-vanilla] shell/lvextend-raid.sh
>
> And the failure reason is the same for all of them:
>
> ## ERROR: The test started dmeventd (147856) unexpectedly
>
> I have no clue yet, and it seems other folks don't have this issue.
>
> Yu Kuai (9):
>   md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
>   md: export helpers to stop sync_thread
>   md: export helper md_is_rdwr()
>   md: add a new helper reshape_interrupted()
>   dm-raid: really freeze sync_thread during suspend
>   md/dm-raid: don't call md_reap_sync_thread() directly
>   dm-raid: add a new helper prepare_suspend() in md_personality
>   dm-raid456, md/raid456: fix a deadlock for dm-raid456 while IO is
>     concurrent with reshape
>   dm-raid: fix lockdep warning in "pers->hot_add_disk"

This set looks good to me and passes the tests: reshape tests from
lvm2, mdadm tests, and the reboot test that caught an issue in
Xiao's version.

DM folks, please help review and test this set. If it looks good, we
can route it either via the md tree (I am thinking about md-6.8
branch) or the dm tree.

CC Jens,

I understand it is already late in the release cycle for the 6.8
kernel. Please let us know your thoughts on this set. These patches fix
a crash when running lvm2 tests related to md-raid reshape.

Thanks,
Song

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2
  2024-03-01 22:36 ` [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Song Liu
@ 2024-03-02 15:56   ` Mike Snitzer
  0 siblings, 0 replies; 19+ messages in thread
From: Mike Snitzer @ 2024-03-02 15:56 UTC (permalink / raw)
  To: Song Liu
  Cc: Yu Kuai, Jens Axboe, zkabelac, xni, agk, mpatocka, dm-devel,
	yukuai3, heinzm, neilb, jbrassow, linux-kernel, linux-raid,
	yi.zhang, yangerkun

On Fri, Mar 01 2024 at  5:36P -0500,
Song Liu <song@kernel.org> wrote:

> On Fri, Mar 1, 2024 at 2:03 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >
> > From: Yu Kuai <yukuai3@huawei.com>
> >
> > link to part1: https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/
> >
> > part1 contains fixes for deadlocks when stopping the sync_thread.
> >
> > This set contains fixes for:
> >  - reshape starting unexpectedly, causing data corruption: patches 1, 5, 6;
> >  - a deadlock when reshape runs concurrently with IO: patch 8;
> >  - a lockdep warning: patch 9;
> >
> > I've been running the lvm2 tests for a few rounds now with the following script:
> >
> > for t in `ls test/shell`; do
> >         if cat test/shell/$t | grep raid &> /dev/null; then
> >                 make check T=shell/$t
> >         fi
> > done
> >
> > There are no deadlocks and no fs corruption now; however, there are still
> > four failed tests:
> >
> > ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
> > ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
> > ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
> > ###       failed: [ndev-vanilla] shell/lvextend-raid.sh
> >
> > And the failure reason is the same for all of them:
> >
> > ## ERROR: The test started dmeventd (147856) unexpectedly
> >
> > I have no clue yet, and it seems other folks don't have this issue.
> >
> > Yu Kuai (9):
> >   md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
> >   md: export helpers to stop sync_thread
> >   md: export helper md_is_rdwr()
> >   md: add a new helper reshape_interrupted()
> >   dm-raid: really freeze sync_thread during suspend
> >   md/dm-raid: don't call md_reap_sync_thread() directly
> >   dm-raid: add a new helper prepare_suspend() in md_personality
> >   dm-raid456, md/raid456: fix a deadlock for dm-raid456 while IO is
> >     concurrent with reshape
> >   dm-raid: fix lockdep warning in "pers->hot_add_disk"
> 
> This set looks good to me and passes the tests: reshape tests from
> lvm2, mdadm tests, and the reboot test that catches some issue in
> Xiao's version.
> 
> DM folks, please help review and test this set. If it looks good, we
> can route it either via the md tree (I am thinking about md-6.8
> branch) or the dm tree.

Please send these changes through md-6.8.

There are a few typos in patch subjects and headers but:

Acked-by: Mike Snitzer <snitzer@kernel.org>

> CC Jens,
> 
> I understand it is already late in the release cycle for 6.8 kernel.
> Please let us know your thoughts on this set. These patches fixes
> a crash when running lvm2 tests that are related to md-raid
> reshape.

Would be good to get these into 6.8; worst case, if they slip to the
6.9 merge window, they'll go to the relevant stable kernels (due to the
"Fixes:" tags, though not all commits have them).

Mike

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2
  2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
                   ` (9 preceding siblings ...)
  2024-03-01 22:36 ` [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Song Liu
@ 2024-03-03 13:16 ` Xiao Ni
  2024-03-04  1:07   ` Yu Kuai
  10 siblings, 1 reply; 19+ messages in thread
From: Xiao Ni @ 2024-03-03 13:16 UTC (permalink / raw)
  To: Yu Kuai
  Cc: zkabelac, agk, snitzer, mpatocka, dm-devel, song, yukuai3, heinzm,
	neilb, jbrassow, linux-kernel, linux-raid, yi.zhang, yangerkun

[-- Attachment #1: Type: text/plain, Size: 2532 bytes --]

Hi all

There is an error report from the lvm regression tests. The case is
lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
tried to fix the dm-raid regression problems too. In my patch set, after
reverting ad39c08186f8a0f221337985036ba86731d6aafe ("md: Don't register
sync_thread for reshape directly"), this problem doesn't appear.

I put the log in the attachment.

On Fri, Mar 1, 2024 at 6:03 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> link to part1: https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/
>
> part1 contains fixes for deadlocks when stopping the sync_thread.
>
> This set contains fixes for:
>  - reshape starting unexpectedly, causing data corruption: patches 1, 5, 6;
>  - a deadlock when reshape runs concurrently with IO: patch 8;
>  - a lockdep warning: patch 9;
>
> I've been running the lvm2 tests for a few rounds now with the following script:
>
> for t in `ls test/shell`; do
>         if cat test/shell/$t | grep raid &> /dev/null; then
>                 make check T=shell/$t
>         fi
> done
>
> There are no deadlocks and no fs corruption now; however, there are still
> four failed tests:
>
> ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
> ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
> ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
> ###       failed: [ndev-vanilla] shell/lvextend-raid.sh
>
> And the failure reason is the same for all of them:
>
> ## ERROR: The test started dmeventd (147856) unexpectedly
>
> I have no clue yet, and it seems other folks don't have this issue.
>
> Yu Kuai (9):
>   md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
>   md: export helpers to stop sync_thread
>   md: export helper md_is_rdwr()
>   md: add a new helper reshape_interrupted()
>   dm-raid: really freeze sync_thread during suspend
>   md/dm-raid: don't call md_reap_sync_thread() directly
>   dm-raid: add a new helper prepare_suspend() in md_personality
>   dm-raid456, md/raid456: fix a deadlock for dm-raid456 while IO is
>     concurrent with reshape
>   dm-raid: fix lockdep warning in "pers->hot_add_disk"
>
>  drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
>  drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
>  drivers/md/md.h      | 38 +++++++++++++++++-
>  drivers/md/raid5.c   | 32 ++++++++++++++-
>  4 files changed, 196 insertions(+), 40 deletions(-)
>
> --
> 2.39.2
>

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: shell_lvconvert-raid-reshape-stripes-load-reload.sh.txt --]
[-- Type: text/plain; charset="GB18030";  name="shell_lvconvert-raid-reshape-stripes-load-reload.sh.txt", Size: 163301 bytes --]

[ 0:00.223] Library version:   1.02.198-git (2023-11-21)
[ 0:00.223] Driver version:    4.48.0
[ 0:00.223] Kernel is Linux hp-dl380eg8-02.rhts.eng.pek2.redhat.com 6.8.0-rc1-dmraid+ #1 SMP PREEMPT_DYNAMIC Sat Mar  2 21:48:55 EST 2024 x86_64 x86_64 x86_64 GNU/Linux
[ 0:00.410] Selinux mode is Enforcing.
[ 0:00.427]                total        used        free      shared  buff/cache   available
[ 0:00.440] Mem:           15569         760       14935          20         104       14808
[ 0:00.440] Swap:           7975           0        7975
[ 0:00.440] Filesystem                              Size  Used Avail Use% Mounted on
[ 0:00.443] devtmpfs                                4.0M     0  4.0M   0% /dev
[ 0:00.443] tmpfs                                   7.7G     0  7.7G   0% /dev/shm
[ 0:00.443] tmpfs                                   3.1G   18M  3.1G   1% /run
[ 0:00.443] /dev/mapper/rhel_hp--dl380eg8--02-root   70G  4.1G   66G   6% /
[ 0:00.443] /dev/sda1                               960M  313M  648M  33% /boot
[ 0:00.443] /dev/mapper/rhel_hp--dl380eg8--02-home  853G   37G  816G   5% /home
[ 0:00.443] tmpfs                                   1.6G  4.0K  1.6G   1% /run/user/0
[ 0:00.443] @TESTDIR=/tmp/LVMTEST500118.AxR1K9qRUi
[ 0:00.445] @PREFIX=LVMTEST500118
[ 0:00.445] ## LVMCONF: activation {
[ 0:00.501] ## LVMCONF:     checks = 1
[ 0:00.501] ## LVMCONF:     monitoring = 0
[ 0:00.501] ## LVMCONF:     polling_interval = 1
[ 0:00.501] ## LVMCONF:     raid_region_size = 512
[ 0:00.501] ## LVMCONF:     retry_deactivation = 1
[ 0:00.501] ## LVMCONF:     snapshot_autoextend_percent = 50
[ 0:00.501] ## LVMCONF:     snapshot_autoextend_threshold = 50
[ 0:00.501] ## LVMCONF:     udev_rules = 1
[ 0:00.501] ## LVMCONF:     udev_sync = 1
[ 0:00.501] ## LVMCONF:     verify_udev_operations = 1
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] ## LVMCONF: allocation {
[ 0:00.501] ## LVMCONF:     vdo_slab_size_mb = 128
[ 0:00.501] ## LVMCONF:     wipe_signatures_when_zeroing_new_lvs = 0
[ 0:00.501] ## LVMCONF:     zero_metadata = 0
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] ## LVMCONF: backup {
[ 0:00.501] ## LVMCONF:     archive = 0
[ 0:00.501] ## LVMCONF:     backup = 0
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] ## LVMCONF: devices {
[ 0:00.501] ## LVMCONF:     cache_dir = "/tmp/LVMTEST500118.AxR1K9qRUi/etc"
[ 0:00.501] ## LVMCONF:     default_data_alignment = 1
[ 0:00.501] ## LVMCONF:     dir = "/tmp/LVMTEST500118.AxR1K9qRUi/dev"
[ 0:00.501] ## LVMCONF:     filter = "a|.*|"
[ 0:00.501] ## LVMCONF:     global_filter = [ "a|/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118.*pv[0-9_]*$|", "r|.*|" ]
[ 0:00.501] ## LVMCONF:     md_component_detection = 0
[ 0:00.501] ## LVMCONF:     scan = "/tmp/LVMTEST500118.AxR1K9qRUi/dev"
[ 0:00.501] ## LVMCONF:     sysfs_scan = 1
[ 0:00.501] ## LVMCONF:     use_devicesfile = 0
[ 0:00.501] ## LVMCONF:     write_cache_state = 0
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] ## LVMCONF: dmeventd {
[ 0:00.501] ## LVMCONF:     executable = "/home/lvm2/test/lib/dmeventd"
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] ## LVMCONF: global {
[ 0:00.501] ## LVMCONF:     abort_on_internal_errors = 1
[ 0:00.501] ## LVMCONF:     cache_check_executable = "/usr/sbin/cache_check"
[ 0:00.501] ## LVMCONF:     cache_dump_executable = "/usr/sbin/cache_dump"
[ 0:00.501] ## LVMCONF:     cache_repair_executable = "/usr/sbin/cache_repair"
[ 0:00.501] ## LVMCONF:     cache_restore_executable = "/usr/sbin/cache_restore"
[ 0:00.501] ## LVMCONF:     detect_internal_vg_cache_corruption = 1
[ 0:00.501] ## LVMCONF:     etc = "/tmp/LVMTEST500118.AxR1K9qRUi/etc"
[ 0:00.501] ## LVMCONF:     fallback_to_local_locking = 0
[ 0:00.501] ## LVMCONF:     fsadm_executable = "/home/lvm2/test/lib/fsadm"
[ 0:00.501] ## LVMCONF:     library_dir = "/tmp/LVMTEST500118.AxR1K9qRUi/lib"
[ 0:00.501] ## LVMCONF:     locking_dir = "/tmp/LVMTEST500118.AxR1K9qRUi/var/lock/lvm"
[ 0:00.501] ## LVMCONF:     locking_type=1
[ 0:00.501] ## LVMCONF:     notify_dbus = 0
[ 0:00.501] ## LVMCONF:     si_unit_consistency = 1
[ 0:00.501] ## LVMCONF:     thin_check_executable = "/usr/sbin/thin_check"
[ 0:00.501] ## LVMCONF:     thin_dump_executable = "/usr/sbin/thin_dump"
[ 0:00.501] ## LVMCONF:     thin_repair_executable = "/usr/sbin/thin_repair"
[ 0:00.501] ## LVMCONF:     thin_restore_executable = "/usr/sbin/thin_restore"
[ 0:00.501] ## LVMCONF:     use_lvmlockd = 0
[ 0:00.501] ## LVMCONF:     use_lvmpolld = 0
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] ## LVMCONF: log {
[ 0:00.501] ## LVMCONF:     activation = 1
[ 0:00.501] ## LVMCONF:     file = "/tmp/LVMTEST500118.AxR1K9qRUi/debug.log"
[ 0:00.501] ## LVMCONF:     indent = 1
[ 0:00.501] ## LVMCONF:     level = 9
[ 0:00.501] ## LVMCONF:     overwrite = 1
[ 0:00.501] ## LVMCONF:     syslog = 0
[ 0:00.501] ## LVMCONF:     verbose = 0
[ 0:00.501] ## LVMCONF: }
[ 0:00.501] <======== Processing test: "lvconvert-raid-reshape-stripes-load-reload.sh" ========>
[ 0:00.507] 
[ 0:00.507] # Test reshaping under io load
[ 0:00.507] 
[ 0:00.507] which md5sum || skip
[ 0:00.507] #lvconvert-raid-reshape-stripes-load-reload.sh:20+ which md5sum
[ 0:00.507] #environment:0+ alias
[ 0:00.508] #environment:1+ eval declare -f
[ 0:00.508] declare -f
[ 0:00.508] ##environment:1+ declare -f
[ 0:00.508] #environment:1+ /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot md5sum
[ 0:00.508] /usr/bin/md5sum
[ 0:00.511] which mkfs.ext4 || skip
[ 0:00.511] #lvconvert-raid-reshape-stripes-load-reload.sh:21+ which mkfs.ext4
[ 0:00.511] #environment:0+ alias
[ 0:00.512] #environment:1+ eval declare -f
[ 0:00.512] declare -f
[ 0:00.512] ##environment:1+ declare -f
[ 0:00.512] #environment:1+ /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot mkfs.ext4
[ 0:00.512] /usr/sbin/mkfs.ext4
[ 0:00.514] aux have_raid 1 14 || skip
[ 0:00.515] #lvconvert-raid-reshape-stripes-load-reload.sh:22+ aux have_raid 1 14
[ 0:00.515] 
[ 0:00.595] mount_dir="mnt"
[ 0:00.595] #lvconvert-raid-reshape-stripes-load-reload.sh:24+ mount_dir=mnt
[ 0:00.595] 
[ 0:00.595] cleanup_mounted_and_teardown()
[ 0:00.595] {
[ 0:00.595] 	umount "$mount_dir" || true
[ 0:00.595] 	aux teardown
[ 0:00.595] }
[ 0:00.595] 
[ 0:00.595] checksum_()
[ 0:00.595] {
[ 0:00.595] 	md5sum "$1" | cut -f1 -d' '
[ 0:00.595] }
[ 0:00.595] 
[ 0:00.595] aux prepare_pvs 16 32
[ 0:00.595] #lvconvert-raid-reshape-stripes-load-reload.sh:37+ aux prepare_pvs 16 32
[ 0:00.596] ## preparing ramdisk device...ok (/dev/ram0)
[ 0:00.657] 6,17022,29996053013,-;brd: module loaded
[ 0:00.657] ## preparing 16 devices...ok
[ 0:00.725]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv1" successfully created.
[ 0:00.776]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv2" successfully created.
[ 0:00.777]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv3" successfully created.
[ 0:00.778]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv4" successfully created.
[ 0:00.778]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv5" successfully created.
[ 0:00.779]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv6" successfully created.
[ 0:00.779]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv7" successfully created.
[ 0:00.780]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv8" successfully created.
[ 0:00.781]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv9" successfully created.
[ 0:00.781]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv10" successfully created.
[ 0:00.782]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv11" successfully created.
[ 0:00.783]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv12" successfully created.
[ 0:00.783]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv13" successfully created.
[ 0:00.784]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv14" successfully created.
[ 0:00.785]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv15" successfully created.
[ 0:00.785]   Physical volume "/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv16" successfully created.
[ 0:00.786] 
[ 0:00.815] get_devs
[ 0:00.815] #lvconvert-raid-reshape-stripes-load-reload.sh:39+ get_devs
[ 0:00.815] #utils:270+ local 'IFS=
[ 0:00.815] '
[ 0:00.815] #utils:271+ DEVICES=($(<DEVICES))
[ 0:00.815] #utils:272+ export DEVICES
[ 0:00.817] 
[ 0:00.817] vgcreate $SHARED -s 1M "$vg" "${DEVICES[@]}"
[ 0:00.817] #lvconvert-raid-reshape-stripes-load-reload.sh:41+ vgcreate -s 1M LVMTEST500118vg /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv1 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv2 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv3 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv4 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv5 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv6 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv7 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv8 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv9 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv10 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv11 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv12 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv13 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv14 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv15 /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv16
[ 0:00.817]   WARNING: This metadata update is NOT backed up.
[ 0:00.880]   Volume group "LVMTEST500118vg" successfully created
[ 0:00.880] 
[ 0:00.894] trap 'cleanup_mounted_and_teardown' EXIT
[ 0:00.894] #lvconvert-raid-reshape-stripes-load-reload.sh:43+ trap cleanup_mounted_and_teardown EXIT
[ 0:00.894] 
[ 0:00.894] # Create 10-way striped raid5 (11 legs total)
[ 0:00.894] lvcreate --yes --type raid5_ls --stripesize 64K --stripes 10 -L4 -n$lv1 $vg
[ 0:00.894] #lvconvert-raid-reshape-stripes-load-reload.sh:46+ lvcreate --yes --type raid5_ls --stripesize 64K --stripes 10 -L4 -nLV1 LVMTEST500118vg
[ 0:00.894]   Rounding size 4.00 MiB (4 extents) up to stripe boundary size 10.00 MiB (10 extents).
[ 0:00.930]   Logical volume "LV1" created.
[ 0:01.259] 6,17023,29996612978,-;device-mapper: raid: Superblocks created for new raid set
[ 0:01.259] 5,17024,29996631884,-;md/raid:mdX: not clean -- starting background reconstruction
[ 0:01.259] 6,17025,29996632348,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:01.259] 6,17026,29996633474,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:01.259] 6,17027,29996634252,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:01.259] 6,17028,29996634972,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:01.259] 6,17029,29996635696,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:01.259] 6,17030,29996636407,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:01.259] 6,17031,29996637137,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:01.259] 6,17032,29996637846,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:01.259] 6,17033,29996638593,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:01.259] 6,17034,29996639486,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:01.259] 6,17035,29996640384,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:01.259] 6,17036,29996643318,-;md/raid:mdX: raid level 5 active with 11 out of 11 devices, algorithm 2
[ 0:01.259] 4,17037,29996645343,-;mdX: bitmap file is out of date, doing full recovery
[ 0:01.259] 6,17038,29996646487,-;md: resync of RAID array mdX
[ 0:01.259]   WARNING: This metadata update is NOT backed up.
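# Note: the 11-leg raid5_ls LV created above maps to a dm "raid" target of
# the same shape as the 16-leg table dumped in the DMTABLE section below,
# i.e. roughly (sketch, device numbers hypothetical):
#
#	0 20480 raid raid5_ls 3 128 region_size 1024 11 <rmeta/rimage major:minor pairs x11>
#
# 128-sector chunk = 64.00k stripesize; 11 legs = 10 data stripes + 1 parity.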
[ 0:01.260] check lv_first_seg_field $vg/$lv1 segtype "raid5_ls"
[ 0:01.284] 6,17039,29996664728,-;md: mdX: resync done.
[ 0:01.284] #lvconvert-raid-reshape-stripes-load-reload.sh:47+ check lv_first_seg_field LVMTEST500118vg/LV1 segtype raid5_ls
[ 0:01.284] check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
[ 0:01.360] #lvconvert-raid-reshape-stripes-load-reload.sh:48+ check lv_first_seg_field LVMTEST500118vg/LV1 stripesize 64.00k
[ 0:01.360] check lv_first_seg_field $vg/$lv1 data_stripes 10
[ 0:01.442] #lvconvert-raid-reshape-stripes-load-reload.sh:49+ check lv_first_seg_field LVMTEST500118vg/LV1 data_stripes 10
[ 0:01.442] check lv_first_seg_field $vg/$lv1 stripes 11
[ 0:01.516] #lvconvert-raid-reshape-stripes-load-reload.sh:50+ check lv_first_seg_field LVMTEST500118vg/LV1 stripes 11
[ 0:01.517] wipefs -a "$DM_DEV_DIR/$vg/$lv1"
[ 0:01.600] #lvconvert-raid-reshape-stripes-load-reload.sh:51+ wipefs -a /tmp/LVMTEST500118.AxR1K9qRUi/dev/LVMTEST500118vg/LV1
[ 0:01.600] mkfs -t ext4 "$DM_DEV_DIR/$vg/$lv1"
[ 0:01.620] #lvconvert-raid-reshape-stripes-load-reload.sh:52+ mkfs -t ext4 /tmp/LVMTEST500118.AxR1K9qRUi/dev/LVMTEST500118vg/LV1
[ 0:01.620] mke2fs 1.46.5 (30-Dec-2021)
[ 0:01.654] Creating filesystem with 10240 1k blocks and 2560 inodes
[ 0:01.655] Filesystem UUID: 84c2201e-4589-48a8-ba44-019d481366f2
[ 0:01.655] Superblock backups stored on blocks: 
[ 0:01.655] 	8193
[ 0:01.655] 
[ 0:01.655] Allocating group tables: done
[ 0:01.655] Writing inode tables: done
[ 0:01.656] Creating journal (1024 blocks): done
[ 0:01.660] Writing superblocks and filesystem accounting information: done
[ 0:01.662] 
[ 0:01.662] 
[ 0:01.663] mkdir -p "$mount_dir"
[ 0:01.663] #lvconvert-raid-reshape-stripes-load-reload.sh:54+ mkdir -p mnt
[ 0:01.663] mount "$DM_DEV_DIR/$vg/$lv1" "$mount_dir"
[ 0:01.666] #lvconvert-raid-reshape-stripes-load-reload.sh:55+ mount /tmp/LVMTEST500118.AxR1K9qRUi/dev/LVMTEST500118vg/LV1 mnt
[ 0:01.666] 
[ 0:01.679] echo 3 >/proc/sys/vm/drop_caches
[ 0:01.679] 6,17040,29997074848,-;EXT4-fs (dm-41): mounted filesystem 84c2201e-4589-48a8-ba44-019d481366f2 r/w with ordered data mode. Quota mode: none.
[ 0:01.679] #lvconvert-raid-reshape-stripes-load-reload.sh:57+ echo 3
[ 0:01.679] # FIXME: This is filling up ram disk. Use sane amount of data please! Rate limit the data written!
[ 0:01.709] dd if=/dev/urandom of="$mount_dir/random" bs=1M count=4 conv=fdatasync
[ 0:01.709] 6,17041,29997106145,-;bash (500118): drop_caches: 3
[ 0:01.709] #lvconvert-raid-reshape-stripes-load-reload.sh:59+ dd if=/dev/urandom of=mnt/random bs=1M count=4 conv=fdatasync
[ 0:01.709] 4+0 records in
[ 0:02.154] 4+0 records out
[ 0:02.154] 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0978546 s, 42.9 MB/s
[ 0:02.154] checksum_ "$mount_dir/random" >MD5
[ 0:02.173] #lvconvert-raid-reshape-stripes-load-reload.sh:60+ checksum_ mnt/random
[ 0:02.173] #lvconvert-raid-reshape-stripes-load-reload.sh:34+ md5sum mnt/random
[ 0:02.186] #lvconvert-raid-reshape-stripes-load-reload.sh:34+ cut -f1 '-d '
[ 0:02.186] 
[ 0:02.296] # FIXME: wait_for_sync - is this really testing anything under load?
[ 0:02.296] aux wait_for_sync $vg $lv1
[ 0:02.296] #lvconvert-raid-reshape-stripes-load-reload.sh:63+ aux wait_for_sync LVMTEST500118vg LV1
[ 0:02.296] LVMTEST500118vg/LV1 (raid5_ls) is in-sync     0 20480 raid raid5_ls 11 AAAAAAAAAAA 2048/2048 idle 0 0 -
[ 0:02.487] aux delay_dev "$dev2" 0 200
[ 0:02.488] #lvconvert-raid-reshape-stripes-load-reload.sh:64+ aux delay_dev /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv2 0 200
[ 0:02.488] 
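# Note: "aux delay_dev $dev2 0 200" adds a 0 ms read / 200 ms write delay to
# the second PV, so the reshape started below stays in flight long enough to
# race with the table reloads. Roughly what the helper does (a sketch, not
# the exact implementation): the PV's linear table "0 65536 linear 1:0 67584"
# (see the DMTABLE dump below) is swapped for a dm "delay" target over the
# same backing extent:
#
#	dmsetup load LVMTEST500118pv2 --table "0 65536 delay 1:0 67584 0 1:0 67584 200"
#	dmsetup suspend LVMTEST500118pv2 && dmsetup resume LVMTEST500118pv2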
[ 0:02.803] # Reshape it to 15 data stripes
[ 0:02.803] lvconvert --yes --stripes 15 $vg/$lv1
[ 0:02.803] #lvconvert-raid-reshape-stripes-load-reload.sh:67+ lvconvert --yes --stripes 15 LVMTEST500118vg/LV1
[ 0:02.803]   Using default stripesize 64.00 KiB.
[ 0:03.682]   WARNING: Adding stripes to active and open logical volume LVMTEST500118vg/LV1 will grow it from 10 to 15 extents!
[ 0:03.683]   Run "lvresize -l10 LVMTEST500118vg/LV1" to shrink it or use the additional capacity.
[ 0:03.683]   Logical volume LVMTEST500118vg/LV1 successfully converted.
[ 0:11.171] 6,17042,30000258189,-;device-mapper: raid: Device 11 specified for rebuild; clearing superblock
[ 0:11.171] 6,17043,30000258720,-;device-mapper: raid: Device 12 specified for rebuild; clearing superblock
[ 0:11.171] 6,17044,30000259148,-;device-mapper: raid: Device 13 specified for rebuild; clearing superblock
[ 0:11.171] 6,17045,30000259613,-;device-mapper: raid: Device 14 specified for rebuild; clearing superblock
[ 0:11.171] 6,17046,30000260025,-;device-mapper: raid: Device 15 specified for rebuild; clearing superblock
[ 0:11.171] 6,17047,30000306430,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:11.171] 6,17048,30000307150,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:11.171] 6,17049,30000308006,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:11.171] 6,17050,30000308757,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:11.171] 6,17051,30000309488,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:11.171] 6,17052,30000310220,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:11.171] 6,17053,30000310955,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:11.171] 6,17054,30000311677,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:11.171] 6,17055,30000312395,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:11.171] 6,17056,30000313126,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:11.171] 6,17057,30000313850,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:11.171] 6,17058,30000316143,-;md/raid:mdX: raid level 5 active with 11 out of 11 devices, algorithm 2
[ 0:11.171] 4,17059,30000317235,-;mdX: bitmap file is out of date (20 < 21) -- forcing full recovery
[ 0:11.171] 4,17060,30001740496,-;mdX: bitmap file is out of date, doing full recovery
[ 0:11.171] 6,17061,30001948586,-;dm-41: detected capacity change from 30720 to 20480
[ 0:11.171] 6,17062,30002817703,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:11.171] 6,17063,30002818545,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:11.171] 6,17064,30002819394,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:11.171] 6,17065,30002820460,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:11.171] 6,17066,30002821197,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:11.171] 6,17067,30002821937,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:11.172] 6,17068,30002822700,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:11.172] 6,17069,30002823444,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:11.172] 6,17070,30002824167,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:11.172] 6,17071,30002824913,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:11.172] 6,17072,30002825619,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:11.172] 6,17073,30002827836,-;md/raid:mdX: raid level 5 active with 11 out of 11 devices, algorithm 2
[ 0:11.172] 6,17074,30004124627,-;dm-41: detected capacity change from 30720 to 20480
[ 0:11.172] 6,17075,30005415653,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:11.172] 6,17076,30005416418,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:11.172] 6,17077,30005417338,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:11.172] 6,17078,30005418354,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:11.172] 6,17079,30005419053,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:11.172] 6,17080,30005419743,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:11.172] 6,17081,30005420515,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:11.172] 6,17082,30005421263,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:11.172] 6,17083,30005421981,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:11.172] 6,17084,30005422720,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:11.172] 6,17085,30005423413,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:11.172] 6,17086,30005424138,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:11.172] 6,17087,30005424847,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:11.172] 6,17088,30005425731,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:11.172] 6,17089,30005426522,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:11.172] 6,17090,30005427252,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:11.172] 6,17091,30005430243,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:11.172] 3,17092,30006360584,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:11.172] 3,17093,30006361406,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:11.172] 3,17094,30006362352,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:11.172] 3,17095,30006363459,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:11.172]   WARNING: This metadata update is NOT backed up.
[ 0:11.178] check lv_first_seg_field $vg/$lv1 segtype "raid5_ls"
[ 0:11.199] #lvconvert-raid-reshape-stripes-load-reload.sh:68+ check lv_first_seg_field LVMTEST500118vg/LV1 segtype raid5_ls
[ 0:11.199] check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
[ 0:11.304] #lvconvert-raid-reshape-stripes-load-reload.sh:69+ check lv_first_seg_field LVMTEST500118vg/LV1 stripesize 64.00k
[ 0:11.304] check lv_first_seg_field $vg/$lv1 data_stripes 15
[ 0:11.378] 6,17096,30006769084,-;md: reshape of RAID array mdX
[ 0:11.378] #lvconvert-raid-reshape-stripes-load-reload.sh:70+ check lv_first_seg_field LVMTEST500118vg/LV1 data_stripes 15
[ 0:11.378] check lv_first_seg_field $vg/$lv1 stripes 16
[ 0:11.456] #lvconvert-raid-reshape-stripes-load-reload.sh:71+ check lv_first_seg_field LVMTEST500118vg/LV1 stripes 16
[ 0:11.456] 
[ 0:11.543] # Reload table during reshape to test for data corruption
[ 0:11.543] case "$(uname -r)" in
[ 0:11.543]   5.[89]*|5.1[012].*|3.10.0-862*|4.18.0-*.el8*)
[ 0:11.543] 	should not echo "Skipping table reload test on unfixed kernel!!!" ;;
[ 0:11.543]   *)
[ 0:11.543] for i in {0..5}
[ 0:11.543] do
[ 0:11.543] 	dmsetup table $vg-$lv1|dmsetup load $vg-$lv1
[ 0:11.543] 	dmsetup suspend --noflush $vg-$lv1
[ 0:11.543] 	dmsetup resume $vg-$lv1
[ 0:11.543] 	sleep .5
[ 0:11.543] done
[ 0:11.543] 
[ 0:11.543] esac
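# Note: this loop is the heart of the reproducer. The same commands,
# annotated ($vg-$lv1 expands to LVMTEST500118vg-LV1):
#
#	dmsetup table $vg-$lv1 | dmsetup load $vg-$lv1	# stage an identical inactive table
#	dmsetup suspend --noflush $vg-$lv1		# quiesce without flushing queued I/O
#	dmsetup resume $vg-$lv1				# make the staged table live
#	sleep .5					# let the reshape make some progress
#
# Each iteration interrupts and restarts the in-kernel reshape, which is the
# "md: mdX: reshape interrupted" / "md: reshape of RAID array mdX" cycle
# visible in the kernel messages below.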
[ 0:11.543] #lvconvert-raid-reshape-stripes-load-reload.sh:74+ case "$(uname -r)" in
[ 0:11.544] ##lvconvert-raid-reshape-stripes-load-reload.sh:74+ uname -r
[ 0:11.544] #lvconvert-raid-reshape-stripes-load-reload.sh:78+ for i in {0..5}
[ 0:11.567] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup table LVMTEST500118vg-LV1
[ 0:11.568] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup load LVMTEST500118vg-LV1
[ 0:11.568] #lvconvert-raid-reshape-stripes-load-reload.sh:81+ dmsetup suspend --noflush LVMTEST500118vg-LV1
[ 0:11.603] 6,17097,30006984799,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:11.603] 6,17098,30006985564,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:11.603] 6,17099,30006986463,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:11.603] 6,17100,30006987501,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:11.603] 6,17101,30006988250,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:11.604] 6,17102,30006988988,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:11.604] 6,17103,30006989747,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:11.604] 6,17104,30006990481,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:11.604] 6,17105,30006991229,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:11.604] 6,17106,30006991977,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:11.604] 6,17107,30006992711,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:11.604] 6,17108,30006993432,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:11.604] 6,17109,30006994192,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:11.604] 6,17110,30006994914,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:11.604] 6,17111,30006995670,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:11.604] 6,17112,30006996371,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:11.604] 6,17113,30006999783,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:11.604] #lvconvert-raid-reshape-stripes-load-reload.sh:82+ dmsetup resume LVMTEST500118vg-LV1
[ 0:13.909] 6,17114,30007428976,-;md: mdX: reshape interrupted.
[ 0:13.909] #lvconvert-raid-reshape-stripes-load-reload.sh:83+ sleep .5
[ 0:14.336] 3,17115,30009540918,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:14.336] 3,17116,30009541758,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:14.336] 3,17117,30009542650,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:14.336] 3,17118,30009543492,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:14.336] #lvconvert-raid-reshape-stripes-load-reload.sh:78+ for i in {0..5}
[ 0:14.839] 6,17119,30009932832,-;md: reshape of RAID array mdX
[ 0:14.839] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup table LVMTEST500118vg-LV1
[ 0:14.840] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup load LVMTEST500118vg-LV1
[ 0:14.840] #lvconvert-raid-reshape-stripes-load-reload.sh:81+ dmsetup suspend --noflush LVMTEST500118vg-LV1
[ 0:14.884] 6,17120,30010265679,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:14.884] 6,17121,30010266406,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:14.884] 6,17122,30010267319,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:14.884] 6,17123,30010268077,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:14.884] 6,17124,30010268805,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:14.884] 6,17125,30010269500,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:14.884] 6,17126,30010270233,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:14.884] 6,17127,30010270947,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:14.884] 6,17128,30010271662,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:14.884] 6,17129,30010272523,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:14.884] 6,17130,30010273444,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:14.884] 6,17131,30010274166,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:14.884] 6,17132,30010274922,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:14.884] 6,17133,30010275714,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:14.884] 6,17134,30010276417,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:14.884] 6,17135,30010277153,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:14.884] 6,17136,30010280355,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:14.884] #lvconvert-raid-reshape-stripes-load-reload.sh:82+ dmsetup resume LVMTEST500118vg-LV1
[ 0:16.637] 6,17137,30010361455,-;md: mdX: reshape interrupted.
[ 0:16.637] #lvconvert-raid-reshape-stripes-load-reload.sh:83+ sleep .5
[ 0:17.064] 3,17138,30012257019,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:17.064] 3,17139,30012257974,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:17.064] 3,17140,30012258833,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:17.064] 3,17141,30012259989,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:17.064] #lvconvert-raid-reshape-stripes-load-reload.sh:78+ for i in {0..5}
[ 0:17.567] 6,17142,30012660971,-;md: reshape of RAID array mdX
[ 0:17.567] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup table LVMTEST500118vg-LV1
[ 0:17.568] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup load LVMTEST500118vg-LV1
[ 0:17.568] #lvconvert-raid-reshape-stripes-load-reload.sh:81+ dmsetup suspend --noflush LVMTEST500118vg-LV1
[ 0:17.596] 6,17143,30012978632,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:17.596] 6,17144,30012979378,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:17.596] 6,17145,30012980100,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:17.596] 6,17146,30012980849,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:17.596] 6,17147,30012981544,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:17.596] 6,17148,30012982294,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:17.596] 6,17149,30012983120,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:17.596] 6,17150,30012983878,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:17.596] 6,17151,30012984576,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:17.596] 6,17152,30012985334,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:17.596] 6,17153,30012986029,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:17.596] 6,17154,30012986797,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:17.596] 6,17155,30012987500,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:17.596] 6,17156,30012988256,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:17.596] 6,17157,30012988987,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:17.596] 6,17158,30012989698,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:17.596] 6,17159,30012992381,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:17.596] #lvconvert-raid-reshape-stripes-load-reload.sh:82+ dmsetup resume LVMTEST500118vg-LV1
[ 0:19.993] 6,17160,30013501310,-;md: mdX: reshape interrupted.
[ 0:19.993] #lvconvert-raid-reshape-stripes-load-reload.sh:83+ sleep .5
[ 0:20.416] 3,17161,30015613049,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:20.416] 3,17162,30015613905,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:20.416] 3,17163,30015614847,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:20.416] 3,17164,30015615632,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:20.416] #lvconvert-raid-reshape-stripes-load-reload.sh:78+ for i in {0..5}
[ 0:20.919] 6,17165,30016013020,-;md: reshape of RAID array mdX
[ 0:20.919] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup table LVMTEST500118vg-LV1
[ 0:20.920] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup load LVMTEST500118vg-LV1
[ 0:20.920] #lvconvert-raid-reshape-stripes-load-reload.sh:81+ dmsetup suspend --noflush LVMTEST500118vg-LV1
[ 0:20.961] 6,17166,30016342829,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:20.961] 6,17167,30016343551,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:20.961] 6,17168,30016344518,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:20.961] 6,17169,30016345498,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:20.961] 6,17170,30016346194,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:20.961] 6,17171,30016346879,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:20.961] 6,17172,30016347589,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:20.961] 6,17173,30016348355,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:20.961] 6,17174,30016349071,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:20.961] 6,17175,30016349861,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:20.961] 6,17176,30016350608,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:20.961] 6,17177,30016351359,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:20.961] 6,17178,30016352070,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:20.961] 6,17179,30016352776,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:20.961] 6,17180,30016353487,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:20.961] 6,17181,30016354225,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:20.961] 6,17182,30016357360,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:20.961] #lvconvert-raid-reshape-stripes-load-reload.sh:82+ dmsetup resume LVMTEST500118vg-LV1
[ 0:23.149] 6,17183,30016665872,-;md: mdX: reshape interrupted.
[ 0:23.149] #lvconvert-raid-reshape-stripes-load-reload.sh:83+ sleep .5
[ 0:23.576] 3,17184,30018772956,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:23.576] 3,17185,30018773803,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:23.576] 3,17186,30018774727,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:23.576] 3,17187,30018775862,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:23.576] #lvconvert-raid-reshape-stripes-load-reload.sh:78+ for i in {0..5}
[ 0:24.079] 6,17188,30019173004,-;md: reshape of RAID array mdX
[ 0:24.079] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup table LVMTEST500118vg-LV1
[ 0:24.080] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup load LVMTEST500118vg-LV1
[ 0:24.080] #lvconvert-raid-reshape-stripes-load-reload.sh:81+ dmsetup suspend --noflush LVMTEST500118vg-LV1
[ 0:24.120] 6,17189,30019501847,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:24.120] 6,17190,30019502690,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:24.120] 6,17191,30019503725,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:24.120] 6,17192,30019504500,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:24.120] 6,17193,30019505259,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:24.120] 6,17194,30019505988,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:24.120] 6,17195,30019506721,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:24.120] 6,17196,30019507436,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:24.120] 6,17197,30019508142,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:24.120] 6,17198,30019508866,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:24.120] 6,17199,30019509566,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:24.120] 6,17200,30019510306,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:24.120] 6,17201,30019511022,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:24.120] 6,17202,30019511751,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:24.120] 6,17203,30019512508,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:24.120] 6,17204,30019513221,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:24.120] 6,17205,30019516335,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:24.120] #lvconvert-raid-reshape-stripes-load-reload.sh:82+ dmsetup resume LVMTEST500118vg-LV1
[ 0:25.881] 6,17206,30019601478,-;md: mdX: reshape interrupted.
[ 0:25.881] #lvconvert-raid-reshape-stripes-load-reload.sh:83+ sleep .5
[ 0:26.304] 3,17207,30021496938,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:26.304] 3,17208,30021497745,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:26.304] 3,17209,30021498689,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:26.304] 3,17210,30021499814,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:26.304] #lvconvert-raid-reshape-stripes-load-reload.sh:78+ for i in {0..5}
[ 0:26.806] 6,17211,30021901052,-;md: reshape of RAID array mdX
[ 0:26.806] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup table LVMTEST500118vg-LV1
[ 0:26.807] #lvconvert-raid-reshape-stripes-load-reload.sh:80+ dmsetup load LVMTEST500118vg-LV1
[ 0:26.808] #lvconvert-raid-reshape-stripes-load-reload.sh:81+ dmsetup suspend --noflush LVMTEST500118vg-LV1
[ 0:26.840] 6,17212,30022221732,-;md/raid:mdX: device dm-20 operational as raid disk 0
[ 0:26.840] 6,17213,30022222504,-;md/raid:mdX: device dm-22 operational as raid disk 1
[ 0:26.840] 6,17214,30022223360,-;md/raid:mdX: device dm-24 operational as raid disk 2
[ 0:26.840] 6,17215,30022224100,-;md/raid:mdX: device dm-26 operational as raid disk 3
[ 0:26.840] 6,17216,30022224812,-;md/raid:mdX: device dm-28 operational as raid disk 4
[ 0:26.840] 6,17217,30022225555,-;md/raid:mdX: device dm-30 operational as raid disk 5
[ 0:26.840] 6,17218,30022226304,-;md/raid:mdX: device dm-32 operational as raid disk 6
[ 0:26.840] 6,17219,30022227114,-;md/raid:mdX: device dm-34 operational as raid disk 7
[ 0:26.840] 6,17220,30022227891,-;md/raid:mdX: device dm-36 operational as raid disk 8
[ 0:26.840] 6,17221,30022228595,-;md/raid:mdX: device dm-38 operational as raid disk 9
[ 0:26.840] 6,17222,30022229350,-;md/raid:mdX: device dm-40 operational as raid disk 10
[ 0:26.840] 6,17223,30022230121,-;md/raid:mdX: device dm-43 operational as raid disk 11
[ 0:26.840] 6,17224,30022230846,-;md/raid:mdX: device dm-45 operational as raid disk 12
[ 0:26.840] 6,17225,30022231689,-;md/raid:mdX: device dm-47 operational as raid disk 13
[ 0:26.840] 6,17226,30022232713,-;md/raid:mdX: device dm-49 operational as raid disk 14
[ 0:26.840] 6,17227,30022233457,-;md/raid:mdX: device dm-51 operational as raid disk 15
[ 0:26.840] 6,17228,30022236365,-;md/raid:mdX: raid level 5 active with 16 out of 16 devices, algorithm 2
[ 0:26.840] #lvconvert-raid-reshape-stripes-load-reload.sh:82+ dmsetup resume LVMTEST500118vg-LV1
[ 0:29.221] 6,17229,30022534164,-;md: mdX: reshape done.
[ 0:29.221] 6,17230,30023572945,-;dm-41: detected capacity change from 30720 to 20480
[ 0:29.221] #lvconvert-raid-reshape-stripes-load-reload.sh:83+ sleep .5
[ 0:29.648] 3,17231,30024840682,-;Buffer I/O error on dev dm-41, logical block 15296, async page read
[ 0:29.648] 3,17232,30024841543,-;Buffer I/O error on dev dm-41, logical block 15297, async page read
[ 0:29.648] 3,17233,30024842483,-;Buffer I/O error on dev dm-41, logical block 15298, async page read
[ 0:29.648] 3,17234,30024843308,-;Buffer I/O error on dev dm-41, logical block 15299, async page read
[ 0:29.648] 
[ 0:30.150] aux delay_dev "$dev2" 0
[ 0:30.150] 6,17235,30025245116,-;md: reshape of RAID array mdX
[ 0:30.150] #lvconvert-raid-reshape-stripes-load-reload.sh:88+ aux delay_dev /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118pv2 0
[ 0:30.151] 
[ 0:30.194] kill -9 %% || true
[ 0:30.194] #lvconvert-raid-reshape-stripes-load-reload.sh:90+ kill -9 %%
[ 0:30.194] /home/lvm2/test/shell/lvconvert-raid-reshape-stripes-load-reload.sh: line 90: kill: %%: no such job
[ 0:30.194] #lvconvert-raid-reshape-stripes-load-reload.sh:90+ true
[ 0:30.194] wait
[ 0:30.194] #lvconvert-raid-reshape-stripes-load-reload.sh:91+ wait
[ 0:30.194] 
[ 0:30.194] checksum_ "$mount_dir/random" >MD5_new
[ 0:30.194] #lvconvert-raid-reshape-stripes-load-reload.sh:93+ checksum_ mnt/random
[ 0:30.194] #lvconvert-raid-reshape-stripes-load-reload.sh:34+ md5sum mnt/random
[ 0:30.195] #lvconvert-raid-reshape-stripes-load-reload.sh:34+ cut -f1 '-d '
[ 0:30.196] 
[ 0:30.220] umount "$mount_dir"
[ 0:30.220] #lvconvert-raid-reshape-stripes-load-reload.sh:95+ umount mnt
[ 0:30.220] 
[ 0:30.270] fsck -fn "$DM_DEV_DIR/$vg/$lv1"
[ 0:30.270] 6,17236,30025620079,-;md: mdX: reshape done.
[ 0:30.270] 6,17237,30025648112,-;dm-41: detected capacity change from 30720 to 20480
[ 0:30.270] 6,17238,30025666627,-;EXT4-fs (dm-41): unmounting filesystem 84c2201e-4589-48a8-ba44-019d481366f2.
[ 0:30.270] #lvconvert-raid-reshape-stripes-load-reload.sh:97+ fsck -fn /tmp/LVMTEST500118.AxR1K9qRUi/dev/LVMTEST500118vg/LV1
[ 0:30.271] fsck from util-linux 2.37.4
[ 0:30.285] e2fsck 1.46.5 (30-Dec-2021)
[ 0:30.376] Pass 1: Checking inodes, blocks, and sizes
[ 0:30.378] Pass 2: Checking directory structure
[ 0:30.379] Entry 'random' in / (2) references inode 12 found in group 0's unused inodes area.
[ 0:30.379] Fix? no
[ 0:30.379] 
[ 0:30.379] Entry 'random' in / (2) has deleted/unused inode 12.  Clear? no
[ 0:30.379] 
[ 0:30.379] Pass 3: Checking directory connectivity
[ 0:30.379] Pass 4: Checking reference counts
[ 0:30.379] Pass 5: Checking group summary information
[ 0:30.380] Block bitmap differences:  -(1920--1935) -(2560--5631) -(8289--9280) -(10209--10224)
[ 0:30.381] Fix? no
[ 0:30.381] 
[ 0:30.381] Free blocks count wrong for group #0 (6429, counted=3341).
[ 0:30.381] Fix? no
[ 0:30.381] 
[ 0:30.381] Free blocks count wrong for group #1 (1966, counted=958).
[ 0:30.381] Fix? no
[ 0:30.381] 
[ 0:30.381] Inode bitmap differences:  -12
[ 0:30.381] Fix? no
[ 0:30.381] 
[ 0:30.381] Free inodes count wrong for group #0 (1269, counted=1268).
[ 0:30.381] Fix? no
[ 0:30.381] 
[ 0:30.381] Inode bitmap differences: Group 0 inode bitmap does not match checksum.
[ 0:30.381] IGNORED.
[ 0:30.381] Block bitmap differences: Group 0 block bitmap does not match checksum.
[ 0:30.381] IGNORED.
[ 0:30.381] Group 1 block bitmap does not match checksum.
[ 0:30.381] IGNORED.
[ 0:30.381] 
[ 0:30.381] /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118vg-LV1: ********** WARNING: Filesystem still has errors **********
[ 0:30.381] 
[ 0:30.381] /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118vg-LV1: 12/2560 files (0.0% non-contiguous), 5941/10240 blocks
[ 0:30.381] set +vx; STACKTRACE; set -vx
[ 0:30.382] ##lvconvert-raid-reshape-stripes-load-reload.sh:97+ set +vx
[ 0:30.383] ## - /home/lvm2/test/shell/lvconvert-raid-reshape-stripes-load-reload.sh:97
[ 0:30.383] ## 1 STACKTRACE() called from /home/lvm2/test/shell/lvconvert-raid-reshape-stripes-load-reload.sh:97
[ 0:30.383] <======== Info ========>
[ 0:30.510] ## DMINFO:   Name                          Maj Min Stat Open Targ Event  UUID                                                                
[ 0:30.689] ## DMINFO:   LVMTEST500118pv1              254   3 L--w    2    1      0 TEST-LVMTEST500118pv1                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv10             254  12 L--w    2    1      0 TEST-LVMTEST500118pv10                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv11             254  13 L--w    2    1      0 TEST-LVMTEST500118pv11                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv12             254  14 L--w    2    1      0 TEST-LVMTEST500118pv12                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv13             254  15 L--w    2    1      0 TEST-LVMTEST500118pv13                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv14             254  16 L--w    2    1      0 TEST-LVMTEST500118pv14                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv15             254  17 L--w    2    1      0 TEST-LVMTEST500118pv15                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv16             254  18 L--w    2    1      0 TEST-LVMTEST500118pv16                                              
[ 0:30.689] ## DMINFO:   LVMTEST500118pv2              254   4 L--w    2    1      0 TEST-LVMTEST500118pv2                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv3              254   5 L--w    2    1      0 TEST-LVMTEST500118pv3                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv4              254   6 L--w    2    1      0 TEST-LVMTEST500118pv4                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv5              254   7 L--w    2    1      0 TEST-LVMTEST500118pv5                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv6              254   8 L--w    2    1      0 TEST-LVMTEST500118pv6                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv7              254   9 L--w    2    1      0 TEST-LVMTEST500118pv7                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv8              254  10 L--w    2    1      0 TEST-LVMTEST500118pv8                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118pv9              254  11 L--w    2    1      0 TEST-LVMTEST500118pv9                                               
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1           254  41 L--w    0    1     17 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtAkuGMPeDwwMOqVi8hTIbxgptEzZsufbB
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_0  254  20 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtpD6WQF1CobU3tkwiFxBT1XBcHwerULBu
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_1  254  22 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtqnoSSKda19GKA7WsZG1I9QA04OBf1B0y
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_10 254  40 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtTKg0whlxOIMgMzsuMlkxJ3KSb8XzSEWi
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_11 254  43 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt70tvr9yqHxN2GWU7yxH3YPL3k4xwA63I
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_12 254  45 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt8mxaG5SGE32WHeEosPk8YRzjdhgnimXj
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_13 254  47 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtYFS3S7q3tv79eaD0b9V2dvhXfFH5AzCe
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_14 254  49 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtxnwk2sgs9H0iRYfhExHQHhj8FeUZLHDK
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_15 254  51 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtvULsrV4VKIQovvvPUaqdvK6zXKegCBvX
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_2  254  24 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtKj25JNrKV6SBVwMzsGVHpu3vYfecbUAT
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_3  254  26 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtFC3gDcqmY50hGMoUH5DmBdP1TwoaWdfa
[ 0:30.689] ## DMINFO:   LVMTEST500118vg-LV1_rimage_4  254  28 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmty3RNUNUzcv7DcTiHTUOZMy3koAVhD0sc
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rimage_5  254  30 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt5LvpjI7Lxa8BYrtk6O3jrFwjfXKjPmuU
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rimage_6  254  32 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt6frue4ylGjtrPvQcVzeqnnJmBVHYiJLH
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rimage_7  254  34 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtchsDOVA6wUZnaa6VF0sVfj3bvta4bRru
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rimage_8  254  36 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtCWR2oKVdJld9Vfbzbyod2jo8EQjVhZpc
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rimage_9  254  38 L--w    1    2      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt8IuOMVo5Dq0bbWuODFimVzGzlTUAFEvz
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_0   254  19 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt9VLfstRkWdjLYSEeOBm2XUe6tyWxOfOw
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_1   254  21 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt1Kuhl8cjecGLPkGpXK3swZjCtmzaoffu
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_10  254  39 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt1zmjff3ohmUgjtiD5skuJVn485x6iFw1
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_11  254  42 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtYUT44N6SJiqq8cbNIXG5Kxi1XS1MHy7m
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_12  254  44 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtulWH74owXTdv6w9Lu7vy83W3oYxwff5L
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_13  254  46 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtajpWmgqoHqF78tHKorIemrhNI0NB7sj2
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_14  254  48 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtdPv93N0GAHAOf7VdiCCnSjCOmpUHtMuq
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_15  254  50 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtdCmqb463WIoq8Jf7vmvbwKinVFX0kDSA
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_2   254  23 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt1nvTeTVDkLiF009Z2hQzmZsoHQwmyOuU
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_3   254  25 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt9o0uj9s6CL0ZYEhKh2xg4Psukriei6bz
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_4   254  27 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtzzmLoHtGlsUTSMLMeIMofmzeeUmVCE5k
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_5   254  29 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtn0CwCkqlpgGgrYbEHSdKT6WDuoRwVm9y
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_6   254  31 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtjP5gBQ3S1Q3i0wDc6vWibWQpvx1p9tnR
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_7   254  33 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtqUglRG9A9AL8V81RCER6HpY5nWeL89jG
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_8   254  35 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtvHThjBcpzz3SsW1XJcxNj5ITg9qbmsw8
[ 0:30.690] ## DMINFO:   LVMTEST500118vg-LV1_rmeta_9   254  37 L--w    1    1      0 LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtHLzF2FgdKKO9QnZCW0Tm65J2iFuP0cqi
[ 0:30.690] <======== Active table ========>
[ 0:30.691] ## DMTABLE:  LVMTEST500118pv1: 0 65536 linear 1:0 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv10: 0 65536 linear 1:0 591872
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv11: 0 65536 linear 1:0 657408
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv12: 0 65536 linear 1:0 722944
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv13: 0 65536 linear 1:0 788480
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv14: 0 65536 linear 1:0 854016
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv15: 0 65536 linear 1:0 919552
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv16: 0 65536 linear 1:0 985088
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv2: 0 65536 linear 1:0 67584
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv3: 0 65536 linear 1:0 133120
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv4: 0 65536 linear 1:0 198656
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv5: 0 65536 linear 1:0 264192
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv6: 0 65536 linear 1:0 329728
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv7: 0 65536 linear 1:0 395264
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv8: 0 65536 linear 1:0 460800
[ 0:30.696] ## DMTABLE:  LVMTEST500118pv9: 0 65536 linear 1:0 526336
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1: 0 30720 raid raid5_ls 3 128 region_size 1024 16 254:19 254:20 254:21 254:22 254:23 254:24 254:25 254:26 254:27 254:28 254:29 254:30 254:31 254:32 254:33 254:34 254:35 254:36 254:37 254:38 254:39 254:40 254:42 254:43 254:44 254:45 254:46 254:47 254:48 254:49 254:50 254:51
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_0: 0 2048 linear 254:3 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_0: 2048 2048 linear 254:3 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_1: 0 2048 linear 254:4 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_1: 2048 2048 linear 254:4 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_10: 0 2048 linear 254:13 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_10: 2048 2048 linear 254:13 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_11: 0 2048 linear 254:14 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_11: 2048 2048 linear 254:14 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_12: 0 2048 linear 254:15 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_12: 2048 2048 linear 254:15 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_13: 0 2048 linear 254:16 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_13: 2048 2048 linear 254:16 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_14: 0 2048 linear 254:17 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_14: 2048 2048 linear 254:17 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_15: 0 2048 linear 254:18 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_15: 2048 2048 linear 254:18 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_2: 0 2048 linear 254:5 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_2: 2048 2048 linear 254:5 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_3: 0 2048 linear 254:6 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_3: 2048 2048 linear 254:6 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_4: 0 2048 linear 254:7 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_4: 2048 2048 linear 254:7 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_5: 0 2048 linear 254:8 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_5: 2048 2048 linear 254:8 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_6: 0 2048 linear 254:9 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_6: 2048 2048 linear 254:9 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_7: 0 2048 linear 254:10 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_7: 2048 2048 linear 254:10 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_8: 0 2048 linear 254:11 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_8: 2048 2048 linear 254:11 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_9: 0 2048 linear 254:12 6144
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rimage_9: 2048 2048 linear 254:12 4096
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_0: 0 2048 linear 254:3 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_1: 0 2048 linear 254:4 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_10: 0 2048 linear 254:13 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_11: 0 2048 linear 254:14 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_12: 0 2048 linear 254:15 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_13: 0 2048 linear 254:16 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_14: 0 2048 linear 254:17 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_15: 0 2048 linear 254:18 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_2: 0 2048 linear 254:5 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_3: 0 2048 linear 254:6 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_4: 0 2048 linear 254:7 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_5: 0 2048 linear 254:8 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_6: 0 2048 linear 254:9 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_7: 0 2048 linear 254:10 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_8: 0 2048 linear 254:11 2048
[ 0:30.696] ## DMTABLE:  LVMTEST500118vg-LV1_rmeta_9: 0 2048 linear 254:12 2048
[ 0:30.696] <======== Inactive table ========>
[ 0:30.697] ## DMITABLE: LVMTEST500118pv1: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv10: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv11: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv12: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv13: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv14: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv15: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv16: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv2: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv3: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv4: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv5: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv6: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv7: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv8: 
[ 0:30.701] ## DMITABLE: LVMTEST500118pv9: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_0: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_1: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_10: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_11: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_12: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_13: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_14: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_15: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_2: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_3: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_4: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_5: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_6: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_7: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_8: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rimage_9: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_0: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_1: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_10: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_11: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_12: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_13: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_14: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_15: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_2: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_3: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_4: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_5: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_6: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_7: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_8: 
[ 0:30.701] ## DMITABLE: LVMTEST500118vg-LV1_rmeta_9: 
[ 0:30.701] <======== Status ========>
[ 0:30.702] ## DMSTATUS: LVMTEST500118pv1: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv10: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv11: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv12: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv13: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv14: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv15: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv16: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv2: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv3: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv4: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv5: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv6: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv7: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv8: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118pv9: 0 65536 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1: 0 30720 raid raid5_ls 16 AAAAAAAAAAAAAAAA 2048/2048 idle 0 0 -
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_0: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_0: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_1: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_1: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_10: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_10: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_11: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_11: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_12: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_12: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_13: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_13: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_14: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_14: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_15: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_15: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_2: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_2: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_3: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_3: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_4: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_4: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_5: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_5: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_6: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_6: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_7: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_7: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_8: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_8: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_9: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rimage_9: 2048 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_0: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_1: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_10: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_11: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_12: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_13: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_14: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_15: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_2: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_3: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_4: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_5: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_6: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_7: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_8: 0 2048 linear 
[ 0:30.706] ## DMSTATUS: LVMTEST500118vg-LV1_rmeta_9: 0 2048 linear 
[ 0:30.706] <======== Tree ========>
[ 0:30.707] ## DMTREE:   LVMTEST500118vg-LV1 (254:41)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_15 (254:51)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv16 (254:18)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_15 (254:50)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv16 (254:18)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_14 (254:49)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv15 (254:17)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_14 (254:48)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv15 (254:17)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_13 (254:47)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv14 (254:16)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_13 (254:46)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv14 (254:16)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_12 (254:45)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv13 (254:15)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_12 (254:44)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv13 (254:15)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_11 (254:43)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv12 (254:14)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_11 (254:42)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv12 (254:14)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_10 (254:40)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv11 (254:13)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_10 (254:39)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv11 (254:13)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_9 (254:38)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv10 (254:12)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_9 (254:37)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv10 (254:12)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_8 (254:36)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv9 (254:11)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_8 (254:35)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv9 (254:11)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_7 (254:34)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv8 (254:10)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_7 (254:33)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv8 (254:10)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_6 (254:32)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv7 (254:9)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_6 (254:31)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv7 (254:9)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_5 (254:30)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv6 (254:8)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_5 (254:29)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv6 (254:8)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_4 (254:28)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv5 (254:7)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_4 (254:27)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv5 (254:7)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_3 (254:26)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv4 (254:6)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_3 (254:25)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv4 (254:6)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_2 (254:24)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv3 (254:5)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_2 (254:23)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv3 (254:5)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_1 (254:22)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv2 (254:4)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rmeta_1 (254:21)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv2 (254:4)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    |-LVMTEST500118vg-LV1_rimage_0 (254:20)
[ 0:30.711] ## DMTREE:    |  `-LVMTEST500118pv1 (254:3)
[ 0:30.711] ## DMTREE:    |     `- (1:0)
[ 0:30.711] ## DMTREE:    `-LVMTEST500118vg-LV1_rmeta_0 (254:19)
[ 0:30.711] ## DMTREE:       `-LVMTEST500118pv1 (254:3)
[ 0:30.713] ## DMTREE:          `- (1:0)
[ 0:30.713] ## DMTREE:   rhel_hp--dl380eg8--02-home (254:2)
[ 0:30.713] ## DMTREE:    `- (8:2)
[ 0:30.713] ## DMTREE:   rhel_hp--dl380eg8--02-root (254:0)
[ 0:30.713] ## DMTREE:    `- (8:2)
[ 0:30.713] ## DMTREE:   rhel_hp--dl380eg8--02-swap (254:1)
[ 0:30.713] ## DMTREE:    `- (8:2)
[ 0:30.713] <======== Recursive list of /tmp/LVMTEST500118.AxR1K9qRUi/dev ========>
[ 0:30.713] ## LS_LR:	/tmp/LVMTEST500118.AxR1K9qRUi/dev:
[ 0:30.771] ## LS_LR:	total 4
[ 0:30.771] ## LS_LR:	drwxr-xr-x. 2 root root   17 Mar  3 06:23 LVMTEST500118vg
[ 0:30.771] ## LS_LR:	drwxr-xr-x. 2 root root 4096 Mar  3 06:23 mapper
[ 0:30.771] ## LS_LR:	crw-r--r--. 1 root root 1, 3 Mar  3 06:23 testnull
[ 0:30.771] ## LS_LR:	
[ 0:30.771] ## LS_LR:	/tmp/LVMTEST500118.AxR1K9qRUi/dev/LVMTEST500118vg:
[ 0:30.771] ## LS_LR:	total 0
[ 0:30.771] ## LS_LR:	lrwxrwxrwx. 1 root root 60 Mar  3 06:23 LV1 -> /tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper/LVMTEST500118vg-LV1
[ 0:30.771] ## LS_LR:	
[ 0:30.771] ## LS_LR:	/tmp/LVMTEST500118.AxR1K9qRUi/dev/mapper:
[ 0:30.771] ## LS_LR:	total 0
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   3 Mar  3 06:23 LVMTEST500118pv1
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  12 Mar  3 06:23 LVMTEST500118pv10
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  13 Mar  3 06:23 LVMTEST500118pv11
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  14 Mar  3 06:23 LVMTEST500118pv12
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  15 Mar  3 06:23 LVMTEST500118pv13
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  16 Mar  3 06:23 LVMTEST500118pv14
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  17 Mar  3 06:23 LVMTEST500118pv15
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  18 Mar  3 06:23 LVMTEST500118pv16
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   4 Mar  3 06:23 LVMTEST500118pv2
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   5 Mar  3 06:23 LVMTEST500118pv3
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   6 Mar  3 06:23 LVMTEST500118pv4
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   7 Mar  3 06:23 LVMTEST500118pv5
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   8 Mar  3 06:23 LVMTEST500118pv6
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,   9 Mar  3 06:23 LVMTEST500118pv7
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  10 Mar  3 06:23 LVMTEST500118pv8
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  11 Mar  3 06:23 LVMTEST500118pv9
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  41 Mar  3 06:23 LVMTEST500118vg-LV1
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  20 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_0
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  22 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_1
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  40 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_10
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  43 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_11
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  45 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_12
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  47 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_13
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  49 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_14
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  51 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_15
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  24 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_2
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  26 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_3
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  28 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_4
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  30 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_5
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  32 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_6
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  34 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_7
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  36 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_8
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  38 Mar  3 06:23 LVMTEST500118vg-LV1_rimage_9
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  19 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_0
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  21 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_1
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  39 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_10
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  42 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_11
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  44 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_12
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  46 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_13
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  48 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_14
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  50 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_15
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  23 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_2
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  25 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_3
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  27 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_4
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  29 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_5
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  31 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_6
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  33 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_7
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  35 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_8
[ 0:30.771] ## LS_LR:	brw-------. 1 root root 254,  37 Mar  3 06:23 LVMTEST500118vg-LV1_rmeta_9
[ 0:30.771] ## LS_LR:	crw-------. 1 root root  10, 236 Mar  3 06:23 control
[ 0:30.771] <======== Udev DB content ========>
[ 0:30.772] ## UDEV:	P: /devices/virtual/block/dm-0
[ 0:30.943] ## UDEV:	M: dm-0
[ 0:30.943] ## UDEV:	R: 0
[ 0:30.943] ## UDEV:	U: block
[ 0:30.943] ## UDEV:	T: disk
[ 0:30.943] ## UDEV:	D: b 254:0
[ 0:30.943] ## UDEV:	N: dm-0
[ 0:30.943] ## UDEV:	L: 0
[ 0:30.943] ## UDEV:	S: disk/by-uuid/4617d6fe-d894-407c-82dd-d048e4ce4d2e
[ 0:30.943] ## UDEV:	S: rhel_hp-dl380eg8-02/root
[ 0:30.943] ## UDEV:	S: disk/by-id/dm-name-rhel_hp--dl380eg8--02-root
[ 0:30.943] ## UDEV:	S: disk/by-id/dm-uuid-LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrc6LcfxqBnX4ICmdeGDSmbNqMr7xrFpXN1
[ 0:30.943] ## UDEV:	S: mapper/rhel_hp--dl380eg8--02-root
[ 0:30.943] ## UDEV:	Q: 2
[ 0:30.943] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-0
[ 0:30.943] ## UDEV:	E: SUBSYSTEM=block
[ 0:30.943] ## UDEV:	E: DEVNAME=/dev/dm-0
[ 0:30.943] ## UDEV:	E: DEVTYPE=disk
[ 0:30.943] ## UDEV:	E: DISKSEQ=2
[ 0:30.943] ## UDEV:	E: MAJOR=254
[ 0:30.943] ## UDEV:	E: MINOR=0
[ 0:30.943] ## UDEV:	E: USEC_INITIALIZED=13273392
[ 0:30.943] ## UDEV:	E: DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1
[ 0:30.943] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:30.943] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:30.943] ## UDEV:	E: DM_ACTIVATION=1
[ 0:30.943] ## UDEV:	E: DM_NAME=rhel_hp--dl380eg8--02-root
[ 0:30.943] ## UDEV:	E: DM_UUID=LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrc6LcfxqBnX4ICmdeGDSmbNqMr7xrFpXN1
[ 0:30.943] ## UDEV:	E: DM_SUSPENDED=0
[ 0:30.943] ## UDEV:	E: DM_VG_NAME=rhel_hp-dl380eg8-02
[ 0:30.943] ## UDEV:	E: DM_LV_NAME=root
[ 0:30.943] ## UDEV:	E: ID_FS_UUID=4617d6fe-d894-407c-82dd-d048e4ce4d2e
[ 0:30.943] ## UDEV:	E: ID_FS_UUID_ENC=4617d6fe-d894-407c-82dd-d048e4ce4d2e
[ 0:30.943] ## UDEV:	E: ID_FS_SIZE=75094818816
[ 0:30.943] ## UDEV:	E: ID_FS_LASTBLOCK=18350080
[ 0:30.943] ## UDEV:	E: ID_FS_BLOCKSIZE=4096
[ 0:30.943] ## UDEV:	E: ID_FS_TYPE=xfs
[ 0:30.943] ## UDEV:	E: ID_FS_USAGE=filesystem
[ 0:30.943] ## UDEV:	E: SYSTEMD_READY=1
[ 0:30.943] ## UDEV:	E: DEVLINKS=/dev/disk/by-uuid/4617d6fe-d894-407c-82dd-d048e4ce4d2e /dev/rhel_hp-dl380eg8-02/root /dev/disk/by-id/dm-name-rhel_hp--dl380eg8--02-root /dev/disk/by-id/dm-uuid-LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrc6LcfxqBnX4ICmdeGDSmbNqMr7xrFpXN1 /dev/mapper/rhel_hp--dl380eg8--02-root
[ 0:30.943] ## UDEV:	E: TAGS=:systemd:
[ 0:30.943] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:30.943] ## UDEV:	
[ 0:30.943] ## UDEV:	P: /devices/virtual/block/dm-1
[ 0:30.943] ## UDEV:	M: dm-1
[ 0:30.943] ## UDEV:	R: 1
[ 0:30.943] ## UDEV:	U: block
[ 0:30.943] ## UDEV:	T: disk
[ 0:30.943] ## UDEV:	D: b 254:1
[ 0:30.943] ## UDEV:	N: dm-1
[ 0:30.943] ## UDEV:	L: 0
[ 0:30.943] ## UDEV:	S: disk/by-id/dm-uuid-LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrcWcL4fW2oxspbuP1xUy2h8eewTnEu8iDo
[ 0:30.943] ## UDEV:	S: rhel_hp-dl380eg8-02/swap
[ 0:30.943] ## UDEV:	S: disk/by-id/dm-name-rhel_hp--dl380eg8--02-swap
[ 0:30.943] ## UDEV:	S: mapper/rhel_hp--dl380eg8--02-swap
[ 0:30.943] ## UDEV:	S: disk/by-uuid/1e83e70a-06a7-4200-a043-be424fe52840
[ 0:30.943] ## UDEV:	Q: 3
[ 0:30.943] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-1
[ 0:30.943] ## UDEV:	E: SUBSYSTEM=block
[ 0:30.943] ## UDEV:	E: DEVNAME=/dev/dm-1
[ 0:30.943] ## UDEV:	E: DEVTYPE=disk
[ 0:30.943] ## UDEV:	E: DISKSEQ=3
[ 0:30.943] ## UDEV:	E: MAJOR=254
[ 0:30.943] ## UDEV:	E: MINOR=1
[ 0:30.943] ## UDEV:	E: USEC_INITIALIZED=13441256
[ 0:30.943] ## UDEV:	E: DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1
[ 0:30.943] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:30.943] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:30.943] ## UDEV:	E: DM_ACTIVATION=1
[ 0:30.943] ## UDEV:	E: DM_NAME=rhel_hp--dl380eg8--02-swap
[ 0:30.943] ## UDEV:	E: DM_UUID=LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrcWcL4fW2oxspbuP1xUy2h8eewTnEu8iDo
[ 0:30.943] ## UDEV:	E: DM_SUSPENDED=0
[ 0:30.943] ## UDEV:	E: DM_VG_NAME=rhel_hp-dl380eg8-02
[ 0:30.943] ## UDEV:	E: DM_LV_NAME=swap
[ 0:30.943] ## UDEV:	E: ID_FS_UUID=1e83e70a-06a7-4200-a043-be424fe52840
[ 0:30.943] ## UDEV:	E: ID_FS_UUID_ENC=1e83e70a-06a7-4200-a043-be424fe52840
[ 0:30.943] ## UDEV:	E: ID_FS_VERSION=1
[ 0:30.943] ## UDEV:	E: ID_FS_TYPE=swap
[ 0:30.943] ## UDEV:	E: ID_FS_USAGE=other
[ 0:30.943] ## UDEV:	E: SYSTEMD_READY=1
[ 0:30.943] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-uuid-LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrcWcL4fW2oxspbuP1xUy2h8eewTnEu8iDo /dev/rhel_hp-dl380eg8-02/swap /dev/disk/by-id/dm-name-rhel_hp--dl380eg8--02-swap /dev/mapper/rhel_hp--dl380eg8--02-swap /dev/disk/by-uuid/1e83e70a-06a7-4200-a043-be424fe52840
[ 0:30.943] ## UDEV:	E: TAGS=:systemd:
[ 0:30.943] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:30.943] ## UDEV:	
[ 0:30.943] ## UDEV:	P: /devices/virtual/block/dm-10
[ 0:30.943] ## UDEV:	M: dm-10
[ 0:30.943] ## UDEV:	R: 10
[ 0:30.943] ## UDEV:	U: block
[ 0:30.943] ## UDEV:	T: disk
[ 0:30.943] ## UDEV:	D: b 254:10
[ 0:30.943] ## UDEV:	N: dm-10
[ 0:30.943] ## UDEV:	L: 0
[ 0:30.943] ## UDEV:	S: mapper/LVMTEST500118pv8
[ 0:30.943] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv8
[ 0:30.943] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv8
[ 0:30.943] ## UDEV:	Q: 28776
[ 0:30.943] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-10
[ 0:30.943] ## UDEV:	E: SUBSYSTEM=block
[ 0:30.943] ## UDEV:	E: DEVNAME=/dev/dm-10
[ 0:30.943] ## UDEV:	E: DEVTYPE=disk
[ 0:30.943] ## UDEV:	E: DISKSEQ=28776
[ 0:30.943] ## UDEV:	E: MAJOR=254
[ 0:30.943] ## UDEV:	E: MINOR=10
[ 0:30.943] ## UDEV:	E: USEC_INITIALIZED=29990200302
[ 0:30.943] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:30.943] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.015] ## UDEV:	E: DM_NAME=LVMTEST500118pv8
[ 0:31.015] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv8
[ 0:31.015] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.015] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.015] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.015] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118pv8 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv8 /dev/disk/by-id/dm-name-LVMTEST500118pv8
[ 0:31.015] ## UDEV:	E: TAGS=:systemd:
[ 0:31.015] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.015] ## UDEV:	
[ 0:31.015] ## UDEV:	P: /devices/virtual/block/dm-11
[ 0:31.015] ## UDEV:	M: dm-11
[ 0:31.015] ## UDEV:	R: 11
[ 0:31.015] ## UDEV:	U: block
[ 0:31.015] ## UDEV:	T: disk
[ 0:31.015] ## UDEV:	D: b 254:11
[ 0:31.015] ## UDEV:	N: dm-11
[ 0:31.015] ## UDEV:	L: 0
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv9
[ 0:31.015] ## UDEV:	S: mapper/LVMTEST500118pv9
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv9
[ 0:31.015] ## UDEV:	Q: 28777
[ 0:31.015] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-11
[ 0:31.015] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.015] ## UDEV:	E: DEVNAME=/dev/dm-11
[ 0:31.015] ## UDEV:	E: DEVTYPE=disk
[ 0:31.015] ## UDEV:	E: DISKSEQ=28777
[ 0:31.015] ## UDEV:	E: MAJOR=254
[ 0:31.015] ## UDEV:	E: MINOR=11
[ 0:31.015] ## UDEV:	E: USEC_INITIALIZED=29990202104
[ 0:31.015] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.015] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.015] ## UDEV:	E: DM_NAME=LVMTEST500118pv9
[ 0:31.015] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv9
[ 0:31.015] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.015] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.015] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.015] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv9 /dev/mapper/LVMTEST500118pv9 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv9
[ 0:31.015] ## UDEV:	E: TAGS=:systemd:
[ 0:31.015] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.015] ## UDEV:	
[ 0:31.015] ## UDEV:	P: /devices/virtual/block/dm-12
[ 0:31.015] ## UDEV:	M: dm-12
[ 0:31.015] ## UDEV:	R: 12
[ 0:31.015] ## UDEV:	U: block
[ 0:31.015] ## UDEV:	T: disk
[ 0:31.015] ## UDEV:	D: b 254:12
[ 0:31.015] ## UDEV:	N: dm-12
[ 0:31.015] ## UDEV:	L: 0
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv10
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv10
[ 0:31.015] ## UDEV:	S: mapper/LVMTEST500118pv10
[ 0:31.015] ## UDEV:	Q: 28778
[ 0:31.015] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-12
[ 0:31.015] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.015] ## UDEV:	E: DEVNAME=/dev/dm-12
[ 0:31.015] ## UDEV:	E: DEVTYPE=disk
[ 0:31.015] ## UDEV:	E: DISKSEQ=28778
[ 0:31.015] ## UDEV:	E: MAJOR=254
[ 0:31.015] ## UDEV:	E: MINOR=12
[ 0:31.015] ## UDEV:	E: USEC_INITIALIZED=29990203194
[ 0:31.015] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.015] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.015] ## UDEV:	E: DM_NAME=LVMTEST500118pv10
[ 0:31.015] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv10
[ 0:31.015] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.015] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.015] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.015] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv10 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv10 /dev/mapper/LVMTEST500118pv10
[ 0:31.015] ## UDEV:	E: TAGS=:systemd:
[ 0:31.015] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.015] ## UDEV:	
[ 0:31.015] ## UDEV:	P: /devices/virtual/block/dm-13
[ 0:31.015] ## UDEV:	M: dm-13
[ 0:31.015] ## UDEV:	R: 13
[ 0:31.015] ## UDEV:	U: block
[ 0:31.015] ## UDEV:	T: disk
[ 0:31.015] ## UDEV:	D: b 254:13
[ 0:31.015] ## UDEV:	N: dm-13
[ 0:31.015] ## UDEV:	L: 0
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv11
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv11
[ 0:31.015] ## UDEV:	S: mapper/LVMTEST500118pv11
[ 0:31.015] ## UDEV:	Q: 28779
[ 0:31.015] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-13
[ 0:31.015] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.015] ## UDEV:	E: DEVNAME=/dev/dm-13
[ 0:31.015] ## UDEV:	E: DEVTYPE=disk
[ 0:31.015] ## UDEV:	E: DISKSEQ=28779
[ 0:31.015] ## UDEV:	E: MAJOR=254
[ 0:31.015] ## UDEV:	E: MINOR=13
[ 0:31.015] ## UDEV:	E: USEC_INITIALIZED=29990204380
[ 0:31.015] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.015] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.015] ## UDEV:	E: DM_NAME=LVMTEST500118pv11
[ 0:31.015] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv11
[ 0:31.015] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.015] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.015] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.015] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv11 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv11 /dev/mapper/LVMTEST500118pv11
[ 0:31.015] ## UDEV:	E: TAGS=:systemd:
[ 0:31.015] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.015] ## UDEV:	
[ 0:31.015] ## UDEV:	P: /devices/virtual/block/dm-14
[ 0:31.015] ## UDEV:	M: dm-14
[ 0:31.015] ## UDEV:	R: 14
[ 0:31.015] ## UDEV:	U: block
[ 0:31.015] ## UDEV:	T: disk
[ 0:31.015] ## UDEV:	D: b 254:14
[ 0:31.015] ## UDEV:	N: dm-14
[ 0:31.015] ## UDEV:	L: 0
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv12
[ 0:31.015] ## UDEV:	S: mapper/LVMTEST500118pv12
[ 0:31.015] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv12
[ 0:31.015] ## UDEV:	Q: 28780
[ 0:31.015] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-14
[ 0:31.015] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.015] ## UDEV:	E: DEVNAME=/dev/dm-14
[ 0:31.015] ## UDEV:	E: DEVTYPE=disk
[ 0:31.015] ## UDEV:	E: DISKSEQ=28780
[ 0:31.015] ## UDEV:	E: MAJOR=254
[ 0:31.015] ## UDEV:	E: MINOR=14
[ 0:31.015] ## UDEV:	E: USEC_INITIALIZED=29990205824
[ 0:31.015] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.015] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.015] ## UDEV:	E: DM_NAME=LVMTEST500118pv12
[ 0:31.015] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv12
[ 0:31.087] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.087] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.087] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.087] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv12 /dev/mapper/LVMTEST500118pv12 /dev/disk/by-id/dm-name-LVMTEST500118pv12
[ 0:31.087] ## UDEV:	E: TAGS=:systemd:
[ 0:31.087] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.087] ## UDEV:	
[ 0:31.087] ## UDEV:	P: /devices/virtual/block/dm-15
[ 0:31.087] ## UDEV:	M: dm-15
[ 0:31.087] ## UDEV:	R: 15
[ 0:31.087] ## UDEV:	U: block
[ 0:31.087] ## UDEV:	T: disk
[ 0:31.087] ## UDEV:	D: b 254:15
[ 0:31.087] ## UDEV:	N: dm-15
[ 0:31.087] ## UDEV:	L: 0
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv13
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv13
[ 0:31.087] ## UDEV:	S: mapper/LVMTEST500118pv13
[ 0:31.087] ## UDEV:	Q: 28781
[ 0:31.087] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-15
[ 0:31.087] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.087] ## UDEV:	E: DEVNAME=/dev/dm-15
[ 0:31.087] ## UDEV:	E: DEVTYPE=disk
[ 0:31.087] ## UDEV:	E: DISKSEQ=28781
[ 0:31.087] ## UDEV:	E: MAJOR=254
[ 0:31.087] ## UDEV:	E: MINOR=15
[ 0:31.087] ## UDEV:	E: USEC_INITIALIZED=29990206972
[ 0:31.087] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.087] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.087] ## UDEV:	E: DM_NAME=LVMTEST500118pv13
[ 0:31.087] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv13
[ 0:31.087] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.087] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.087] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.087] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv13 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv13 /dev/mapper/LVMTEST500118pv13
[ 0:31.087] ## UDEV:	E: TAGS=:systemd:
[ 0:31.087] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.087] ## UDEV:	
[ 0:31.087] ## UDEV:	P: /devices/virtual/block/dm-16
[ 0:31.087] ## UDEV:	M: dm-16
[ 0:31.087] ## UDEV:	R: 16
[ 0:31.087] ## UDEV:	U: block
[ 0:31.087] ## UDEV:	T: disk
[ 0:31.087] ## UDEV:	D: b 254:16
[ 0:31.087] ## UDEV:	N: dm-16
[ 0:31.087] ## UDEV:	L: 0
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv14
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv14
[ 0:31.087] ## UDEV:	S: mapper/LVMTEST500118pv14
[ 0:31.087] ## UDEV:	Q: 28782
[ 0:31.087] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-16
[ 0:31.087] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.087] ## UDEV:	E: DEVNAME=/dev/dm-16
[ 0:31.087] ## UDEV:	E: DEVTYPE=disk
[ 0:31.087] ## UDEV:	E: DISKSEQ=28782
[ 0:31.087] ## UDEV:	E: MAJOR=254
[ 0:31.087] ## UDEV:	E: MINOR=16
[ 0:31.087] ## UDEV:	E: USEC_INITIALIZED=29990207932
[ 0:31.087] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.087] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.087] ## UDEV:	E: DM_NAME=LVMTEST500118pv14
[ 0:31.087] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv14
[ 0:31.087] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.087] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.087] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.087] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv14 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv14 /dev/mapper/LVMTEST500118pv14
[ 0:31.087] ## UDEV:	E: TAGS=:systemd:
[ 0:31.087] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.087] ## UDEV:	
[ 0:31.087] ## UDEV:	P: /devices/virtual/block/dm-17
[ 0:31.087] ## UDEV:	M: dm-17
[ 0:31.087] ## UDEV:	R: 17
[ 0:31.087] ## UDEV:	U: block
[ 0:31.087] ## UDEV:	T: disk
[ 0:31.087] ## UDEV:	D: b 254:17
[ 0:31.087] ## UDEV:	N: dm-17
[ 0:31.087] ## UDEV:	L: 0
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv15
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv15
[ 0:31.087] ## UDEV:	S: mapper/LVMTEST500118pv15
[ 0:31.087] ## UDEV:	Q: 28783
[ 0:31.087] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-17
[ 0:31.087] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.087] ## UDEV:	E: DEVNAME=/dev/dm-17
[ 0:31.087] ## UDEV:	E: DEVTYPE=disk
[ 0:31.087] ## UDEV:	E: DISKSEQ=28783
[ 0:31.087] ## UDEV:	E: MAJOR=254
[ 0:31.087] ## UDEV:	E: MINOR=17
[ 0:31.087] ## UDEV:	E: USEC_INITIALIZED=29990209086
[ 0:31.087] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.087] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.087] ## UDEV:	E: DM_NAME=LVMTEST500118pv15
[ 0:31.087] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv15
[ 0:31.087] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.087] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.087] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.087] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv15 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv15 /dev/mapper/LVMTEST500118pv15
[ 0:31.087] ## UDEV:	E: TAGS=:systemd:
[ 0:31.087] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.087] ## UDEV:	
[ 0:31.087] ## UDEV:	P: /devices/virtual/block/dm-18
[ 0:31.087] ## UDEV:	M: dm-18
[ 0:31.087] ## UDEV:	R: 18
[ 0:31.087] ## UDEV:	U: block
[ 0:31.087] ## UDEV:	T: disk
[ 0:31.087] ## UDEV:	D: b 254:18
[ 0:31.087] ## UDEV:	N: dm-18
[ 0:31.087] ## UDEV:	L: 0
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv16
[ 0:31.087] ## UDEV:	S: mapper/LVMTEST500118pv16
[ 0:31.087] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv16
[ 0:31.087] ## UDEV:	Q: 28784
[ 0:31.087] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-18
[ 0:31.087] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.087] ## UDEV:	E: DEVNAME=/dev/dm-18
[ 0:31.087] ## UDEV:	E: DEVTYPE=disk
[ 0:31.087] ## UDEV:	E: DISKSEQ=28784
[ 0:31.087] ## UDEV:	E: MAJOR=254
[ 0:31.087] ## UDEV:	E: MINOR=18
[ 0:31.087] ## UDEV:	E: USEC_INITIALIZED=29990210836
[ 0:31.087] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.087] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.087] ## UDEV:	E: DM_NAME=LVMTEST500118pv16
[ 0:31.087] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv16
[ 0:31.087] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.141] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.141] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.141] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv16 /dev/mapper/LVMTEST500118pv16 /dev/disk/by-id/dm-name-LVMTEST500118pv16
[ 0:31.141] ## UDEV:	E: TAGS=:systemd:
[ 0:31.141] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.141] ## UDEV:	
[ 0:31.141] ## UDEV:	P: /devices/virtual/block/dm-19
[ 0:31.141] ## UDEV:	M: dm-19
[ 0:31.141] ## UDEV:	R: 19
[ 0:31.141] ## UDEV:	U: block
[ 0:31.141] ## UDEV:	T: disk
[ 0:31.141] ## UDEV:	D: b 254:19
[ 0:31.141] ## UDEV:	N: dm-19
[ 0:31.141] ## UDEV:	L: 0
[ 0:31.141] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_0
[ 0:31.141] ## UDEV:	Q: 28796
[ 0:31.141] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-19
[ 0:31.141] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.141] ## UDEV:	E: DEVNAME=/dev/dm-19
[ 0:31.141] ## UDEV:	E: DEVTYPE=disk
[ 0:31.141] ## UDEV:	E: DISKSEQ=28796
[ 0:31.141] ## UDEV:	E: MAJOR=254
[ 0:31.141] ## UDEV:	E: MINOR=19
[ 0:31.141] ## UDEV:	E: USEC_INITIALIZED=29990670220
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.141] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_0
[ 0:31.141] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt9VLfstRkWdjLYSEeOBm2XUe6tyWxOfOw
[ 0:31.141] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.141] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.141] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.141] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_0
[ 0:31.141] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.141] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_0
[ 0:31.141] ## UDEV:	E: TAGS=:systemd:
[ 0:31.141] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.141] ## UDEV:	
[ 0:31.141] ## UDEV:	P: /devices/virtual/block/dm-2
[ 0:31.141] ## UDEV:	M: dm-2
[ 0:31.141] ## UDEV:	R: 2
[ 0:31.141] ## UDEV:	U: block
[ 0:31.141] ## UDEV:	T: disk
[ 0:31.141] ## UDEV:	D: b 254:2
[ 0:31.141] ## UDEV:	N: dm-2
[ 0:31.141] ## UDEV:	L: 0
[ 0:31.141] ## UDEV:	S: disk/by-id/dm-uuid-LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrckxr7HBpmaqAZErFc2RDMo6POwdt2Hebp
[ 0:31.141] ## UDEV:	S: rhel_hp-dl380eg8-02/home
[ 0:31.141] ## UDEV:	S: mapper/rhel_hp--dl380eg8--02-home
[ 0:31.141] ## UDEV:	S: disk/by-uuid/de9cbcb9-5f04-4a30-84dd-d62aaad366a4
[ 0:31.141] ## UDEV:	S: disk/by-id/dm-name-rhel_hp--dl380eg8--02-home
[ 0:31.141] ## UDEV:	Q: 4
[ 0:31.141] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-2
[ 0:31.141] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.141] ## UDEV:	E: DEVNAME=/dev/dm-2
[ 0:31.141] ## UDEV:	E: DEVTYPE=disk
[ 0:31.141] ## UDEV:	E: DISKSEQ=4
[ 0:31.141] ## UDEV:	E: MAJOR=254
[ 0:31.141] ## UDEV:	E: MINOR=2
[ 0:31.141] ## UDEV:	E: USEC_INITIALIZED=23787344
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_LIBRARY_FALLBACK_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.141] ## UDEV:	E: DM_NAME=rhel_hp--dl380eg8--02-home
[ 0:31.141] ## UDEV:	E: DM_UUID=LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrckxr7HBpmaqAZErFc2RDMo6POwdt2Hebp
[ 0:31.141] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.141] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.141] ## UDEV:	E: DM_VG_NAME=rhel_hp-dl380eg8-02
[ 0:31.141] ## UDEV:	E: DM_LV_NAME=home
[ 0:31.141] ## UDEV:	E: ID_FS_UUID=de9cbcb9-5f04-4a30-84dd-d62aaad366a4
[ 0:31.141] ## UDEV:	E: ID_FS_UUID_ENC=de9cbcb9-5f04-4a30-84dd-d62aaad366a4
[ 0:31.141] ## UDEV:	E: ID_FS_SIZE=915152715776
[ 0:31.141] ## UDEV:	E: ID_FS_LASTBLOCK=223535104
[ 0:31.141] ## UDEV:	E: ID_FS_BLOCKSIZE=4096
[ 0:31.141] ## UDEV:	E: ID_FS_TYPE=xfs
[ 0:31.141] ## UDEV:	E: ID_FS_USAGE=filesystem
[ 0:31.141] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.141] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-uuid-LVM-8lOzWifObwdf5noG0STmWocuXo5ofCrckxr7HBpmaqAZErFc2RDMo6POwdt2Hebp /dev/rhel_hp-dl380eg8-02/home /dev/mapper/rhel_hp--dl380eg8--02-home /dev/disk/by-uuid/de9cbcb9-5f04-4a30-84dd-d62aaad366a4 /dev/disk/by-id/dm-name-rhel_hp--dl380eg8--02-home
[ 0:31.141] ## UDEV:	E: TAGS=:systemd:
[ 0:31.141] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.141] ## UDEV:	
[ 0:31.141] ## UDEV:	P: /devices/virtual/block/dm-20
[ 0:31.141] ## UDEV:	M: dm-20
[ 0:31.141] ## UDEV:	R: 20
[ 0:31.141] ## UDEV:	U: block
[ 0:31.141] ## UDEV:	T: disk
[ 0:31.141] ## UDEV:	D: b 254:20
[ 0:31.141] ## UDEV:	N: dm-20
[ 0:31.141] ## UDEV:	L: 0
[ 0:31.141] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_0
[ 0:31.141] ## UDEV:	Q: 28797
[ 0:31.141] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-20
[ 0:31.141] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.141] ## UDEV:	E: DEVNAME=/dev/dm-20
[ 0:31.141] ## UDEV:	E: DEVTYPE=disk
[ 0:31.141] ## UDEV:	E: DISKSEQ=28797
[ 0:31.141] ## UDEV:	E: MAJOR=254
[ 0:31.141] ## UDEV:	E: MINOR=20
[ 0:31.141] ## UDEV:	E: USEC_INITIALIZED=29990671640
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.141] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.141] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_0
[ 0:31.141] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtpD6WQF1CobU3tkwiFxBT1XBcHwerULBu
[ 0:31.141] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.141] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.141] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.141] ## UDEV:	E: DM_LV_NAME=LV1_rimage_0
[ 0:31.213] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.213] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_0
[ 0:31.213] ## UDEV:	E: TAGS=:systemd:
[ 0:31.213] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.213] ## UDEV:	
[ 0:31.213] ## UDEV:	P: /devices/virtual/block/dm-21
[ 0:31.213] ## UDEV:	M: dm-21
[ 0:31.213] ## UDEV:	R: 21
[ 0:31.213] ## UDEV:	U: block
[ 0:31.213] ## UDEV:	T: disk
[ 0:31.213] ## UDEV:	D: b 254:21
[ 0:31.213] ## UDEV:	N: dm-21
[ 0:31.213] ## UDEV:	L: 0
[ 0:31.213] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_1
[ 0:31.213] ## UDEV:	Q: 28798
[ 0:31.213] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-21
[ 0:31.213] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.213] ## UDEV:	E: DEVNAME=/dev/dm-21
[ 0:31.213] ## UDEV:	E: DEVTYPE=disk
[ 0:31.213] ## UDEV:	E: DISKSEQ=28798
[ 0:31.213] ## UDEV:	E: MAJOR=254
[ 0:31.213] ## UDEV:	E: MINOR=21
[ 0:31.213] ## UDEV:	E: USEC_INITIALIZED=29990672680
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.213] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_1
[ 0:31.213] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt1Kuhl8cjecGLPkGpXK3swZjCtmzaoffu
[ 0:31.213] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.213] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.213] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.213] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_1
[ 0:31.213] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.213] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_1
[ 0:31.213] ## UDEV:	E: TAGS=:systemd:
[ 0:31.213] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.213] ## UDEV:	
[ 0:31.213] ## UDEV:	P: /devices/virtual/block/dm-22
[ 0:31.213] ## UDEV:	M: dm-22
[ 0:31.213] ## UDEV:	R: 22
[ 0:31.213] ## UDEV:	U: block
[ 0:31.213] ## UDEV:	T: disk
[ 0:31.213] ## UDEV:	D: b 254:22
[ 0:31.213] ## UDEV:	N: dm-22
[ 0:31.213] ## UDEV:	L: 0
[ 0:31.213] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_1
[ 0:31.213] ## UDEV:	Q: 28799
[ 0:31.213] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-22
[ 0:31.213] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.213] ## UDEV:	E: DEVNAME=/dev/dm-22
[ 0:31.213] ## UDEV:	E: DEVTYPE=disk
[ 0:31.213] ## UDEV:	E: DISKSEQ=28799
[ 0:31.213] ## UDEV:	E: MAJOR=254
[ 0:31.213] ## UDEV:	E: MINOR=22
[ 0:31.213] ## UDEV:	E: USEC_INITIALIZED=29990673821
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.213] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_1
[ 0:31.213] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtqnoSSKda19GKA7WsZG1I9QA04OBf1B0y
[ 0:31.213] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.213] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.213] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.213] ## UDEV:	E: DM_LV_NAME=LV1_rimage_1
[ 0:31.213] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.213] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_1
[ 0:31.213] ## UDEV:	E: TAGS=:systemd:
[ 0:31.213] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.213] ## UDEV:	
[ 0:31.213] ## UDEV:	P: /devices/virtual/block/dm-23
[ 0:31.213] ## UDEV:	M: dm-23
[ 0:31.213] ## UDEV:	R: 23
[ 0:31.213] ## UDEV:	U: block
[ 0:31.213] ## UDEV:	T: disk
[ 0:31.213] ## UDEV:	D: b 254:23
[ 0:31.213] ## UDEV:	N: dm-23
[ 0:31.213] ## UDEV:	L: 0
[ 0:31.213] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_2
[ 0:31.213] ## UDEV:	Q: 28800
[ 0:31.213] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-23
[ 0:31.213] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.213] ## UDEV:	E: DEVNAME=/dev/dm-23
[ 0:31.213] ## UDEV:	E: DEVTYPE=disk
[ 0:31.213] ## UDEV:	E: DISKSEQ=28800
[ 0:31.213] ## UDEV:	E: MAJOR=254
[ 0:31.213] ## UDEV:	E: MINOR=23
[ 0:31.213] ## UDEV:	E: USEC_INITIALIZED=29990675299
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.213] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_2
[ 0:31.213] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt1nvTeTVDkLiF009Z2hQzmZsoHQwmyOuU
[ 0:31.213] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.213] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.213] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.213] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_2
[ 0:31.213] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.213] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_2
[ 0:31.213] ## UDEV:	E: TAGS=:systemd:
[ 0:31.213] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.213] ## UDEV:	
[ 0:31.213] ## UDEV:	P: /devices/virtual/block/dm-24
[ 0:31.213] ## UDEV:	M: dm-24
[ 0:31.213] ## UDEV:	R: 24
[ 0:31.213] ## UDEV:	U: block
[ 0:31.213] ## UDEV:	T: disk
[ 0:31.213] ## UDEV:	D: b 254:24
[ 0:31.213] ## UDEV:	N: dm-24
[ 0:31.213] ## UDEV:	L: 0
[ 0:31.213] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_2
[ 0:31.213] ## UDEV:	Q: 28801
[ 0:31.213] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-24
[ 0:31.213] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.213] ## UDEV:	E: DEVNAME=/dev/dm-24
[ 0:31.213] ## UDEV:	E: DEVTYPE=disk
[ 0:31.213] ## UDEV:	E: DISKSEQ=28801
[ 0:31.213] ## UDEV:	E: MAJOR=254
[ 0:31.213] ## UDEV:	E: MINOR=24
[ 0:31.213] ## UDEV:	E: USEC_INITIALIZED=29990676126
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.213] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.283] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_2
[ 0:31.283] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtKj25JNrKV6SBVwMzsGVHpu3vYfecbUAT
[ 0:31.283] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.283] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.283] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.283] ## UDEV:	E: DM_LV_NAME=LV1_rimage_2
[ 0:31.283] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.283] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_2
[ 0:31.283] ## UDEV:	E: TAGS=:systemd:
[ 0:31.283] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.283] ## UDEV:	
[ 0:31.283] ## UDEV:	P: /devices/virtual/block/dm-25
[ 0:31.283] ## UDEV:	M: dm-25
[ 0:31.283] ## UDEV:	R: 25
[ 0:31.283] ## UDEV:	U: block
[ 0:31.283] ## UDEV:	T: disk
[ 0:31.283] ## UDEV:	D: b 254:25
[ 0:31.283] ## UDEV:	N: dm-25
[ 0:31.283] ## UDEV:	L: 0
[ 0:31.283] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_3
[ 0:31.283] ## UDEV:	Q: 28802
[ 0:31.283] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-25
[ 0:31.283] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.283] ## UDEV:	E: DEVNAME=/dev/dm-25
[ 0:31.283] ## UDEV:	E: DEVTYPE=disk
[ 0:31.283] ## UDEV:	E: DISKSEQ=28802
[ 0:31.283] ## UDEV:	E: MAJOR=254
[ 0:31.283] ## UDEV:	E: MINOR=25
[ 0:31.283] ## UDEV:	E: USEC_INITIALIZED=29990677189
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.283] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_3
[ 0:31.283] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt9o0uj9s6CL0ZYEhKh2xg4Psukriei6bz
[ 0:31.283] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.283] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.283] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.283] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_3
[ 0:31.283] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.283] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_3
[ 0:31.283] ## UDEV:	E: TAGS=:systemd:
[ 0:31.283] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.283] ## UDEV:	
[ 0:31.283] ## UDEV:	P: /devices/virtual/block/dm-26
[ 0:31.283] ## UDEV:	M: dm-26
[ 0:31.283] ## UDEV:	R: 26
[ 0:31.283] ## UDEV:	U: block
[ 0:31.283] ## UDEV:	T: disk
[ 0:31.283] ## UDEV:	D: b 254:26
[ 0:31.283] ## UDEV:	N: dm-26
[ 0:31.283] ## UDEV:	L: 0
[ 0:31.283] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_3
[ 0:31.283] ## UDEV:	Q: 28803
[ 0:31.283] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-26
[ 0:31.283] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.283] ## UDEV:	E: DEVNAME=/dev/dm-26
[ 0:31.283] ## UDEV:	E: DEVTYPE=disk
[ 0:31.283] ## UDEV:	E: DISKSEQ=28803
[ 0:31.283] ## UDEV:	E: MAJOR=254
[ 0:31.283] ## UDEV:	E: MINOR=26
[ 0:31.283] ## UDEV:	E: USEC_INITIALIZED=29990678638
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.283] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_3
[ 0:31.283] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtFC3gDcqmY50hGMoUH5DmBdP1TwoaWdfa
[ 0:31.283] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.283] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.283] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.283] ## UDEV:	E: DM_LV_NAME=LV1_rimage_3
[ 0:31.283] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.283] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_3
[ 0:31.283] ## UDEV:	E: TAGS=:systemd:
[ 0:31.283] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.283] ## UDEV:	
[ 0:31.283] ## UDEV:	P: /devices/virtual/block/dm-27
[ 0:31.283] ## UDEV:	M: dm-27
[ 0:31.283] ## UDEV:	R: 27
[ 0:31.283] ## UDEV:	U: block
[ 0:31.283] ## UDEV:	T: disk
[ 0:31.283] ## UDEV:	D: b 254:27
[ 0:31.283] ## UDEV:	N: dm-27
[ 0:31.283] ## UDEV:	L: 0
[ 0:31.283] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_4
[ 0:31.283] ## UDEV:	Q: 28804
[ 0:31.283] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-27
[ 0:31.283] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.283] ## UDEV:	E: DEVNAME=/dev/dm-27
[ 0:31.283] ## UDEV:	E: DEVTYPE=disk
[ 0:31.283] ## UDEV:	E: DISKSEQ=28804
[ 0:31.283] ## UDEV:	E: MAJOR=254
[ 0:31.283] ## UDEV:	E: MINOR=27
[ 0:31.283] ## UDEV:	E: USEC_INITIALIZED=29990680155
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.283] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.283] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_4
[ 0:31.283] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtzzmLoHtGlsUTSMLMeIMofmzeeUmVCE5k
[ 0:31.283] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.283] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.283] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.283] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_4
[ 0:31.283] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.283] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_4
[ 0:31.283] ## UDEV:	E: TAGS=:systemd:
[ 0:31.283] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.283] ## UDEV:	
[ 0:31.283] ## UDEV:	P: /devices/virtual/block/dm-28
[ 0:31.283] ## UDEV:	M: dm-28
[ 0:31.283] ## UDEV:	R: 28
[ 0:31.283] ## UDEV:	U: block
[ 0:31.283] ## UDEV:	T: disk
[ 0:31.283] ## UDEV:	D: b 254:28
[ 0:31.283] ## UDEV:	N: dm-28
[ 0:31.283] ## UDEV:	L: 0
[ 0:31.283] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_4
[ 0:31.336] ## UDEV:	Q: 28805
[ 0:31.336] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-28
[ 0:31.336] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.336] ## UDEV:	E: DEVNAME=/dev/dm-28
[ 0:31.336] ## UDEV:	E: DEVTYPE=disk
[ 0:31.336] ## UDEV:	E: DISKSEQ=28805
[ 0:31.336] ## UDEV:	E: MAJOR=254
[ 0:31.336] ## UDEV:	E: MINOR=28
[ 0:31.336] ## UDEV:	E: USEC_INITIALIZED=29990681734
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.336] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_4
[ 0:31.336] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmty3RNUNUzcv7DcTiHTUOZMy3koAVhD0sc
[ 0:31.336] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.336] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.336] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.336] ## UDEV:	E: DM_LV_NAME=LV1_rimage_4
[ 0:31.336] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.336] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_4
[ 0:31.336] ## UDEV:	E: TAGS=:systemd:
[ 0:31.336] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.336] ## UDEV:	
[ 0:31.336] ## UDEV:	P: /devices/virtual/block/dm-29
[ 0:31.336] ## UDEV:	M: dm-29
[ 0:31.336] ## UDEV:	R: 29
[ 0:31.336] ## UDEV:	U: block
[ 0:31.336] ## UDEV:	T: disk
[ 0:31.336] ## UDEV:	D: b 254:29
[ 0:31.336] ## UDEV:	N: dm-29
[ 0:31.336] ## UDEV:	L: 0
[ 0:31.336] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_5
[ 0:31.336] ## UDEV:	Q: 28806
[ 0:31.336] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-29
[ 0:31.336] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.336] ## UDEV:	E: DEVNAME=/dev/dm-29
[ 0:31.336] ## UDEV:	E: DEVTYPE=disk
[ 0:31.336] ## UDEV:	E: DISKSEQ=28806
[ 0:31.336] ## UDEV:	E: MAJOR=254
[ 0:31.336] ## UDEV:	E: MINOR=29
[ 0:31.336] ## UDEV:	E: USEC_INITIALIZED=29990682964
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.336] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_5
[ 0:31.336] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtn0CwCkqlpgGgrYbEHSdKT6WDuoRwVm9y
[ 0:31.336] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.336] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.336] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.336] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_5
[ 0:31.336] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.336] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_5
[ 0:31.336] ## UDEV:	E: TAGS=:systemd:
[ 0:31.336] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.336] ## UDEV:	
[ 0:31.336] ## UDEV:	P: /devices/virtual/block/dm-3
[ 0:31.336] ## UDEV:	M: dm-3
[ 0:31.336] ## UDEV:	R: 3
[ 0:31.336] ## UDEV:	U: block
[ 0:31.336] ## UDEV:	T: disk
[ 0:31.336] ## UDEV:	D: b 254:3
[ 0:31.336] ## UDEV:	N: dm-3
[ 0:31.336] ## UDEV:	L: 0
[ 0:31.336] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv1
[ 0:31.336] ## UDEV:	S: mapper/LVMTEST500118pv1
[ 0:31.336] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv1
[ 0:31.336] ## UDEV:	Q: 28769
[ 0:31.336] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-3
[ 0:31.336] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.336] ## UDEV:	E: DEVNAME=/dev/dm-3
[ 0:31.336] ## UDEV:	E: DEVTYPE=disk
[ 0:31.336] ## UDEV:	E: DISKSEQ=28769
[ 0:31.336] ## UDEV:	E: MAJOR=254
[ 0:31.336] ## UDEV:	E: MINOR=3
[ 0:31.336] ## UDEV:	E: USEC_INITIALIZED=29990192084
[ 0:31.336] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.336] ## UDEV:	E: DM_NAME=LVMTEST500118pv1
[ 0:31.336] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv1
[ 0:31.336] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.336] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.336] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.336] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-name-LVMTEST500118pv1 /dev/mapper/LVMTEST500118pv1 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv1
[ 0:31.336] ## UDEV:	E: TAGS=:systemd:
[ 0:31.336] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.336] ## UDEV:	
[ 0:31.336] ## UDEV:	P: /devices/virtual/block/dm-30
[ 0:31.336] ## UDEV:	M: dm-30
[ 0:31.336] ## UDEV:	R: 30
[ 0:31.336] ## UDEV:	U: block
[ 0:31.336] ## UDEV:	T: disk
[ 0:31.336] ## UDEV:	D: b 254:30
[ 0:31.336] ## UDEV:	N: dm-30
[ 0:31.336] ## UDEV:	L: 0
[ 0:31.336] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_5
[ 0:31.336] ## UDEV:	Q: 28807
[ 0:31.336] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-30
[ 0:31.336] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.336] ## UDEV:	E: DEVNAME=/dev/dm-30
[ 0:31.336] ## UDEV:	E: DEVTYPE=disk
[ 0:31.336] ## UDEV:	E: DISKSEQ=28807
[ 0:31.336] ## UDEV:	E: MAJOR=254
[ 0:31.336] ## UDEV:	E: MINOR=30
[ 0:31.336] ## UDEV:	E: USEC_INITIALIZED=29990684635
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.336] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.336] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_5
[ 0:31.336] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt5LvpjI7Lxa8BYrtk6O3jrFwjfXKjPmuU
[ 0:31.336] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.336] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.336] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.336] ## UDEV:	E: DM_LV_NAME=LV1_rimage_5
[ 0:31.336] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.336] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_5
[ 0:31.407] ## UDEV:	E: TAGS=:systemd:
[ 0:31.407] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.407] ## UDEV:	
[ 0:31.407] ## UDEV:	P: /devices/virtual/block/dm-31
[ 0:31.407] ## UDEV:	M: dm-31
[ 0:31.407] ## UDEV:	R: 31
[ 0:31.407] ## UDEV:	U: block
[ 0:31.407] ## UDEV:	T: disk
[ 0:31.407] ## UDEV:	D: b 254:31
[ 0:31.407] ## UDEV:	N: dm-31
[ 0:31.407] ## UDEV:	L: 0
[ 0:31.407] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_6
[ 0:31.407] ## UDEV:	Q: 28808
[ 0:31.407] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-31
[ 0:31.407] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.407] ## UDEV:	E: DEVNAME=/dev/dm-31
[ 0:31.407] ## UDEV:	E: DEVTYPE=disk
[ 0:31.407] ## UDEV:	E: DISKSEQ=28808
[ 0:31.407] ## UDEV:	E: MAJOR=254
[ 0:31.407] ## UDEV:	E: MINOR=31
[ 0:31.407] ## UDEV:	E: USEC_INITIALIZED=29990685668
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.407] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_6
[ 0:31.407] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtjP5gBQ3S1Q3i0wDc6vWibWQpvx1p9tnR
[ 0:31.407] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.407] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.407] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.407] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_6
[ 0:31.407] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.407] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_6
[ 0:31.407] ## UDEV:	E: TAGS=:systemd:
[ 0:31.407] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.407] ## UDEV:	
[ 0:31.407] ## UDEV:	P: /devices/virtual/block/dm-32
[ 0:31.407] ## UDEV:	M: dm-32
[ 0:31.407] ## UDEV:	R: 32
[ 0:31.407] ## UDEV:	U: block
[ 0:31.407] ## UDEV:	T: disk
[ 0:31.407] ## UDEV:	D: b 254:32
[ 0:31.407] ## UDEV:	N: dm-32
[ 0:31.407] ## UDEV:	L: 0
[ 0:31.407] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_6
[ 0:31.407] ## UDEV:	Q: 28809
[ 0:31.407] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-32
[ 0:31.407] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.407] ## UDEV:	E: DEVNAME=/dev/dm-32
[ 0:31.407] ## UDEV:	E: DEVTYPE=disk
[ 0:31.407] ## UDEV:	E: DISKSEQ=28809
[ 0:31.407] ## UDEV:	E: MAJOR=254
[ 0:31.407] ## UDEV:	E: MINOR=32
[ 0:31.407] ## UDEV:	E: USEC_INITIALIZED=29990687628
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.407] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_6
[ 0:31.407] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt6frue4ylGjtrPvQcVzeqnnJmBVHYiJLH
[ 0:31.407] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.407] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.407] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.407] ## UDEV:	E: DM_LV_NAME=LV1_rimage_6
[ 0:31.407] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.407] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_6
[ 0:31.407] ## UDEV:	E: TAGS=:systemd:
[ 0:31.407] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.407] ## UDEV:	
[ 0:31.407] ## UDEV:	P: /devices/virtual/block/dm-33
[ 0:31.407] ## UDEV:	M: dm-33
[ 0:31.407] ## UDEV:	R: 33
[ 0:31.407] ## UDEV:	U: block
[ 0:31.407] ## UDEV:	T: disk
[ 0:31.407] ## UDEV:	D: b 254:33
[ 0:31.407] ## UDEV:	N: dm-33
[ 0:31.407] ## UDEV:	L: 0
[ 0:31.407] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_7
[ 0:31.407] ## UDEV:	Q: 28810
[ 0:31.407] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-33
[ 0:31.407] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.407] ## UDEV:	E: DEVNAME=/dev/dm-33
[ 0:31.407] ## UDEV:	E: DEVTYPE=disk
[ 0:31.407] ## UDEV:	E: DISKSEQ=28810
[ 0:31.407] ## UDEV:	E: MAJOR=254
[ 0:31.407] ## UDEV:	E: MINOR=33
[ 0:31.407] ## UDEV:	E: USEC_INITIALIZED=29990688628
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.407] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_7
[ 0:31.407] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtqUglRG9A9AL8V81RCER6HpY5nWeL89jG
[ 0:31.407] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.407] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.407] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.407] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_7
[ 0:31.407] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.407] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_7
[ 0:31.407] ## UDEV:	E: TAGS=:systemd:
[ 0:31.407] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.407] ## UDEV:	
[ 0:31.407] ## UDEV:	P: /devices/virtual/block/dm-34
[ 0:31.407] ## UDEV:	M: dm-34
[ 0:31.407] ## UDEV:	R: 34
[ 0:31.407] ## UDEV:	U: block
[ 0:31.407] ## UDEV:	T: disk
[ 0:31.407] ## UDEV:	D: b 254:34
[ 0:31.407] ## UDEV:	N: dm-34
[ 0:31.407] ## UDEV:	L: 0
[ 0:31.407] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_7
[ 0:31.407] ## UDEV:	Q: 28811
[ 0:31.407] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-34
[ 0:31.407] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.407] ## UDEV:	E: DEVNAME=/dev/dm-34
[ 0:31.407] ## UDEV:	E: DEVTYPE=disk
[ 0:31.407] ## UDEV:	E: DISKSEQ=28811
[ 0:31.407] ## UDEV:	E: MAJOR=254
[ 0:31.407] ## UDEV:	E: MINOR=34
[ 0:31.407] ## UDEV:	E: USEC_INITIALIZED=29990690344
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.407] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.478] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_7
[ 0:31.478] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtchsDOVA6wUZnaa6VF0sVfj3bvta4bRru
[ 0:31.478] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.478] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.478] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.478] ## UDEV:	E: DM_LV_NAME=LV1_rimage_7
[ 0:31.478] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.478] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_7
[ 0:31.478] ## UDEV:	E: TAGS=:systemd:
[ 0:31.478] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.478] ## UDEV:	
[ 0:31.478] ## UDEV:	P: /devices/virtual/block/dm-35
[ 0:31.478] ## UDEV:	M: dm-35
[ 0:31.478] ## UDEV:	R: 35
[ 0:31.478] ## UDEV:	U: block
[ 0:31.478] ## UDEV:	T: disk
[ 0:31.478] ## UDEV:	D: b 254:35
[ 0:31.478] ## UDEV:	N: dm-35
[ 0:31.478] ## UDEV:	L: 0
[ 0:31.478] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_8
[ 0:31.478] ## UDEV:	Q: 28812
[ 0:31.478] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-35
[ 0:31.478] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.478] ## UDEV:	E: DEVNAME=/dev/dm-35
[ 0:31.478] ## UDEV:	E: DEVTYPE=disk
[ 0:31.478] ## UDEV:	E: DISKSEQ=28812
[ 0:31.478] ## UDEV:	E: MAJOR=254
[ 0:31.478] ## UDEV:	E: MINOR=35
[ 0:31.478] ## UDEV:	E: USEC_INITIALIZED=29990691862
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.478] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_8
[ 0:31.478] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtvHThjBcpzz3SsW1XJcxNj5ITg9qbmsw8
[ 0:31.478] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.478] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.478] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.478] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_8
[ 0:31.478] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.478] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_8
[ 0:31.478] ## UDEV:	E: TAGS=:systemd:
[ 0:31.478] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.478] ## UDEV:	
[ 0:31.478] ## UDEV:	P: /devices/virtual/block/dm-36
[ 0:31.478] ## UDEV:	M: dm-36
[ 0:31.478] ## UDEV:	R: 36
[ 0:31.478] ## UDEV:	U: block
[ 0:31.478] ## UDEV:	T: disk
[ 0:31.478] ## UDEV:	D: b 254:36
[ 0:31.478] ## UDEV:	N: dm-36
[ 0:31.478] ## UDEV:	L: 0
[ 0:31.478] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_8
[ 0:31.478] ## UDEV:	Q: 28813
[ 0:31.478] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-36
[ 0:31.478] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.478] ## UDEV:	E: DEVNAME=/dev/dm-36
[ 0:31.478] ## UDEV:	E: DEVTYPE=disk
[ 0:31.478] ## UDEV:	E: DISKSEQ=28813
[ 0:31.478] ## UDEV:	E: MAJOR=254
[ 0:31.478] ## UDEV:	E: MINOR=36
[ 0:31.478] ## UDEV:	E: USEC_INITIALIZED=29990693748
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.478] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_8
[ 0:31.478] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtCWR2oKVdJld9Vfbzbyod2jo8EQjVhZpc
[ 0:31.478] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.478] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.478] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.478] ## UDEV:	E: DM_LV_NAME=LV1_rimage_8
[ 0:31.478] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.478] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_8
[ 0:31.478] ## UDEV:	E: TAGS=:systemd:
[ 0:31.478] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.478] ## UDEV:	
[ 0:31.478] ## UDEV:	P: /devices/virtual/block/dm-37
[ 0:31.478] ## UDEV:	M: dm-37
[ 0:31.478] ## UDEV:	R: 37
[ 0:31.478] ## UDEV:	U: block
[ 0:31.478] ## UDEV:	T: disk
[ 0:31.478] ## UDEV:	D: b 254:37
[ 0:31.478] ## UDEV:	N: dm-37
[ 0:31.478] ## UDEV:	L: 0
[ 0:31.478] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_9
[ 0:31.478] ## UDEV:	Q: 28814
[ 0:31.478] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-37
[ 0:31.478] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.478] ## UDEV:	E: DEVNAME=/dev/dm-37
[ 0:31.478] ## UDEV:	E: DEVTYPE=disk
[ 0:31.478] ## UDEV:	E: DISKSEQ=28814
[ 0:31.478] ## UDEV:	E: MAJOR=254
[ 0:31.478] ## UDEV:	E: MINOR=37
[ 0:31.478] ## UDEV:	E: USEC_INITIALIZED=29990695609
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.478] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.478] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_9
[ 0:31.478] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtHLzF2FgdKKO9QnZCW0Tm65J2iFuP0cqi
[ 0:31.478] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.478] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.478] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.478] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_9
[ 0:31.478] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.478] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_9
[ 0:31.478] ## UDEV:	E: TAGS=:systemd:
[ 0:31.478] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.478] ## UDEV:	
[ 0:31.478] ## UDEV:	P: /devices/virtual/block/dm-38
[ 0:31.478] ## UDEV:	M: dm-38
[ 0:31.478] ## UDEV:	R: 38
[ 0:31.478] ## UDEV:	U: block
[ 0:31.478] ## UDEV:	T: disk
[ 0:31.478] ## UDEV:	D: b 254:38
[ 0:31.478] ## UDEV:	N: dm-38
[ 0:31.478] ## UDEV:	L: 0
[ 0:31.478] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_9
[ 0:31.478] ## UDEV:	Q: 28815
[ 0:31.478] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-38
[ 0:31.478] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.531] ## UDEV:	E: DEVNAME=/dev/dm-38
[ 0:31.531] ## UDEV:	E: DEVTYPE=disk
[ 0:31.531] ## UDEV:	E: DISKSEQ=28815
[ 0:31.531] ## UDEV:	E: MAJOR=254
[ 0:31.531] ## UDEV:	E: MINOR=38
[ 0:31.531] ## UDEV:	E: USEC_INITIALIZED=29990695991
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.531] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_9
[ 0:31.531] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt8IuOMVo5Dq0bbWuODFimVzGzlTUAFEvz
[ 0:31.531] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.531] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.531] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.531] ## UDEV:	E: DM_LV_NAME=LV1_rimage_9
[ 0:31.531] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.531] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_9
[ 0:31.531] ## UDEV:	E: TAGS=:systemd:
[ 0:31.531] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.531] ## UDEV:	
[ 0:31.531] ## UDEV:	P: /devices/virtual/block/dm-39
[ 0:31.531] ## UDEV:	M: dm-39
[ 0:31.531] ## UDEV:	R: 39
[ 0:31.531] ## UDEV:	U: block
[ 0:31.531] ## UDEV:	T: disk
[ 0:31.531] ## UDEV:	D: b 254:39
[ 0:31.531] ## UDEV:	N: dm-39
[ 0:31.531] ## UDEV:	L: 0
[ 0:31.531] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_10
[ 0:31.531] ## UDEV:	Q: 28816
[ 0:31.531] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-39
[ 0:31.531] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.531] ## UDEV:	E: DEVNAME=/dev/dm-39
[ 0:31.531] ## UDEV:	E: DEVTYPE=disk
[ 0:31.531] ## UDEV:	E: DISKSEQ=28816
[ 0:31.531] ## UDEV:	E: MAJOR=254
[ 0:31.531] ## UDEV:	E: MINOR=39
[ 0:31.531] ## UDEV:	E: USEC_INITIALIZED=29990699428
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.531] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_10
[ 0:31.531] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt1zmjff3ohmUgjtiD5skuJVn485x6iFw1
[ 0:31.531] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.531] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.531] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.531] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_10
[ 0:31.531] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.531] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_10
[ 0:31.531] ## UDEV:	E: TAGS=:systemd:
[ 0:31.531] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.531] ## UDEV:	
[ 0:31.531] ## UDEV:	P: /devices/virtual/block/dm-4
[ 0:31.531] ## UDEV:	M: dm-4
[ 0:31.531] ## UDEV:	R: 4
[ 0:31.531] ## UDEV:	U: block
[ 0:31.531] ## UDEV:	T: disk
[ 0:31.531] ## UDEV:	D: b 254:4
[ 0:31.531] ## UDEV:	N: dm-4
[ 0:31.531] ## UDEV:	L: 0
[ 0:31.531] ## UDEV:	S: disk/by-id/lvm-pv-uuid-0Ajium-L7oM-3oed-uRM5-aebJ-F4MM-05lQR8
[ 0:31.531] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv2
[ 0:31.531] ## UDEV:	S: mapper/LVMTEST500118pv2
[ 0:31.531] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv2
[ 0:31.531] ## UDEV:	Q: 28770
[ 0:31.531] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-4
[ 0:31.531] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.531] ## UDEV:	E: DEVNAME=/dev/dm-4
[ 0:31.531] ## UDEV:	E: DEVTYPE=disk
[ 0:31.531] ## UDEV:	E: DISKSEQ=28770
[ 0:31.531] ## UDEV:	E: MAJOR=254
[ 0:31.531] ## UDEV:	E: MINOR=4
[ 0:31.531] ## UDEV:	E: USEC_INITIALIZED=29990193736
[ 0:31.531] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.531] ## UDEV:	E: DM_NAME=LVMTEST500118pv2
[ 0:31.531] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv2
[ 0:31.531] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.531] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.531] ## UDEV:	E: ID_FS_UUID=0Ajium-L7oM-3oed-uRM5-aebJ-F4MM-05lQR8
[ 0:31.531] ## UDEV:	E: ID_FS_UUID_ENC=0Ajium-L7oM-3oed-uRM5-aebJ-F4MM-05lQR8
[ 0:31.531] ## UDEV:	E: ID_FS_VERSION=LVM2 001
[ 0:31.531] ## UDEV:	E: ID_FS_TYPE=LVM2_member
[ 0:31.531] ## UDEV:	E: ID_FS_USAGE=raid
[ 0:31.531] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.531] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/lvm-pv-uuid-0Ajium-L7oM-3oed-uRM5-aebJ-F4MM-05lQR8 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv2 /dev/mapper/LVMTEST500118pv2 /dev/disk/by-id/dm-name-LVMTEST500118pv2
[ 0:31.531] ## UDEV:	E: TAGS=:systemd:
[ 0:31.531] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.531] ## UDEV:	
[ 0:31.531] ## UDEV:	P: /devices/virtual/block/dm-40
[ 0:31.531] ## UDEV:	M: dm-40
[ 0:31.531] ## UDEV:	R: 40
[ 0:31.531] ## UDEV:	U: block
[ 0:31.531] ## UDEV:	T: disk
[ 0:31.531] ## UDEV:	D: b 254:40
[ 0:31.531] ## UDEV:	N: dm-40
[ 0:31.531] ## UDEV:	L: 0
[ 0:31.531] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_10
[ 0:31.531] ## UDEV:	Q: 28817
[ 0:31.531] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-40
[ 0:31.531] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.531] ## UDEV:	E: DEVNAME=/dev/dm-40
[ 0:31.531] ## UDEV:	E: DEVTYPE=disk
[ 0:31.531] ## UDEV:	E: DISKSEQ=28817
[ 0:31.531] ## UDEV:	E: MAJOR=254
[ 0:31.531] ## UDEV:	E: MINOR=40
[ 0:31.531] ## UDEV:	E: USEC_INITIALIZED=29990699857
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.531] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.531] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_10
[ 0:31.531] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtTKg0whlxOIMgMzsuMlkxJ3KSb8XzSEWi
[ 0:31.585] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.585] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.585] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.585] ## UDEV:	E: DM_LV_NAME=LV1_rimage_10
[ 0:31.585] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.585] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_10
[ 0:31.585] ## UDEV:	E: TAGS=:systemd:
[ 0:31.585] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.585] ## UDEV:	
[ 0:31.585] ## UDEV:	P: /devices/virtual/block/dm-41
[ 0:31.585] ## UDEV:	M: dm-41
[ 0:31.585] ## UDEV:	R: 41
[ 0:31.585] ## UDEV:	U: block
[ 0:31.585] ## UDEV:	T: disk
[ 0:31.585] ## UDEV:	D: b 254:41
[ 0:31.585] ## UDEV:	N: dm-41
[ 0:31.585] ## UDEV:	L: 0
[ 0:31.585] ## UDEV:	S: disk/by-uuid/84c2201e-4589-48a8-ba44-019d481366f2
[ 0:31.585] ## UDEV:	S: disk/by-id/dm-uuid-LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtAkuGMPeDwwMOqVi8hTIbxgptEzZsufbB
[ 0:31.585] ## UDEV:	S: LVMTEST500118vg/LV1
[ 0:31.585] ## UDEV:	S: mapper/LVMTEST500118vg-LV1
[ 0:31.585] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118vg-LV1
[ 0:31.585] ## UDEV:	Q: 28818
[ 0:31.585] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-41
[ 0:31.585] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.585] ## UDEV:	E: DEVNAME=/dev/dm-41
[ 0:31.585] ## UDEV:	E: DEVTYPE=disk
[ 0:31.585] ## UDEV:	E: DISKSEQ=28818
[ 0:31.585] ## UDEV:	E: MAJOR=254
[ 0:31.585] ## UDEV:	E: MINOR=41
[ 0:31.585] ## UDEV:	E: USEC_INITIALIZED=29990733425
[ 0:31.585] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.585] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1
[ 0:31.585] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtAkuGMPeDwwMOqVi8hTIbxgptEzZsufbB
[ 0:31.585] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.585] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.585] ## UDEV:	E: DM_LV_NAME=LV1
[ 0:31.585] ## UDEV:	E: ID_FS_UUID=84c2201e-4589-48a8-ba44-019d481366f2
[ 0:31.585] ## UDEV:	E: ID_FS_UUID_ENC=84c2201e-4589-48a8-ba44-019d481366f2
[ 0:31.585] ## UDEV:	E: ID_FS_VERSION=1.0
[ 0:31.585] ## UDEV:	E: ID_FS_BLOCKSIZE=1024
[ 0:31.585] ## UDEV:	E: ID_FS_LASTBLOCK=10240
[ 0:31.585] ## UDEV:	E: ID_FS_TYPE=ext4
[ 0:31.585] ## UDEV:	E: ID_FS_USAGE=filesystem
[ 0:31.585] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.585] ## UDEV:	E: DEVLINKS=/dev/disk/by-uuid/84c2201e-4589-48a8-ba44-019d481366f2 /dev/disk/by-id/dm-uuid-LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtAkuGMPeDwwMOqVi8hTIbxgptEzZsufbB /dev/LVMTEST500118vg/LV1 /dev/mapper/LVMTEST500118vg-LV1 /dev/disk/by-id/dm-name-LVMTEST500118vg-LV1
[ 0:31.585] ## UDEV:	E: TAGS=:systemd:
[ 0:31.585] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.585] ## UDEV:	
[ 0:31.585] ## UDEV:	P: /devices/virtual/block/dm-42
[ 0:31.585] ## UDEV:	M: dm-42
[ 0:31.585] ## UDEV:	R: 42
[ 0:31.585] ## UDEV:	U: block
[ 0:31.585] ## UDEV:	T: disk
[ 0:31.585] ## UDEV:	D: b 254:42
[ 0:31.585] ## UDEV:	N: dm-42
[ 0:31.585] ## UDEV:	L: 0
[ 0:31.585] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_11
[ 0:31.585] ## UDEV:	Q: 28824
[ 0:31.585] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-42
[ 0:31.585] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.585] ## UDEV:	E: DEVNAME=/dev/dm-42
[ 0:31.585] ## UDEV:	E: DEVTYPE=disk
[ 0:31.585] ## UDEV:	E: DISKSEQ=28824
[ 0:31.585] ## UDEV:	E: MAJOR=254
[ 0:31.585] ## UDEV:	E: MINOR=42
[ 0:31.585] ## UDEV:	E: USEC_INITIALIZED=29994334470
[ 0:31.585] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.585] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_11
[ 0:31.585] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtYUT44N6SJiqq8cbNIXG5Kxi1XS1MHy7m
[ 0:31.585] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.585] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.585] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.585] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_11
[ 0:31.585] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.585] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_11
[ 0:31.585] ## UDEV:	E: TAGS=:systemd:
[ 0:31.585] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.585] ## UDEV:	
[ 0:31.585] ## UDEV:	P: /devices/virtual/block/dm-43
[ 0:31.585] ## UDEV:	M: dm-43
[ 0:31.585] ## UDEV:	R: 43
[ 0:31.585] ## UDEV:	U: block
[ 0:31.585] ## UDEV:	T: disk
[ 0:31.585] ## UDEV:	D: b 254:43
[ 0:31.585] ## UDEV:	N: dm-43
[ 0:31.585] ## UDEV:	L: 0
[ 0:31.585] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_11
[ 0:31.585] ## UDEV:	Q: 28825
[ 0:31.585] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-43
[ 0:31.585] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.585] ## UDEV:	E: DEVNAME=/dev/dm-43
[ 0:31.585] ## UDEV:	E: DEVTYPE=disk
[ 0:31.585] ## UDEV:	E: DISKSEQ=28825
[ 0:31.585] ## UDEV:	E: MAJOR=254
[ 0:31.585] ## UDEV:	E: MINOR=43
[ 0:31.585] ## UDEV:	E: USEC_INITIALIZED=29994335859
[ 0:31.585] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.585] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.585] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_11
[ 0:31.585] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt70tvr9yqHxN2GWU7yxH3YPL3k4xwA63I
[ 0:31.585] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.585] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.585] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.585] ## UDEV:	E: DM_LV_NAME=LV1_rimage_11
[ 0:31.585] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.585] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_11
[ 0:31.658] ## UDEV:	E: TAGS=:systemd:
[ 0:31.658] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.658] ## UDEV:	
[ 0:31.658] ## UDEV:	P: /devices/virtual/block/dm-44
[ 0:31.658] ## UDEV:	M: dm-44
[ 0:31.658] ## UDEV:	R: 44
[ 0:31.658] ## UDEV:	U: block
[ 0:31.658] ## UDEV:	T: disk
[ 0:31.658] ## UDEV:	D: b 254:44
[ 0:31.658] ## UDEV:	N: dm-44
[ 0:31.658] ## UDEV:	L: 0
[ 0:31.658] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_12
[ 0:31.658] ## UDEV:	Q: 28826
[ 0:31.658] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-44
[ 0:31.658] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.658] ## UDEV:	E: DEVNAME=/dev/dm-44
[ 0:31.658] ## UDEV:	E: DEVTYPE=disk
[ 0:31.658] ## UDEV:	E: DISKSEQ=28826
[ 0:31.658] ## UDEV:	E: MAJOR=254
[ 0:31.658] ## UDEV:	E: MINOR=44
[ 0:31.658] ## UDEV:	E: USEC_INITIALIZED=29994336795
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.658] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_12
[ 0:31.658] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtulWH74owXTdv6w9Lu7vy83W3oYxwff5L
[ 0:31.658] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.658] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.658] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.658] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_12
[ 0:31.658] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.658] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_12
[ 0:31.658] ## UDEV:	E: TAGS=:systemd:
[ 0:31.658] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.658] ## UDEV:	
[ 0:31.658] ## UDEV:	P: /devices/virtual/block/dm-45
[ 0:31.658] ## UDEV:	M: dm-45
[ 0:31.658] ## UDEV:	R: 45
[ 0:31.658] ## UDEV:	U: block
[ 0:31.658] ## UDEV:	T: disk
[ 0:31.658] ## UDEV:	D: b 254:45
[ 0:31.658] ## UDEV:	N: dm-45
[ 0:31.658] ## UDEV:	L: 0
[ 0:31.658] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_12
[ 0:31.658] ## UDEV:	Q: 28827
[ 0:31.658] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-45
[ 0:31.658] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.658] ## UDEV:	E: DEVNAME=/dev/dm-45
[ 0:31.658] ## UDEV:	E: DEVTYPE=disk
[ 0:31.658] ## UDEV:	E: DISKSEQ=28827
[ 0:31.658] ## UDEV:	E: MAJOR=254
[ 0:31.658] ## UDEV:	E: MINOR=45
[ 0:31.658] ## UDEV:	E: USEC_INITIALIZED=29994338024
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.658] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_12
[ 0:31.658] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmt8mxaG5SGE32WHeEosPk8YRzjdhgnimXj
[ 0:31.658] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.658] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.658] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.658] ## UDEV:	E: DM_LV_NAME=LV1_rimage_12
[ 0:31.658] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.658] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_12
[ 0:31.658] ## UDEV:	E: TAGS=:systemd:
[ 0:31.658] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.658] ## UDEV:	
[ 0:31.658] ## UDEV:	P: /devices/virtual/block/dm-46
[ 0:31.658] ## UDEV:	M: dm-46
[ 0:31.658] ## UDEV:	R: 46
[ 0:31.658] ## UDEV:	U: block
[ 0:31.658] ## UDEV:	T: disk
[ 0:31.658] ## UDEV:	D: b 254:46
[ 0:31.658] ## UDEV:	N: dm-46
[ 0:31.658] ## UDEV:	L: 0
[ 0:31.658] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_13
[ 0:31.658] ## UDEV:	Q: 28828
[ 0:31.658] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-46
[ 0:31.658] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.658] ## UDEV:	E: DEVNAME=/dev/dm-46
[ 0:31.658] ## UDEV:	E: DEVTYPE=disk
[ 0:31.658] ## UDEV:	E: DISKSEQ=28828
[ 0:31.658] ## UDEV:	E: MAJOR=254
[ 0:31.658] ## UDEV:	E: MINOR=46
[ 0:31.658] ## UDEV:	E: USEC_INITIALIZED=29994339353
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.658] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_13
[ 0:31.658] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtajpWmgqoHqF78tHKorIemrhNI0NB7sj2
[ 0:31.658] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.658] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.658] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.658] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_13
[ 0:31.658] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.658] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_13
[ 0:31.658] ## UDEV:	E: TAGS=:systemd:
[ 0:31.658] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.658] ## UDEV:	
[ 0:31.658] ## UDEV:	P: /devices/virtual/block/dm-47
[ 0:31.658] ## UDEV:	M: dm-47
[ 0:31.658] ## UDEV:	R: 47
[ 0:31.658] ## UDEV:	U: block
[ 0:31.658] ## UDEV:	T: disk
[ 0:31.658] ## UDEV:	D: b 254:47
[ 0:31.658] ## UDEV:	N: dm-47
[ 0:31.658] ## UDEV:	L: 0
[ 0:31.658] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_13
[ 0:31.658] ## UDEV:	Q: 28829
[ 0:31.658] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-47
[ 0:31.658] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.658] ## UDEV:	E: DEVNAME=/dev/dm-47
[ 0:31.658] ## UDEV:	E: DEVTYPE=disk
[ 0:31.658] ## UDEV:	E: DISKSEQ=28829
[ 0:31.658] ## UDEV:	E: MAJOR=254
[ 0:31.658] ## UDEV:	E: MINOR=47
[ 0:31.658] ## UDEV:	E: USEC_INITIALIZED=29994340712
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.658] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.728] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_13
[ 0:31.728] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtYFS3S7q3tv79eaD0b9V2dvhXfFH5AzCe
[ 0:31.728] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.728] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.728] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.728] ## UDEV:	E: DM_LV_NAME=LV1_rimage_13
[ 0:31.728] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.728] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_13
[ 0:31.728] ## UDEV:	E: TAGS=:systemd:
[ 0:31.728] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.728] ## UDEV:	
[ 0:31.728] ## UDEV:	P: /devices/virtual/block/dm-48
[ 0:31.728] ## UDEV:	M: dm-48
[ 0:31.728] ## UDEV:	R: 48
[ 0:31.728] ## UDEV:	U: block
[ 0:31.728] ## UDEV:	T: disk
[ 0:31.728] ## UDEV:	D: b 254:48
[ 0:31.728] ## UDEV:	N: dm-48
[ 0:31.728] ## UDEV:	L: 0
[ 0:31.728] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_14
[ 0:31.728] ## UDEV:	Q: 28830
[ 0:31.728] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-48
[ 0:31.728] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.728] ## UDEV:	E: DEVNAME=/dev/dm-48
[ 0:31.728] ## UDEV:	E: DEVTYPE=disk
[ 0:31.728] ## UDEV:	E: DISKSEQ=28830
[ 0:31.728] ## UDEV:	E: MAJOR=254
[ 0:31.728] ## UDEV:	E: MINOR=48
[ 0:31.728] ## UDEV:	E: USEC_INITIALIZED=29994341327
[ 0:31.728] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.728] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_14
[ 0:31.728] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtdPv93N0GAHAOf7VdiCCnSjCOmpUHtMuq
[ 0:31.728] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.728] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.728] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.728] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_14
[ 0:31.728] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.728] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_14
[ 0:31.728] ## UDEV:	E: TAGS=:systemd:
[ 0:31.728] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.728] ## UDEV:	
[ 0:31.728] ## UDEV:	P: /devices/virtual/block/dm-49
[ 0:31.728] ## UDEV:	M: dm-49
[ 0:31.728] ## UDEV:	R: 49
[ 0:31.728] ## UDEV:	U: block
[ 0:31.728] ## UDEV:	T: disk
[ 0:31.728] ## UDEV:	D: b 254:49
[ 0:31.728] ## UDEV:	N: dm-49
[ 0:31.728] ## UDEV:	L: 0
[ 0:31.728] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_14
[ 0:31.728] ## UDEV:	Q: 28831
[ 0:31.728] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-49
[ 0:31.728] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.728] ## UDEV:	E: DEVNAME=/dev/dm-49
[ 0:31.728] ## UDEV:	E: DEVTYPE=disk
[ 0:31.728] ## UDEV:	E: DISKSEQ=28831
[ 0:31.728] ## UDEV:	E: MAJOR=254
[ 0:31.728] ## UDEV:	E: MINOR=49
[ 0:31.728] ## UDEV:	E: USEC_INITIALIZED=29994342720
[ 0:31.728] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.728] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_14
[ 0:31.728] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtxnwk2sgs9H0iRYfhExHQHhj8FeUZLHDK
[ 0:31.728] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.728] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.728] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.728] ## UDEV:	E: DM_LV_NAME=LV1_rimage_14
[ 0:31.728] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.728] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_14
[ 0:31.728] ## UDEV:	E: TAGS=:systemd:
[ 0:31.728] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.728] ## UDEV:	
[ 0:31.728] ## UDEV:	P: /devices/virtual/block/dm-5
[ 0:31.728] ## UDEV:	M: dm-5
[ 0:31.728] ## UDEV:	R: 5
[ 0:31.728] ## UDEV:	U: block
[ 0:31.728] ## UDEV:	T: disk
[ 0:31.728] ## UDEV:	D: b 254:5
[ 0:31.728] ## UDEV:	N: dm-5
[ 0:31.728] ## UDEV:	L: 0
[ 0:31.728] ## UDEV:	S: mapper/LVMTEST500118pv3
[ 0:31.728] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv3
[ 0:31.728] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv3
[ 0:31.728] ## UDEV:	Q: 28771
[ 0:31.728] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-5
[ 0:31.728] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.728] ## UDEV:	E: DEVNAME=/dev/dm-5
[ 0:31.728] ## UDEV:	E: DEVTYPE=disk
[ 0:31.728] ## UDEV:	E: DISKSEQ=28771
[ 0:31.728] ## UDEV:	E: MAJOR=254
[ 0:31.728] ## UDEV:	E: MINOR=5
[ 0:31.728] ## UDEV:	E: USEC_INITIALIZED=29990194284
[ 0:31.728] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.728] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.728] ## UDEV:	E: DM_NAME=LVMTEST500118pv3
[ 0:31.728] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv3
[ 0:31.728] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.728] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.728] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.728] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118pv3 /dev/disk/by-id/dm-name-LVMTEST500118pv3 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv3
[ 0:31.728] ## UDEV:	E: TAGS=:systemd:
[ 0:31.728] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.728] ## UDEV:	
[ 0:31.728] ## UDEV:	P: /devices/virtual/block/dm-50
[ 0:31.728] ## UDEV:	M: dm-50
[ 0:31.728] ## UDEV:	R: 50
[ 0:31.728] ## UDEV:	U: block
[ 0:31.728] ## UDEV:	T: disk
[ 0:31.728] ## UDEV:	D: b 254:50
[ 0:31.728] ## UDEV:	N: dm-50
[ 0:31.728] ## UDEV:	L: 0
[ 0:31.728] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rmeta_15
[ 0:31.728] ## UDEV:	Q: 28832
[ 0:31.728] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-50
[ 0:31.728] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.728] ## UDEV:	E: DEVNAME=/dev/dm-50
[ 0:31.800] ## UDEV:	E: DEVTYPE=disk
[ 0:31.800] ## UDEV:	E: DISKSEQ=28832
[ 0:31.800] ## UDEV:	E: MAJOR=254
[ 0:31.800] ## UDEV:	E: MINOR=50
[ 0:31.800] ## UDEV:	E: USEC_INITIALIZED=29994344531
[ 0:31.800] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.800] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rmeta_15
[ 0:31.800] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtdCmqb463WIoq8Jf7vmvbwKinVFX0kDSA
[ 0:31.800] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.800] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.800] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.800] ## UDEV:	E: DM_LV_NAME=LV1_rmeta_15
[ 0:31.800] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.800] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rmeta_15
[ 0:31.800] ## UDEV:	E: TAGS=:systemd:
[ 0:31.800] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.800] ## UDEV:	
[ 0:31.800] ## UDEV:	P: /devices/virtual/block/dm-51
[ 0:31.800] ## UDEV:	M: dm-51
[ 0:31.800] ## UDEV:	R: 51
[ 0:31.800] ## UDEV:	U: block
[ 0:31.800] ## UDEV:	T: disk
[ 0:31.800] ## UDEV:	D: b 254:51
[ 0:31.800] ## UDEV:	N: dm-51
[ 0:31.800] ## UDEV:	L: 0
[ 0:31.800] ## UDEV:	S: mapper/LVMTEST500118vg-LV1_rimage_15
[ 0:31.800] ## UDEV:	Q: 28833
[ 0:31.800] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-51
[ 0:31.800] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.800] ## UDEV:	E: DEVNAME=/dev/dm-51
[ 0:31.800] ## UDEV:	E: DEVTYPE=disk
[ 0:31.800] ## UDEV:	E: DISKSEQ=28833
[ 0:31.800] ## UDEV:	E: MAJOR=254
[ 0:31.800] ## UDEV:	E: MINOR=51
[ 0:31.800] ## UDEV:	E: USEC_INITIALIZED=29994346372
[ 0:31.800] ## UDEV:	E: DM_UDEV_DISABLE_SUBSYSTEM_RULES_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_UDEV_DISABLE_DISK_RULES_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_UDEV_DISABLE_OTHER_RULES_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.800] ## UDEV:	E: DM_NAME=LVMTEST500118vg-LV1_rimage_15
[ 0:31.800] ## UDEV:	E: DM_UUID=LVM-o3wPW3IEjhaAwJc1MK07qqnYgO61bnmtvULsrV4VKIQovvvPUaqdvK6zXKegCBvX
[ 0:31.800] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.800] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.800] ## UDEV:	E: DM_VG_NAME=LVMTEST500118vg
[ 0:31.800] ## UDEV:	E: DM_LV_NAME=LV1_rimage_15
[ 0:31.800] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.800] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118vg-LV1_rimage_15
[ 0:31.800] ## UDEV:	E: TAGS=:systemd:
[ 0:31.800] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.800] ## UDEV:	
[ 0:31.800] ## UDEV:	P: /devices/virtual/block/dm-6
[ 0:31.800] ## UDEV:	M: dm-6
[ 0:31.800] ## UDEV:	R: 6
[ 0:31.800] ## UDEV:	U: block
[ 0:31.800] ## UDEV:	T: disk
[ 0:31.800] ## UDEV:	D: b 254:6
[ 0:31.800] ## UDEV:	N: dm-6
[ 0:31.800] ## UDEV:	L: 0
[ 0:31.800] ## UDEV:	S: mapper/LVMTEST500118pv4
[ 0:31.800] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv4
[ 0:31.800] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv4
[ 0:31.800] ## UDEV:	Q: 28772
[ 0:31.800] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-6
[ 0:31.800] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.800] ## UDEV:	E: DEVNAME=/dev/dm-6
[ 0:31.800] ## UDEV:	E: DEVTYPE=disk
[ 0:31.800] ## UDEV:	E: DISKSEQ=28772
[ 0:31.800] ## UDEV:	E: MAJOR=254
[ 0:31.800] ## UDEV:	E: MINOR=6
[ 0:31.800] ## UDEV:	E: USEC_INITIALIZED=29990195543
[ 0:31.800] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.800] ## UDEV:	E: DM_NAME=LVMTEST500118pv4
[ 0:31.800] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv4
[ 0:31.800] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.800] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.800] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.800] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118pv4 /dev/disk/by-id/dm-name-LVMTEST500118pv4 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv4
[ 0:31.800] ## UDEV:	E: TAGS=:systemd:
[ 0:31.800] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.800] ## UDEV:	
[ 0:31.800] ## UDEV:	P: /devices/virtual/block/dm-7
[ 0:31.800] ## UDEV:	M: dm-7
[ 0:31.800] ## UDEV:	R: 7
[ 0:31.800] ## UDEV:	U: block
[ 0:31.800] ## UDEV:	T: disk
[ 0:31.800] ## UDEV:	D: b 254:7
[ 0:31.800] ## UDEV:	N: dm-7
[ 0:31.800] ## UDEV:	L: 0
[ 0:31.800] ## UDEV:	S: mapper/LVMTEST500118pv5
[ 0:31.800] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv5
[ 0:31.800] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv5
[ 0:31.800] ## UDEV:	Q: 28773
[ 0:31.800] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-7
[ 0:31.800] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.800] ## UDEV:	E: DEVNAME=/dev/dm-7
[ 0:31.800] ## UDEV:	E: DEVTYPE=disk
[ 0:31.800] ## UDEV:	E: DISKSEQ=28773
[ 0:31.800] ## UDEV:	E: MAJOR=254
[ 0:31.800] ## UDEV:	E: MINOR=7
[ 0:31.800] ## UDEV:	E: USEC_INITIALIZED=29990196844
[ 0:31.800] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.800] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.800] ## UDEV:	E: DM_NAME=LVMTEST500118pv5
[ 0:31.800] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv5
[ 0:31.800] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.800] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.800] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.800] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118pv5 /dev/disk/by-id/dm-name-LVMTEST500118pv5 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv5
[ 0:31.800] ## UDEV:	E: TAGS=:systemd:
[ 0:31.800] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.800] ## UDEV:	
[ 0:31.800] ## UDEV:	P: /devices/virtual/block/dm-8
[ 0:31.800] ## UDEV:	M: dm-8
[ 0:31.800] ## UDEV:	R: 8
[ 0:31.800] ## UDEV:	U: block
[ 0:31.800] ## UDEV:	T: disk
[ 0:31.800] ## UDEV:	D: b 254:8
[ 0:31.800] ## UDEV:	N: dm-8
[ 0:31.800] ## UDEV:	L: 0
[ 0:31.800] ## UDEV:	S: mapper/LVMTEST500118pv6
[ 0:31.800] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv6
[ 0:31.887] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv6
[ 0:31.887] ## UDEV:	Q: 28774
[ 0:31.887] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-8
[ 0:31.887] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.887] ## UDEV:	E: DEVNAME=/dev/dm-8
[ 0:31.887] ## UDEV:	E: DEVTYPE=disk
[ 0:31.887] ## UDEV:	E: DISKSEQ=28774
[ 0:31.887] ## UDEV:	E: MAJOR=254
[ 0:31.887] ## UDEV:	E: MINOR=8
[ 0:31.887] ## UDEV:	E: USEC_INITIALIZED=29990199222
[ 0:31.887] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.887] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.887] ## UDEV:	E: DM_NAME=LVMTEST500118pv6
[ 0:31.887] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv6
[ 0:31.887] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.887] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.887] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.887] ## UDEV:	E: DEVLINKS=/dev/mapper/LVMTEST500118pv6 /dev/disk/by-id/dm-name-LVMTEST500118pv6 /dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv6
[ 0:31.887] ## UDEV:	E: TAGS=:systemd:
[ 0:31.887] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.887] ## UDEV:	
[ 0:31.887] ## UDEV:	P: /devices/virtual/block/dm-9
[ 0:31.887] ## UDEV:	M: dm-9
[ 0:31.887] ## UDEV:	R: 9
[ 0:31.887] ## UDEV:	U: block
[ 0:31.887] ## UDEV:	T: disk
[ 0:31.887] ## UDEV:	D: b 254:9
[ 0:31.887] ## UDEV:	N: dm-9
[ 0:31.887] ## UDEV:	L: 0
[ 0:31.887] ## UDEV:	S: disk/by-id/dm-uuid-TEST-LVMTEST500118pv7
[ 0:31.887] ## UDEV:	S: mapper/LVMTEST500118pv7
[ 0:31.887] ## UDEV:	S: disk/by-id/dm-name-LVMTEST500118pv7
[ 0:31.887] ## UDEV:	Q: 28775
[ 0:31.887] ## UDEV:	E: DEVPATH=/devices/virtual/block/dm-9
[ 0:31.887] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.887] ## UDEV:	E: DEVNAME=/dev/dm-9
[ 0:31.887] ## UDEV:	E: DEVTYPE=disk
[ 0:31.887] ## UDEV:	E: DISKSEQ=28775
[ 0:31.887] ## UDEV:	E: MAJOR=254
[ 0:31.887] ## UDEV:	E: MINOR=9
[ 0:31.887] ## UDEV:	E: USEC_INITIALIZED=29990198957
[ 0:31.887] ## UDEV:	E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
[ 0:31.887] ## UDEV:	E: DM_ACTIVATION=1
[ 0:31.887] ## UDEV:	E: DM_NAME=LVMTEST500118pv7
[ 0:31.887] ## UDEV:	E: DM_UUID=TEST-LVMTEST500118pv7
[ 0:31.887] ## UDEV:	E: DM_SUSPENDED=0
[ 0:31.887] ## UDEV:	E: DM_UDEV_RULES_VSN=2
[ 0:31.887] ## UDEV:	E: SYSTEMD_READY=1
[ 0:31.887] ## UDEV:	E: DEVLINKS=/dev/disk/by-id/dm-uuid-TEST-LVMTEST500118pv7 /dev/mapper/LVMTEST500118pv7 /dev/disk/by-id/dm-name-LVMTEST500118pv7
[ 0:31.887] ## UDEV:	E: TAGS=:systemd:
[ 0:31.887] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.887] ## UDEV:	
[ 0:31.887] ## UDEV:	P: /devices/virtual/block/loop0
[ 0:31.887] ## UDEV:	M: loop0
[ 0:31.887] ## UDEV:	R: 0
[ 0:31.887] ## UDEV:	U: block
[ 0:31.887] ## UDEV:	T: disk
[ 0:31.887] ## UDEV:	D: b 7:0
[ 0:31.887] ## UDEV:	N: loop0
[ 0:31.887] ## UDEV:	L: 0
[ 0:31.887] ## UDEV:	S: disk/by-diskseq/25865
[ 0:31.887] ## UDEV:	Q: 25880
[ 0:31.887] ## UDEV:	E: DEVPATH=/devices/virtual/block/loop0
[ 0:31.887] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.887] ## UDEV:	E: DEVNAME=/dev/loop0
[ 0:31.887] ## UDEV:	E: DEVTYPE=disk
[ 0:31.887] ## UDEV:	E: DISKSEQ=25880
[ 0:31.887] ## UDEV:	E: MAJOR=7
[ 0:31.887] ## UDEV:	E: MINOR=0
[ 0:31.887] ## UDEV:	E: USEC_INITIALIZED=441626813
[ 0:31.887] ## UDEV:	E: SYSTEMD_READY=0
[ 0:31.887] ## UDEV:	E: DEVLINKS=/dev/disk/by-diskseq/25865
[ 0:31.887] ## UDEV:	E: TAGS=:systemd:
[ 0:31.887] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.887] ## UDEV:	
[ 0:31.887] ## UDEV:	P: /devices/virtual/block/loop1
[ 0:31.887] ## UDEV:	M: loop1
[ 0:31.887] ## UDEV:	R: 1
[ 0:31.887] ## UDEV:	U: block
[ 0:31.887] ## UDEV:	T: disk
[ 0:31.887] ## UDEV:	D: b 7:1
[ 0:31.887] ## UDEV:	N: loop1
[ 0:31.887] ## UDEV:	L: 0
[ 0:31.887] ## UDEV:	S: disk/by-diskseq/21380
[ 0:31.887] ## UDEV:	Q: 21671
[ 0:31.887] ## UDEV:	E: DEVPATH=/devices/virtual/block/loop1
[ 0:31.887] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.887] ## UDEV:	E: DEVNAME=/dev/loop1
[ 0:31.887] ## UDEV:	E: DEVTYPE=disk
[ 0:31.887] ## UDEV:	E: DISKSEQ=21671
[ 0:31.887] ## UDEV:	E: MAJOR=7
[ 0:31.887] ## UDEV:	E: MINOR=1
[ 0:31.887] ## UDEV:	E: USEC_INITIALIZED=457457416
[ 0:31.887] ## UDEV:	E: SYSTEMD_READY=0
[ 0:31.887] ## UDEV:	E: DEVLINKS=/dev/disk/by-diskseq/21380
[ 0:31.887] ## UDEV:	E: TAGS=:systemd:
[ 0:31.887] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.887] ## UDEV:	
[ 0:31.887] ## UDEV:	P: /devices/virtual/block/loop2
[ 0:31.887] ## UDEV:	M: loop2
[ 0:31.887] ## UDEV:	R: 2
[ 0:31.887] ## UDEV:	U: block
[ 0:31.887] ## UDEV:	T: disk
[ 0:31.887] ## UDEV:	D: b 7:2
[ 0:31.887] ## UDEV:	N: loop2
[ 0:31.887] ## UDEV:	L: 0
[ 0:31.887] ## UDEV:	S: disk/by-diskseq/21381
[ 0:31.887] ## UDEV:	Q: 21669
[ 0:31.887] ## UDEV:	E: DEVPATH=/devices/virtual/block/loop2
[ 0:31.887] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.887] ## UDEV:	E: DEVNAME=/dev/loop2
[ 0:31.887] ## UDEV:	E: DEVTYPE=disk
[ 0:31.887] ## UDEV:	E: DISKSEQ=21669
[ 0:31.887] ## UDEV:	E: MAJOR=7
[ 0:31.887] ## UDEV:	E: MINOR=2
[ 0:31.887] ## UDEV:	E: USEC_INITIALIZED=776183912
[ 0:31.887] ## UDEV:	E: SYSTEMD_READY=0
[ 0:31.887] ## UDEV:	E: DEVLINKS=/dev/disk/by-diskseq/21381
[ 0:31.887] ## UDEV:	E: TAGS=:systemd:
[ 0:31.887] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.887] ## UDEV:	
[ 0:31.887] ## UDEV:	P: /devices/virtual/block/loop3
[ 0:31.887] ## UDEV:	M: loop3
[ 0:31.887] ## UDEV:	R: 3
[ 0:31.887] ## UDEV:	U: block
[ 0:31.887] ## UDEV:	T: disk
[ 0:31.887] ## UDEV:	D: b 7:3
[ 0:31.887] ## UDEV:	N: loop3
[ 0:31.887] ## UDEV:	L: 0
[ 0:31.887] ## UDEV:	S: disk/by-diskseq/21382
[ 0:31.887] ## UDEV:	Q: 21670
[ 0:31.887] ## UDEV:	E: DEVPATH=/devices/virtual/block/loop3
[ 0:31.887] ## UDEV:	E: SUBSYSTEM=block
[ 0:31.887] ## UDEV:	E: DEVNAME=/dev/loop3
[ 0:31.887] ## UDEV:	E: DEVTYPE=disk
[ 0:31.887] ## UDEV:	E: DISKSEQ=21670
[ 0:31.887] ## UDEV:	E: MAJOR=7
[ 0:31.887] ## UDEV:	E: MINOR=3
[ 0:31.887] ## UDEV:	E: USEC_INITIALIZED=776186533
[ 0:31.887] ## UDEV:	E: SYSTEMD_READY=0
[ 0:31.887] ## UDEV:	E: DEVLINKS=/dev/disk/by-diskseq/21382
[ 0:31.889] ## UDEV:	E: TAGS=:systemd:
[ 0:31.889] ## UDEV:	E: CURRENT_TAGS=:systemd:
[ 0:31.889] ## UDEV:	
[ 0:31.889] <======== Free space ========>
[ 0:31.889] ## DF_H:	Filesystem                              Size  Used Avail Use% Mounted on
[ 0:31.909] ## DF_H:	devtmpfs                                4.0M     0  4.0M   0% /dev
[ 0:31.909] ## DF_H:	tmpfs                                   7.7G     0  7.7G   0% /dev/shm
[ 0:31.909] ## DF_H:	tmpfs                                   3.1G   18M  3.1G   1% /run
[ 0:31.909] ## DF_H:	/dev/mapper/rhel_hp--dl380eg8--02-root   70G  4.1G   66G   6% /
[ 0:31.909] ## DF_H:	/dev/sda1                               960M  313M  648M  33% /boot
[ 0:31.909] ## DF_H:	/dev/mapper/rhel_hp--dl380eg8--02-home  853G   37G  816G   5% /home
[ 0:31.909] ## DF_H:	tmpfs                                   1.6G  4.0K  1.6G   1% /run/user/0
[ 0:31.909] <======== Script file "lvconvert-raid-reshape-stripes-load-reload.sh" ========>
[ 0:31.911] ## Line: 1 	 #!/usr/bin/env bash
[ 0:31.917] ## Line: 2 	 
[ 0:31.917] ## Line: 3 	 # Copyright (C) 2017 Red Hat, Inc. All rights reserved.
[ 0:31.917] ## Line: 4 	 #
[ 0:31.917] ## Line: 5 	 # This copyrighted material is made available to anyone wishing to use,
[ 0:31.917] ## Line: 6 	 # modify, copy, or redistribute it subject to the terms and conditions
[ 0:31.917] ## Line: 7 	 # of the GNU General Public License v.2.
[ 0:31.917] ## Line: 8 	 #
[ 0:31.917] ## Line: 9 	 # You should have received a copy of the GNU General Public License
[ 0:31.917] ## Line: 10 	 # along with this program; if not, write to the Free Software Foundation,
[ 0:31.917] ## Line: 11 	 # Inc., 51 Franklin Street, Fifth Floor, Boston, MA2110-1301 USA
[ 0:31.917] ## Line: 12 	 
[ 0:31.917] ## Line: 13 	 
[ 0:31.917] ## Line: 14 	 SKIP_WITH_LVMPOLLD=1
[ 0:31.917] ## Line: 15 	 
[ 0:31.917] ## Line: 16 	 . lib/inittest
[ 0:31.917] ## Line: 17 	 
[ 0:31.917] ## Line: 18 	 # Test reshaping under io load
[ 0:31.917] ## Line: 19 	 
[ 0:31.917] ## Line: 20 	 which md5sum || skip
[ 0:31.917] ## Line: 21 	 which mkfs.ext4 || skip
[ 0:31.917] ## Line: 22 	 aux have_raid 1 14 || skip
[ 0:31.917] ## Line: 23 	 
[ 0:31.917] ## Line: 24 	 mount_dir="mnt"
[ 0:31.917] ## Line: 25 	 
[ 0:31.917] ## Line: 26 	 cleanup_mounted_and_teardown()
[ 0:31.917] ## Line: 27 	 {
[ 0:31.917] ## Line: 28 	 	umount "$mount_dir" || true
[ 0:31.917] ## Line: 29 	 	aux teardown
[ 0:31.917] ## Line: 30 	 }
[ 0:31.917] ## Line: 31 	 
[ 0:31.917] ## Line: 32 	 checksum_()
[ 0:31.917] ## Line: 33 	 {
[ 0:31.917] ## Line: 34 	 	md5sum "$1" | cut -f1 -d' '
[ 0:31.917] ## Line: 35 	 }
[ 0:31.917] ## Line: 36 	 
[ 0:31.917] ## Line: 37 	 aux prepare_pvs 16 32
[ 0:31.917] ## Line: 38 	 
[ 0:31.917] ## Line: 39 	 get_devs
[ 0:31.917] ## Line: 40 	 
[ 0:31.917] ## Line: 41 	 vgcreate $SHARED -s 1M "$vg" "${DEVICES[@]}"
[ 0:31.917] ## Line: 42 	 
[ 0:31.917] ## Line: 43 	 trap 'cleanup_mounted_and_teardown' EXIT
[ 0:31.917] ## Line: 44 	 
[ 0:31.917] ## Line: 45 	 # Create 10-way striped raid5 (11 legs total)
[ 0:31.917] ## Line: 46 	 lvcreate --yes --type raid5_ls --stripesize 64K --stripes 10 -L4 -n$lv1 $vg
[ 0:31.917] ## Line: 47 	 check lv_first_seg_field $vg/$lv1 segtype "raid5_ls"
[ 0:31.917] ## Line: 48 	 check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
[ 0:31.917] ## Line: 49 	 check lv_first_seg_field $vg/$lv1 data_stripes 10
[ 0:31.917] ## Line: 50 	 check lv_first_seg_field $vg/$lv1 stripes 11
[ 0:31.917] ## Line: 51 	 wipefs -a "$DM_DEV_DIR/$vg/$lv1"
[ 0:31.917] ## Line: 52 	 mkfs -t ext4 "$DM_DEV_DIR/$vg/$lv1"
[ 0:31.917] ## Line: 53 	 
[ 0:31.917] ## Line: 54 	 mkdir -p "$mount_dir"
[ 0:31.917] ## Line: 55 	 mount "$DM_DEV_DIR/$vg/$lv1" "$mount_dir"
[ 0:31.917] ## Line: 56 	 
[ 0:31.917] ## Line: 57 	 echo 3 >/proc/sys/vm/drop_caches
[ 0:31.917] ## Line: 58 	 # FIXME: This is filling up ram disk. Use sane amount of data please! Rate limit the data written!
[ 0:31.917] ## Line: 59 	 dd if=/dev/urandom of="$mount_dir/random" bs=1M count=4 conv=fdatasync
[ 0:31.917] ## Line: 60 	 checksum_ "$mount_dir/random" >MD5
[ 0:31.917] ## Line: 61 	 
[ 0:31.917] ## Line: 62 	 # FIXME: wait_for_sync - is this really testing anything under load?
[ 0:31.917] ## Line: 63 	 aux wait_for_sync $vg $lv1
[ 0:31.917] ## Line: 64 	 aux delay_dev "$dev2" 0 200
[ 0:31.917] ## Line: 65 	 
[ 0:31.917] ## Line: 66 	 # Reshape it to 15 data stripes
[ 0:31.917] ## Line: 67 	 lvconvert --yes --stripes 15 $vg/$lv1
[ 0:31.917] ## Line: 68 	 check lv_first_seg_field $vg/$lv1 segtype "raid5_ls"
[ 0:31.917] ## Line: 69 	 check lv_first_seg_field $vg/$lv1 stripesize "64.00k"
[ 0:31.917] ## Line: 70 	 check lv_first_seg_field $vg/$lv1 data_stripes 15
[ 0:31.917] ## Line: 71 	 check lv_first_seg_field $vg/$lv1 stripes 16
[ 0:31.917] ## Line: 72 	 
[ 0:31.917] ## Line: 73 	 # Reload table during reshape to test for data corruption
[ 0:31.917] ## Line: 74 	 case "$(uname -r)" in
[ 0:31.917] ## Line: 75 	   5.[89]*|5.1[012].*|3.10.0-862*|4.18.0-*.el8*)
[ 0:31.917] ## Line: 76 	 	should not echo "Skipping table reload test on on unfixed kernel!!!" ;;
[ 0:31.917] ## Line: 77 	   *)
[ 0:31.917] ## Line: 78 	 for i in {0..5}
[ 0:31.917] ## Line: 79 	 do
[ 0:31.917] ## Line: 80 	 	dmsetup table $vg-$lv1|dmsetup load $vg-$lv1
[ 0:31.917] ## Line: 81 	 	dmsetup suspend --noflush $vg-$lv1
[ 0:31.917] ## Line: 82 	 	dmsetup resume $vg-$lv1
[ 0:31.917] ## Line: 83 	 	sleep .5
[ 0:31.917] ## Line: 84 	 done
[ 0:31.917] ## Line: 85 	 
[ 0:31.917] ## Line: 86 	 esac
[ 0:31.917] ## Line: 87 	 
[ 0:31.917] ## Line: 88 	 aux delay_dev "$dev2" 0
[ 0:31.917] ## Line: 89 	 
[ 0:31.917] ## Line: 90 	 kill -9 %% || true
[ 0:31.917] ## Line: 91 	 wait
[ 0:31.917] ## Line: 92 	 
[ 0:31.917] ## Line: 93 	 checksum_ "$mount_dir/random" >MD5_new
[ 0:31.917] ## Line: 94 	 
[ 0:31.917] ## Line: 95 	 umount "$mount_dir"
[ 0:31.917] ## Line: 96 	 
[ 0:31.917] ## Line: 97 	 fsck -fn "$DM_DEV_DIR/$vg/$lv1"
[ 0:31.917] ## Line: 98 	 
[ 0:31.917] ## Line: 99 	 # Compare checksum is matching
[ 0:31.917] ## Line: 100 	 cat MD5 MD5_new
[ 0:31.917] ## Line: 101 	 diff MD5 MD5_new
[ 0:31.917] ## Line: 102 	 
[ 0:31.917] ## Line: 103 	 vgremove -ff $vg
[ 0:31.917] cleanup_mounted_and_teardown
[ 0:31.918] #lvconvert-raid-reshape-stripes-load-reload.sh:1+ cleanup_mounted_and_teardown
[ 0:31.918] #lvconvert-raid-reshape-stripes-load-reload.sh:28+ umount mnt
[ 0:31.918] umount: mnt: not mounted.
[ 0:31.922] #lvconvert-raid-reshape-stripes-load-reload.sh:28+ true
[ 0:31.922] #lvconvert-raid-reshape-stripes-load-reload.sh:29+ aux teardown
[ 0:31.922] ## teardown.......## removing stray mapped devices with names beginning with LVMTEST500118: 
[ 0:32.094] .6,17239,30029053038,-;brd: module unloaded
[ 0:33.728] .ok

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2
  2024-03-03 13:16 ` Xiao Ni
@ 2024-03-04  1:07   ` Yu Kuai
  2024-03-04  1:23     ` Yu Kuai
  0 siblings, 1 reply; 19+ messages in thread
From: Yu Kuai @ 2024-03-04  1:07 UTC (permalink / raw)
  To: Xiao Ni, Yu Kuai
  Cc: zkabelac, agk, snitzer, mpatocka, dm-devel, song, heinzm, neilb,
	jbrassow, linux-kernel, linux-raid, yi.zhang, yangerkun,
	yukuai (C)

Hi,

On 2024/03/03 21:16, Xiao Ni wrote:
> Hi all
> 
> There is an error report from the lvm regression tests. The failing case is
> lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
> tried to fix the dm-raid regression problems too. In my patch set, after
> reverting ad39c08186f8a0f221337985036ba86731d6aafe (md: Don't register
> sync_thread for reshape directly), this problem doesn't appear.

How often did you see this test fail? I've been running the tests for over
two days now, for 30+ rounds, and this test has never failed in my VM.

Thanks,
Kuai

> 
> I put the log in the attachment.
> 
> On Fri, Mar 1, 2024 at 6:03 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> link to part1: https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/
>>
>> part1 contains fixes for deadlocks for stopping sync_thread
>>
>> This set contains fixes:
>>   - reshape can start unexpected, cause data corruption, patch 1,5,6;
>>   - deadlocks that reshape concurrent with IO, patch 8;
>>   - a lockdep warning, patch 9;
>>
>> I'm runing lvm2 tests with following scripts with a few rounds now,
>>
>> for t in `ls test/shell`; do
>>          if cat test/shell/$t | grep raid &> /dev/null; then
>>                  make check T=shell/$t
>>          fi
>> done
>>
>> There are no deadlock and no fs corrupt now, however, there are still four
>> failed tests:
>>
>> ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
>> ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
>> ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
>> ###       failed: [ndev-vanilla] shell/lvextend-raid.sh
>>
>> And failed reasons are the same:
>>
>> ## ERROR: The test started dmeventd (147856) unexpectedly
>>
>> I have no clue yet, and it seems other folks doesn't have this issue.
>>
>> Yu Kuai (9):
>>    md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
>>    md: export helpers to stop sync_thread
>>    md: export helper md_is_rdwr()
>>    md: add a new helper reshape_interrupted()
>>    dm-raid: really frozen sync_thread during suspend
>>    md/dm-raid: don't call md_reap_sync_thread() directly
>>    dm-raid: add a new helper prepare_suspend() in md_personality
>>    dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io
>>      concurrent with reshape
>>    dm-raid: fix lockdep waring in "pers->hot_add_disk"
>>
>>   drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
>>   drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
>>   drivers/md/md.h      | 38 +++++++++++++++++-
>>   drivers/md/raid5.c   | 32 ++++++++++++++-
>>   4 files changed, 196 insertions(+), 40 deletions(-)
>>
>> --
>> 2.39.2
>>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2
  2024-03-04  1:07   ` Yu Kuai
@ 2024-03-04  1:23     ` Yu Kuai
  2024-03-04  1:25       ` Xiao Ni
  0 siblings, 1 reply; 19+ messages in thread
From: Yu Kuai @ 2024-03-04  1:23 UTC (permalink / raw)
  To: Yu Kuai, Xiao Ni
  Cc: zkabelac, agk, snitzer, mpatocka, dm-devel, song, heinzm, neilb,
	jbrassow, linux-kernel, linux-raid, yi.zhang, yangerkun,
	yukuai (C)

Hi,

On 2024/03/04 9:07, Yu Kuai wrote:
> Hi,
> 
> On 2024/03/03 21:16, Xiao Ni wrote:
>> Hi all
>>
>> There is an error report from the lvm regression tests. The failing case is
>> lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
>> tried to fix the dm-raid regression problems too. In my patch set, after
>> reverting ad39c08186f8a0f221337985036ba86731d6aafe (md: Don't register
>> sync_thread for reshape directly), this problem doesn't appear.
> 
> How often did you see this test fail? I've been running the tests for over
> two days now, for 30+ rounds, and this test has never failed in my VM.

Taking a quick look, there is still a path in raid10 where
MD_RECOVERY_FROZEN can be cleared, so in theory this problem can still be
triggered. Can you test the following patch on top of this set?
I'll keep running the test myself.

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index a5f8419e2df1..7ca29469123a 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -4575,7 +4575,8 @@ static int raid10_start_reshape(struct mddev *mddev)
         return 0;

  abort:
-       mddev->recovery = 0;
+       if (mddev->gendisk)
+               mddev->recovery = 0;
         spin_lock_irq(&conf->device_lock);
         conf->geo = conf->prev;
         mddev->raid_disks = conf->geo.raid_disks;
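
For what it's worth, the ->gendisk check works because dm-raid drives an
mddev that is never registered as an md block device, so ->gendisk stays
NULL for dm-raid and the error path now leaves MD_RECOVERY_FROZEN set
there. On a native md array the frozen state is visible from userspace;
a rough check (illustrative only, the md0 name is an assumption):

	# "frozen" here means MD_RECOVERY_FROZEN is set; a dm-raid set has
	# no /sys/block/mdX node at all, since it has no md gendisk.
	cat /sys/block/md0/md/sync_action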

Thanks,
Kuai
> 
> Thanks,
> Kuai
> 
>>
>> I put the log in the attachment.
>>
>> On Fri, Mar 1, 2024 at 6:03 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>
>>> From: Yu Kuai <yukuai3@huawei.com>
>>>
>>> link to part1: 
>>> https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/ 
>>>
>>>
>>> part1 contains fixes for deadlocks for stopping sync_thread
>>>
>>> This set contains fixes:
>>>   - reshape can start unexpected, cause data corruption, patch 1,5,6;
>>>   - deadlocks that reshape concurrent with IO, patch 8;
>>>   - a lockdep warning, patch 9;
>>>
>>> I'm runing lvm2 tests with following scripts with a few rounds now,
>>>
>>> for t in `ls test/shell`; do
>>>          if cat test/shell/$t | grep raid &> /dev/null; then
>>>                  make check T=shell/$t
>>>          fi
>>> done
>>>
>>> There are no deadlock and no fs corrupt now, however, there are still 
>>> four
>>> failed tests:
>>>
>>> ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
>>> ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
>>> ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
>>> ###       failed: [ndev-vanilla] shell/lvextend-raid.sh
>>>
>>> And failed reasons are the same:
>>>
>>> ## ERROR: The test started dmeventd (147856) unexpectedly
>>>
>>> I have no clue yet, and it seems other folks doesn't have this issue.
>>>
>>> Yu Kuai (9):
>>>    md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
>>>    md: export helpers to stop sync_thread
>>>    md: export helper md_is_rdwr()
>>>    md: add a new helper reshape_interrupted()
>>>    dm-raid: really frozen sync_thread during suspend
>>>    md/dm-raid: don't call md_reap_sync_thread() directly
>>>    dm-raid: add a new helper prepare_suspend() in md_personality
>>>    dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io
>>>      concurrent with reshape
>>>    dm-raid: fix lockdep waring in "pers->hot_add_disk"
>>>
>>>   drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
>>>   drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
>>>   drivers/md/md.h      | 38 +++++++++++++++++-
>>>   drivers/md/raid5.c   | 32 ++++++++++++++-
>>>   4 files changed, 196 insertions(+), 40 deletions(-)
>>>
>>> -- 
>>> 2.39.2
>>>
> 
> 
> .
> 


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2
  2024-03-04  1:23     ` Yu Kuai
@ 2024-03-04  1:25       ` Xiao Ni
  2024-03-04  8:27         ` Xiao Ni
  0 siblings, 1 reply; 19+ messages in thread
From: Xiao Ni @ 2024-03-04  1:25 UTC (permalink / raw)
  To: Yu Kuai
  Cc: zkabelac, agk, snitzer, mpatocka, dm-devel, song, heinzm, neilb,
	jbrassow, linux-kernel, linux-raid, yi.zhang, yangerkun,
	yukuai (C)

On Mon, Mar 4, 2024 at 9:24 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> On 2024/03/04 9:07, Yu Kuai wrote:
> > Hi,
> >
> > On 2024/03/03 21:16, Xiao Ni wrote:
> >> Hi all
> >>
> >> There is an error report from the lvm regression tests. The failing case is
> >> lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
> >> tried to fix the dm-raid regression problems too. In my patch set, after
> >> reverting ad39c08186f8a0f221337985036ba86731d6aafe (md: Don't register
> >> sync_thread for reshape directly), this problem doesn't appear.
> >

Hi Kuai
> > How often did you see this test fail? I've been running the tests for over
> > two days now, for 30+ rounds, and this test has never failed in my VM.

I ran it 5 times just now and it failed twice.

>
> Taking a quick look, there is still a path in raid10 where
> MD_RECOVERY_FROZEN can be cleared, so in theory this problem can still be
> triggered. Can you test the following patch on top of this set?
> I'll keep running the test myself.

Sure, I'll give the result later.

Regards
Xiao
>
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index a5f8419e2df1..7ca29469123a 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -4575,7 +4575,8 @@ static int raid10_start_reshape(struct mddev *mddev)
>          return 0;
>
>   abort:
> -       mddev->recovery = 0;
> +       if (mddev->gendisk)
> +               mddev->recovery = 0;
>          spin_lock_irq(&conf->device_lock);
>          conf->geo = conf->prev;
>          mddev->raid_disks = conf->geo.raid_disks;
>
> Thanks,
> Kuai
> >
> > Thanks,
> > Kuai
> >
> >>
> >> I put the log in the attachment.
> >>
> >> On Fri, Mar 1, 2024 at 6:03 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >>>
> >>> From: Yu Kuai <yukuai3@huawei.com>
> >>>
> >>> link to part1:
> >>> https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/
> >>>
> >>>
> >>> part1 contains fixes for deadlocks for stopping sync_thread
> >>>
> >>> This set contains fixes:
> >>>   - reshape can start unexpected, cause data corruption, patch 1,5,6;
> >>>   - deadlocks that reshape concurrent with IO, patch 8;
> >>>   - a lockdep warning, patch 9;
> >>>
> >>> I'm runing lvm2 tests with following scripts with a few rounds now,
> >>>
> >>> for t in `ls test/shell`; do
> >>>          if cat test/shell/$t | grep raid &> /dev/null; then
> >>>                  make check T=shell/$t
> >>>          fi
> >>> done
> >>>
> >>> There are no deadlock and no fs corrupt now, however, there are still
> >>> four
> >>> failed tests:
> >>>
> >>> ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
> >>> ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
> >>> ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
> >>> ###       failed: [ndev-vanilla] shell/lvextend-raid.sh
> >>>
> >>> And failed reasons are the same:
> >>>
> >>> ## ERROR: The test started dmeventd (147856) unexpectedly
> >>>
> >>> I have no clue yet, and it seems other folks doesn't have this issue.
> >>>
> >>> Yu Kuai (9):
> >>>    md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
> >>>    md: export helpers to stop sync_thread
> >>>    md: export helper md_is_rdwr()
> >>>    md: add a new helper reshape_interrupted()
> >>>    dm-raid: really frozen sync_thread during suspend
> >>>    md/dm-raid: don't call md_reap_sync_thread() directly
> >>>    dm-raid: add a new helper prepare_suspend() in md_personality
> >>>    dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io
> >>>      concurrent with reshape
> >>>    dm-raid: fix lockdep waring in "pers->hot_add_disk"
> >>>
> >>>   drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
> >>>   drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
> >>>   drivers/md/md.h      | 38 +++++++++++++++++-
> >>>   drivers/md/raid5.c   | 32 ++++++++++++++-
> >>>   4 files changed, 196 insertions(+), 40 deletions(-)
> >>>
> >>> --
> >>> 2.39.2
> >>>
> >
> >
> > .
> >
>


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2
  2024-03-04  1:25       ` Xiao Ni
@ 2024-03-04  8:27         ` Xiao Ni
  2024-03-04 11:06           ` Xiao Ni
  0 siblings, 1 reply; 19+ messages in thread
From: Xiao Ni @ 2024-03-04  8:27 UTC (permalink / raw)
  To: Yu Kuai
  Cc: zkabelac, agk, snitzer, mpatocka, dm-devel, song, heinzm, neilb,
	jbrassow, linux-kernel, linux-raid, yi.zhang, yangerkun,
	yukuai (C)

On Mon, Mar 4, 2024 at 9:25 AM Xiao Ni <xni@redhat.com> wrote:
>
> On Mon, Mar 4, 2024 at 9:24 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> >
> > Hi,
> >
> > On 2024/03/04 9:07, Yu Kuai wrote:
> > > Hi,
> > >
> > > On 2024/03/03 21:16, Xiao Ni wrote:
> > >> Hi all
> > >>
> > >> There is an error report from the lvm regression tests. The failing case is
> > >> lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
> > >> tried to fix the dm-raid regression problems too. In my patch set, after
> > >> reverting ad39c08186f8a0f221337985036ba86731d6aafe (md: Don't register
> > >> sync_thread for reshape directly), this problem doesn't appear.
> > >
>
> Hi Kuai
> > > How often did you see this test fail? I've been running the tests for over
> > > two days now, for 30+ rounds, and this test has never failed in my VM.
>
> I ran it 5 times just now and it failed twice.
>
> >
> > Taking a quick look, there is still a path in raid10 where
> > MD_RECOVERY_FROZEN can be cleared, so in theory this problem can still be
> > triggered. Can you test the following patch on top of this set?
> > I'll keep running the test myself.
>
> Sure, I'll give the result later.

Hi all

This is not stable to reproduce. With the raid10 patch applied, it
failed once in 28 runs. Without the raid10 patch, it failed once in 30
runs, but it failed frequently this morning.
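
For reference, these failure counts come from simply repeating the single
test in a loop along the following lines (a sketch, assuming an lvm2 source
tree; the tally is illustrative and only the "make check" invocation comes
from earlier in this thread):

	# Repeat the flaky test and count failures; run from an lvm2 checkout.
	t=shell/lvconvert-raid-reshape-stripes-load-reload.sh
	fail=0
	for i in $(seq 1 30); do
		make check T="$t" || fail=$((fail + 1))
	done
	echo "$fail failures out of 30 runs"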

Regards
Xiao
>
> Regards
> Xiao
> >
> > diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> > index a5f8419e2df1..7ca29469123a 100644
> > --- a/drivers/md/raid10.c
> > +++ b/drivers/md/raid10.c
> > @@ -4575,7 +4575,8 @@ static int raid10_start_reshape(struct mddev *mddev)
> >          return 0;
> >
> >   abort:
> > -       mddev->recovery = 0;
> > +       if (mddev->gendisk)
> > +               mddev->recovery = 0;
> >          spin_lock_irq(&conf->device_lock);
> >          conf->geo = conf->prev;
> >          mddev->raid_disks = conf->geo.raid_disks;
> >
> > Thanks,
> > Kuai
> > >
> > > Thanks,
> > > Kuai
> > >
> > >>
> > >> I put the log in the attachment.
> > >>
> > >> On Fri, Mar 1, 2024 at 6:03 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> > >>>
> > >>> From: Yu Kuai <yukuai3@huawei.com>
> > >>>
> > >>> link to part1:
> > >>> https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/
> > >>>
> > >>>
> > >>> part1 contains fixes for deadlocks for stopping sync_thread
> > >>>
> > >>> This set contains fixes:
> > >>>   - reshape can start unexpected, cause data corruption, patch 1,5,6;
> > >>>   - deadlocks that reshape concurrent with IO, patch 8;
> > >>>   - a lockdep warning, patch 9;
> > >>>
> > >>> I'm runing lvm2 tests with following scripts with a few rounds now,
> > >>>
> > >>> for t in `ls test/shell`; do
> > >>>          if cat test/shell/$t | grep raid &> /dev/null; then
> > >>>                  make check T=shell/$t
> > >>>          fi
> > >>> done
> > >>>
> > >>> There are no deadlock and no fs corrupt now, however, there are still
> > >>> four
> > >>> failed tests:
> > >>>
> > >>> ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
> > >>> ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
> > >>> ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
> > >>> ###       failed: [ndev-vanilla] shell/lvextend-raid.sh
> > >>>
> > >>> And failed reasons are the same:
> > >>>
> > >>> ## ERROR: The test started dmeventd (147856) unexpectedly
> > >>>
> > >>> I have no clue yet, and it seems other folks doesn't have this issue.
> > >>>
> > >>> Yu Kuai (9):
> > >>>    md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
> > >>>    md: export helpers to stop sync_thread
> > >>>    md: export helper md_is_rdwr()
> > >>>    md: add a new helper reshape_interrupted()
> > >>>    dm-raid: really frozen sync_thread during suspend
> > >>>    md/dm-raid: don't call md_reap_sync_thread() directly
> > >>>    dm-raid: add a new helper prepare_suspend() in md_personality
> > >>>    dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io
> > >>>      concurrent with reshape
> > >>>    dm-raid: fix lockdep waring in "pers->hot_add_disk"
> > >>>
> > >>>   drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
> > >>>   drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
> > >>>   drivers/md/md.h      | 38 +++++++++++++++++-
> > >>>   drivers/md/raid5.c   | 32 ++++++++++++++-
> > >>>   4 files changed, 196 insertions(+), 40 deletions(-)
> > >>>
> > >>> --
> > >>> 2.39.2
> > >>>
> > >
> > >
> > > .
> > >
> >


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2
  2024-03-04  8:27         ` Xiao Ni
@ 2024-03-04 11:06           ` Xiao Ni
  2024-03-04 11:52             ` Yu Kuai
  0 siblings, 1 reply; 19+ messages in thread
From: Xiao Ni @ 2024-03-04 11:06 UTC (permalink / raw)
  To: Yu Kuai
  Cc: zkabelac, agk, snitzer, mpatocka, dm-devel, song, heinzm, neilb,
	jbrassow, linux-kernel, linux-raid, yi.zhang, yangerkun,
	yukuai (C)

On Mon, Mar 4, 2024 at 4:27 PM Xiao Ni <xni@redhat.com> wrote:
>
> On Mon, Mar 4, 2024 at 9:25 AM Xiao Ni <xni@redhat.com> wrote:
> >
> > On Mon, Mar 4, 2024 at 9:24 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
> > >
> > > Hi,
> > >
> > > On 2024/03/04 9:07, Yu Kuai wrote:
> > > > Hi,
> > > >
> > > > On 2024/03/03 21:16, Xiao Ni wrote:
> > > >> Hi all
> > > >>
> > > >> There is an error report from the lvm regression tests. The case is
> > > >> lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
> > > >> tried to fix the dmraid regression problems too. In my patch set, after
> > > >> reverting ad39c08186f8a0f221337985036ba86731d6aafe (md: Don't register
> > > >> sync_thread for reshape directly), this problem doesn't appear.
> > > >
> >
> > Hi Kuai
> > > > How often did you see this test fail? I've been running the tests for
> > > > over two days now, for 30+ rounds, and this test never fails in my VM.
> >
> > I ran it 5 times just now and it failed twice.
> >
> > >
> > > Taking a quick look, there is still a path in raid10 through which
> > > MD_RECOVERY_FROZEN can be cleared, and in theory this problem can be
> > > triggered. Can you test the following patch on top of this set?
> > > I'll keep running the test myself.
> >
> > Sure, I'll give the result later.
>
> Hi all
>
> It's not easy to reproduce this reliably. After applying this raid10
> patch it failed once in 28 runs. Without the raid10 patch, it failed
> once in 30 runs, but it failed frequently this morning.

Hi all

After running the test 152 times with kernel 6.6, the problem can
appear there too. So this matches the state of 6.6; this patch set
just makes the problem appear more quickly.

Best Regards
Xiao


> [...]


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2
  2024-03-04 11:06           ` Xiao Ni
@ 2024-03-04 11:52             ` Yu Kuai
  0 siblings, 0 replies; 19+ messages in thread
From: Yu Kuai @ 2024-03-04 11:52 UTC (permalink / raw)
  To: Xiao Ni, Yu Kuai
  Cc: zkabelac, agk, snitzer, mpatocka, dm-devel, song, heinzm, neilb,
	jbrassow, linux-kernel, linux-raid, yi.zhang, yangerkun,
	yukuai (C)

Hi,

On 2024/03/04 19:06, Xiao Ni wrote:
> On Mon, Mar 4, 2024 at 4:27 PM Xiao Ni <xni@redhat.com> wrote:
>>
>> On Mon, Mar 4, 2024 at 9:25 AM Xiao Ni <xni@redhat.com> wrote:
>>>
>>> On Mon, Mar 4, 2024 at 9:24 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>>>>
>>>> Hi,
>>>>
>>>> On 2024/03/04 9:07, Yu Kuai wrote:
>>>>> Hi,
>>>>>
>>>>> On 2024/03/03 21:16, Xiao Ni wrote:
>>>>>> Hi all
>>>>>>
>>>>>> There is an error report from the lvm regression tests. The case is
>>>>>> lvconvert-raid-reshape-stripes-load-reload.sh. I saw this error when I
>>>>>> tried to fix the dmraid regression problems too. In my patch set, after
>>>>>> reverting ad39c08186f8a0f221337985036ba86731d6aafe (md: Don't register
>>>>>> sync_thread for reshape directly), this problem doesn't appear.
>>>>>
>>>
>>> Hi Kuai
>>>>> How often did you see this test fail? I've been running the tests for
>>>>> over two days now, for 30+ rounds, and this test never fails in my VM.
>>>
>>> I ran it 5 times just now and it failed twice.
>>>
>>>>
>>>> Taking a quick look, there is still a path in raid10 through which
>>>> MD_RECOVERY_FROZEN can be cleared, and in theory this problem can be
>>>> triggered. Can you test the following patch on top of this set?
>>>> I'll keep running the test myself.
>>>
>>> Sure, I'll give the result later.
>>
>> Hi all
>>
>> It's not easy to reproduce this reliably. After applying this raid10
>> patch it failed once in 28 runs. Without the raid10 patch, it failed
>> once in 30 runs, but it failed frequently this morning.
> 
> Hi all
> 
> After running the test 152 times with kernel 6.6, the problem can
> appear there too. So this matches the state of 6.6; this patch set
> just makes the problem appear more quickly.

I verified in my VM that, after testing 100+ times, this problem can be
triggered both with v6.6 and with v6.8-rc5 + this set.

I think we can merge this patchset and figure out later why the test
can fail.
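
For that later investigation, the per-kernel loop I use is a sketch
along these lines (assuming an lvm2 source checkout; run-$n.log is
just my own redirect of the harness output, not a file the harness
itself produces). It stops at the first failure and keeps that run's
log:

n=0
while [ $n -lt 100 ]; do
        n=$((n + 1))
        # capture this run's stdout/stderr; on failure, stop and keep it
        if ! make check T=shell/lvconvert-raid-reshape-stripes-load-reload.sh \
                        > run-$n.log 2>&1; then
                echo "run $n failed, log kept in run-$n.log"
                break
        fi
        rm -f run-$n.log        # run passed, discard its log
done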

Thanks,
Kuai


> [...]


^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2024-03-04 11:52 UTC | newest]

Thread overview: 19+ messages
2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
2024-03-01  9:56 ` [PATCH -next 1/9] md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume Yu Kuai
2024-03-01  9:56 ` [PATCH -next 2/9] md: export helpers to stop sync_thread Yu Kuai
2024-03-01  9:56 ` [PATCH -next 3/9] md: export helper md_is_rdwr() Yu Kuai
2024-03-01  9:56 ` [PATCH -next 4/9] md: add a new helper reshape_interrupted() Yu Kuai
2024-03-01  9:56 ` [PATCH -next 5/9] dm-raid: really frozen sync_thread during suspend Yu Kuai
2024-03-01  9:56 ` [PATCH -next 6/9] md/dm-raid: don't call md_reap_sync_thread() directly Yu Kuai
2024-03-01  9:56 ` [PATCH -next 7/9] dm-raid: add a new helper prepare_suspend() in md_personality Yu Kuai
2024-03-01  9:56 ` [PATCH -next 8/9] dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io concurrent with reshape Yu Kuai
2024-03-01  9:56 ` [PATCH -next 9/9] dm-raid: fix lockdep waring in "pers->hot_add_disk" Yu Kuai
2024-03-01 22:36 ` [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Song Liu
2024-03-02 15:56   ` Mike Snitzer
2024-03-03 13:16 ` Xiao Ni
2024-03-04  1:07   ` Yu Kuai
2024-03-04  1:23     ` Yu Kuai
2024-03-04  1:25       ` Xiao Ni
2024-03-04  8:27         ` Xiao Ni
2024-03-04 11:06           ` Xiao Ni
2024-03-04 11:52             ` Yu Kuai
