public inbox for linux-kernel@vger.kernel.org
* [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2
@ 2024-03-01  9:56 Yu Kuai
  2024-03-01  9:56 ` [PATCH -next 1/9] md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume Yu Kuai
                   ` (10 more replies)
  0 siblings, 11 replies; 19+ messages in thread
From: Yu Kuai @ 2024-03-01  9:56 UTC (permalink / raw)
  To: zkabelac, xni, agk, snitzer, mpatocka, dm-devel, song, yukuai3,
	heinzm, neilb, jbrassow
  Cc: linux-kernel, linux-raid, yukuai1, yi.zhang, yangerkun

From: Yu Kuai <yukuai3@huawei.com>

link to part1: https://lore.kernel.org/all/CAPhsuW7u1UKHCDOBDhD7DzOVtkGemDz_QnJ4DUq_kSN-Q3G66Q@mail.gmail.com/

Part 1 contains fixes for deadlocks when stopping sync_thread.

This set contains the following fixes:
 - reshape can start unexpectedly and corrupt data (patches 1, 5, 6);
 - a deadlock when reshape runs concurrently with IO (patch 8);
 - a lockdep warning (patch 9).

I have been running the lvm2 tests for a few rounds now with the following script:

for t in `ls test/shell`; do
        if grep -q raid "test/shell/$t"; then
                make check T=shell/$t
        fi
done
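The same selection can be sketched more compactly with `grep -l`, which prints only the names of matching files. The throwaway directory tree and file names below are made up for illustration, and `make check` is replaced by `echo` so the sketch runs standalone:

```shell
# Stand-in demo of the raid-test selection above: build a throwaway
# test/shell/ tree (hypothetical file names), then pick the tests whose
# content mentions "raid" via grep -l instead of a per-file cat|grep.
demo=$(mktemp -d)
mkdir -p "$demo/test/shell"
printf 'lvconvert --type raid1 ...\n' > "$demo/test/shell/lvconvert-raid.sh"
printf 'plain lv test\n'              > "$demo/test/shell/lvcreate-basic.sh"
cd "$demo"
for t in $(cd test/shell && grep -l raid *.sh); do
        echo "would run: make check T=shell/$t"
done
# → would run: make check T=shell/lvconvert-raid.sh
```

`grep -l` stops reading each file at the first match, so this also avoids spawning one grep per test file.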

There are no deadlocks and no filesystem corruption now; however, four tests
still fail:

###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
###       failed: [ndev-vanilla] shell/lvextend-raid.sh

All of them fail for the same reason:

## ERROR: The test started dmeventd (147856) unexpectedly

I have no clue yet, and it seems other folks don't hit this issue.

Yu Kuai (9):
  md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume
  md: export helpers to stop sync_thread
  md: export helper md_is_rdwr()
  md: add a new helper reshape_interrupted()
  dm-raid: really frozen sync_thread during suspend
  md/dm-raid: don't call md_reap_sync_thread() directly
  dm-raid: add a new helper prepare_suspend() in md_personality
  dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io
    concurrent with reshape
  dm-raid: fix lockdep warning in "pers->hot_add_disk"

 drivers/md/dm-raid.c | 93 ++++++++++++++++++++++++++++++++++----------
 drivers/md/md.c      | 73 ++++++++++++++++++++++++++--------
 drivers/md/md.h      | 38 +++++++++++++++++-
 drivers/md/raid5.c   | 32 ++++++++++++++-
 4 files changed, 196 insertions(+), 40 deletions(-)

-- 
2.39.2




Thread overview: 19+ messages
2024-03-01  9:56 [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Yu Kuai
2024-03-01  9:56 ` [PATCH -next 1/9] md: don't clear MD_RECOVERY_FROZEN for new dm-raid until resume Yu Kuai
2024-03-01  9:56 ` [PATCH -next 2/9] md: export helpers to stop sync_thread Yu Kuai
2024-03-01  9:56 ` [PATCH -next 3/9] md: export helper md_is_rdwr() Yu Kuai
2024-03-01  9:56 ` [PATCH -next 4/9] md: add a new helper reshape_interrupted() Yu Kuai
2024-03-01  9:56 ` [PATCH -next 5/9] dm-raid: really frozen sync_thread during suspend Yu Kuai
2024-03-01  9:56 ` [PATCH -next 6/9] md/dm-raid: don't call md_reap_sync_thread() directly Yu Kuai
2024-03-01  9:56 ` [PATCH -next 7/9] dm-raid: add a new helper prepare_suspend() in md_personality Yu Kuai
2024-03-01  9:56 ` [PATCH -next 8/9] dm-raid456, md/raid456: fix a deadlock for dm-raid456 while io concurrent with reshape Yu Kuai
2024-03-01  9:56 ` [PATCH -next 9/9] dm-raid: fix lockdep warning in "pers->hot_add_disk" Yu Kuai
2024-03-01 22:36 ` [PATCH -next 0/9] dm-raid, md/raid: fix v6.7 regressions part2 Song Liu
2024-03-02 15:56   ` Mike Snitzer
2024-03-03 13:16 ` Xiao Ni
2024-03-04  1:07   ` Yu Kuai
2024-03-04  1:23     ` Yu Kuai
2024-03-04  1:25       ` Xiao Ni
2024-03-04  8:27         ` Xiao Ni
2024-03-04 11:06           ` Xiao Ni
2024-03-04 11:52             ` Yu Kuai
