Linux Btrfs filesystem development
From: Wang Yugui <wangyugui@e16-tech.com>
To: Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: Uncorrectable error during multiple scrub (raid5 recovery).
Date: Sun, 14 Aug 2022 12:58:45 +0800
Message-ID: <20220814125845.80E1.409509F4@e16-tech.com>
In-Reply-To: <a113ec67-18fe-276f-d065-307d2bb292b0@gmx.com>

Hi,

> On 2022/8/14 11:10, Wang Yugui wrote:
> > Hi,
> >
> > Uncorrectable error during multiple scrub (raid5 recovery).
> >
> > This reproducer is based on an earlier reproducer [1],
> > but it seems to be a new problem, so I am opening a new thread.
> >
> > reproducer:
> >
> > mkfs.btrfs -f -draid5 -mraid1 ${SCRATCH_DEV_POOL}
> > SCRATCH_DEV_ARRAY=($SCRATCH_DEV_POOL)
> > mount ${SCRATCH_DEV_ARRAY[0]} $SCRATCH_MNT # -o compress=zstd,noatime
> 
> Please remove unnecessary comments if they're not explaining anything.
> 
> It just makes the test case much harder to read.
> >
> > /bin/cp -a /usr/bin $SCRATCH_MNT/
> > #(OK)dd if=/dev/urandom bs=1M count=1K of=$SCRATCH_MNT/1G.img
> > du -sh $SCRATCH_MNT
> >
> > for((i=1;i<=15;++i)); do
> >
> > 	#(OK)umount $SCRATCH_MNT; mount ${SCRATCH_DEV_ARRAY[0]} $SCRATCH_MNT # -o compress=zstd,noatime
> > 	sync; sleep 5; sync; sleep 5; sync; sleep 25;
> 
> If you really want to provide a reproducer, either explain why this is
> needed, or it will just waste the time of everyone who tries this test.
> 
> >
> > 	# change the device to discard in every loop
> > 	j=$(( i % ${#SCRATCH_DEV_ARRAY[@]} ))
> > 	/usr/sbin/blkdiscard -f ${SCRATCH_DEV_ARRAY[$j]} # --offset 2M
> >
> > 	btrfs scrub start -Bd $SCRATCH_MNT | grep 'summary\|Uncorrectable'
> >
> > done
> >
> > This problem will not happen if we change the test data to a simpler one.
> > # i.e. from about 220M of data in '/usr/bin' to a single 1G file
> >
> > This problem will not happen if we clear the cache with 'umount; mount'
> > between loop iterations.
> > # i.e. replace 'sync; sleep 5; ...' with 'umount; mount'
> >
> > so it seems that some info in memory is wrong after RAID5 recovery?
> >
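
A lighter-weight way to probe the "wrong info in memory" idea than a full
'umount; mount' might be to drop the kernel caches between loop iterations.
This is only a sketch (not verified here), and note that drop_caches frees
clean page cache and reclaimable slab objects, not btrfs-internal per-mount
state:

	# possible replacement for the 'umount; mount' step inside the loop
	sync
	echo 3 > /proc/sys/vm/drop_caches   # drop clean page cache + dentry/inode caches

If the uncorrectable errors still show up with drop_caches but go away with a
real 'umount; mount', that points at stale fs-internal state rather than the
page cache.
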
> > [1]
> > Subject: misc-next and for-next: kernel BUG at fs/btrfs/extent_io.c:2350!
> > during raid5 recovery
> > https://lore.kernel.org/linux-btrfs/9dfb0b60-9178-7bbe-6ba1-10d056a7e84c@gmx.com/T/#t
> 
> That case is in fact not related to RAID56, and we already have the fix
> for it:
> https://lore.kernel.org/linux-btrfs/1d9b69af6ce0a79e54fbaafcc65ead8f71b54b60.1660377678.git.wqu@suse.com/
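
For anyone who wants to retest with that fix, one way to pull it from lore is
the b4 tool (a sketch; the generated mbox file name will differ):

	b4 am 1d9b69af6ce0a79e54fbaafcc65ead8f71b54b60.1660377678.git.wqu@suse.com
	git am ./*.mbx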

This problem still happens on linux 5.20 (20220812) with the following patches applied.

Subject: btrfs: scrub: properly report super block errors in system log
Subject: btrfs: scrub: try to fix super block errors
(v2) Subject: btrfs: don't merge pages into bio if their page offset is not
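
In case it helps to rule out a build/apply mix-up, a quick way to confirm that
the tested tree really contains those patches (a sketch; the tree path is a
placeholder):

	cd /path/to/linux   # placeholder: the tree the 20220812 kernel was built from
	git log --oneline --grep='properly report super block errors'
	git log --oneline --grep='try to fix super block errors'
	git log --oneline --grep="don't merge pages into bio"

Each command should print exactly one commit if the corresponding patch was
applied.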

Best Regards
Wang Yugui (wangyugui@e16-tech.com)
2022/08/14



Thread overview: 5+ messages
2022-08-14  3:10 Uncorrectable error during multiple scrub (raid5 recovery) Wang Yugui
2022-08-14  4:46 ` Qu Wenruo
2022-08-14  4:58   ` Wang Yugui [this message]
2022-08-14  5:08     ` Qu Wenruo
2022-08-14  5:31 ` Wang Yugui
