Linux Btrfs filesystem development
From: David Sterba <dsterba@suse.cz>
To: Qu Wenruo <wqu@suse.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH RFC] btrfs: raid56: extra debug for raid6 syndrome generation
Date: Wed, 14 Feb 2024 08:38:55 +0100	[thread overview]
Message-ID: <20240214073855.GO355@twin.jikos.cz> (raw)
In-Reply-To: <ceaa8a9d4a19dbe017012d5cdbd78aafeac31cc9.1706239278.git.wqu@suse.com>

On Fri, Jan 26, 2024 at 01:51:32PM +1030, Qu Wenruo wrote:
> [BUG]
> I have got at least two crash reports for RAID6 syndrome generation; no
> matter whether it's AVX2 or SSE2, they all seem to have a similar
> calltrace with a corrupted RAX:
> 
>  BUG: kernel NULL pointer dereference, address: 0000000000000000
>  #PF: supervisor read access in kernel mode
>  #PF: error_code(0x0000) - not-present page
>  PGD 0 P4D 0
>  Oops: 0000 [#1] PREEMPT SMP PTI
>  Workqueue: btrfs-rmw rmw_rbio_work [btrfs]
>  RIP: 0010:raid6_sse21_gen_syndrome+0x9e/0x130 [raid6_pq]
>  RAX: 0000000000000000 RBX: 0000000000001000 RCX: ffffa0ff4cfa3248
>  RDX: 0000000000000000 RSI: ffffa0f74cfa3238 RDI: 0000000000000000
>  Call Trace:
>   <TASK>
>   rmw_rbio+0x5c8/0xa80 [btrfs]
>   process_one_work+0x1c7/0x3d0
>   worker_thread+0x4d/0x380
>   kthread+0xf3/0x120
>   ret_from_fork+0x2c/0x50
>   </TASK>
> 
> [CAUSE]
> In fact I don't have any clue.
> 
> Recently I also hit this in AVX512 path, and that's even in v5.15
> backport, which doesn't have any of my RAID56 rework.
> 
> Furthermore according to the registers:
> 
>  RAX: 0000000000000000 RBX: 0000000000001000 RCX: ffffa0ff4cfa3248
> 
> The RAX register holds the number of stripes (including P/Q), and the
> value here (0) is not correct.
> The remaining two registers are both sane:
> 
> - RBX is the sectorsize
>   For x86_64 it should always be 4K and matches the output.
> 
> - RCX is the pointers array
>   Which is from rbio->finish_pointers, and it looks like a sane
>   kernel address.
> 
> [WORKAROUND]
> For now, I can only add extra debug ASSERT()s before we call the raid6
> gen_syndrome() helper and hope to catch the problem.
> 
> The debug requires both CONFIG_BTRFS_DEBUG and CONFIG_BTRFS_ASSERT
> enabled.
> 
> My current guess is some use-after-free, but the fact that every report
> shows a corrupted RAX alongside seemingly valid pointers doesn't make
> much sense.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>

Reviewed-by: David Sterba <dsterba@suse.com>

I haven't seen the crash for some time but with this patch I may add
some info once it happens again.

> ---
>  fs/btrfs/raid56.c | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
> 
> diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
> index 5c4bf3f907c1..6f4a9cfeea44 100644
> --- a/fs/btrfs/raid56.c
> +++ b/fs/btrfs/raid56.c
> @@ -917,6 +917,13 @@ static struct btrfs_raid_bio *alloc_rbio(struct btrfs_fs_info *fs_info,
>  	 */
>  	ASSERT(stripe_nsectors <= BITS_PER_LONG);
>  
> +	/*
> +	 * Real stripes must be between 2 (2 disks RAID5, aka RAID1) and 255
> +	 * (limited by u8).
> +	 */
> +	ASSERT(real_stripes >= 2);
> +	ASSERT(real_stripes <= U8_MAX);
> +
>  	rbio = kzalloc(sizeof(*rbio), GFP_NOFS);
>  	if (!rbio)
>  		return ERR_PTR(-ENOMEM);
> @@ -954,6 +961,7 @@ static struct btrfs_raid_bio *alloc_rbio(struct btrfs_fs_info *fs_info,
>  
>  	ASSERT(btrfs_nr_parity_stripes(bioc->map_type));
>  	rbio->nr_data = real_stripes - btrfs_nr_parity_stripes(bioc->map_type);
> +	ASSERT(rbio->nr_data > 0);
>  
>  	return rbio;
>  }
> @@ -1180,6 +1188,26 @@ static inline void bio_list_put(struct bio_list *bio_list)
>  		bio_put(bio);
>  }
>  
> +static void assert_rbio(struct btrfs_raid_bio *rbio)

                           const struct btrfs_raid_bio

> +{
> +	if (!IS_ENABLED(CONFIG_BTRFS_DEBUG) ||
> +	    !IS_ENABLED(CONFIG_BTRFS_ASSERT))
> +		return;
> +
> +	/*
> +	 * At least two stripes (2 disks RAID5), and since real_stripes is a
> +	 * u8, we won't go beyond 255 disks anyway.
> +	 */
> +	ASSERT(rbio->real_stripes >= 2);
> +	ASSERT(rbio->nr_data > 0);
> +
> +	/*
> +	 * This is another check to make sure nr data stripes is smaller
> +	 * than total stripes.
> +	 */
> +	ASSERT(rbio->nr_data < rbio->real_stripes);
> +}


Thread overview: 3+ messages
2024-01-26  3:21 [PATCH RFC] btrfs: raid56: extra debug for raid6 syndrome generation Qu Wenruo
2024-02-14  7:38 ` David Sterba [this message]
2024-02-21 15:04   ` David Sterba
