linux-spi.vger.kernel.org archive mirror
From: Miquel Raynal <miquel.raynal@bootlin.com>
To: Chris Packham <chris.packham@alliedtelesis.co.nz>
Cc: "linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"linux-spi@vger.kernel.org" <linux-spi@vger.kernel.org>,
	"broonie@kernel.org" <broonie@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: ubifs_recover_master_node: failed to recover master node
Date: Wed, 06 Nov 2024 17:12:26 +0100	[thread overview]
Message-ID: <87ses4ibo5.fsf@bootlin.com> (raw)
In-Reply-To: <94ffb58b-3242-4ab4-b09a-686116ced781@alliedtelesis.co.nz> (Chris Packham's message of "Wed, 30 Oct 2024 10:13:45 +1300")


Hi Chris,

On 30/10/2024 at 10:13:45 +13, Chris Packham <chris.packham@alliedtelesis.co.nz> wrote:

> On 29/10/24 13:38, Chris Packham wrote:
>> (resend as plaintext)
>>
>> Hi,
>>
>> I recently added support for the SPI-NAND controller on the RTL9302C
>> SoC[1]. I did most of the work against Linux 6.11 and it's working
>> fine there. I recently rebased against the tip of Linus's tree
>> (6.12-rc5) and found I was getting ubifs errors when mounting:
>>
>> [    1.255191] spi-nand spi1.0: Macronix SPI NAND was found.
>> [    1.261283] spi-nand spi1.0: 256 MiB, block size: 128 KiB, page
>> size: 2048, OOB size: 64
>> [    1.271134] 2 fixed-partitions partitions found on MTD device spi1.0
>> [    1.278247] Creating 2 MTD partitions on "spi1.0":
>> [    1.283631] 0x000000000000-0x00000f000000 : "user"
>> [   20.481108] 0x00000f000000-0x000010000000 : "Reserved"
>> [   72.240347] ubi0: scanning is finished
>> [   72.270577] ubi0: attached mtd3 (name "user", size 240 MiB)
>> [   72.276815] ubi0: PEB size: 131072 bytes (128 KiB), LEB size:
>> 126976 bytes
>> [   72.284537] ubi0: min./max. I/O unit sizes: 2048/2048, sub-page
>> size 2048
>> [   72.292132] ubi0: VID header offset: 2048 (aligned 2048), data
>> offset: 4096
>> [   72.299885] ubi0: good PEBs: 1920, bad PEBs: 0, corrupted PEBs: 0
>> [   72.306689] ubi0: user volume: 1, internal volumes: 1, max. volumes
>> count: 128
>> [   72.314747] ubi0: max/mean erase counter: 1/0, WL threshold: 4096,
>> image sequence number: 252642230
>> [   72.324850] ubi0: available PEBs: 0, total reserved PEBs: 1920,
>> PEBs reserved for bad PEB handling: 40
>> [   72.370123] ubi0: background thread "ubi_bgt0d" started, PID 141
>> [   72.470740] UBIFS (ubi0:0): Mounting in unauthenticated mode
>> [   72.490246] UBIFS (ubi0:0): background thread "ubifs_bgt0_0"
>> started, PID 144
>> [   72.528272] UBIFS error (ubi0:0 pid 143):
>> ubifs_recover_master_node: failed to recover master node
>> [   72.550122] UBIFS (ubi0:0): background thread "ubifs_bgt0_0" stops
>> [   72.710720] UBIFS (ubi0:0): Mounting in unauthenticated mode
>> [   72.717447] UBIFS (ubi0:0): background thread "ubifs_bgt0_0"
>> started, PID 149
>> [   72.777602] UBIFS error (ubi0:0 pid 148):
>> ubifs_recover_master_node: failed to recover master node
>> [   72.787792] UBIFS (ubi0:0): background thread "ubifs_bgt0_0" stops
>>
>> Full dmesg output is at[2]
>>
>> git bisect lead me to commit 11813857864f ("mtd: spi-nand: macronix:
>> Continuous read support"). Reverting the blamed commit from 6.12-rc5
>> seems to avoid the problem. The flash chip on my board is a
>> MX30LF2G28AD-TI. I'm not sure if there is a problem with 11813857864f
>> or with my spi-mem driver that is exposed after support for continuous
>> read is enabled.
>>
> A bit of an update. The ubifs failure comes from the is_empty() check
> in get_master_node(). It looks like portions of the LEB read back as 0
> instead of 0xff. I've also found that if I use the non-DMA path in my
> driver I don't see the problem. I think there is at least one bug in my
> driver, because I don't handle DMAing more than 0xffff bytes.

I am going through my mails in chronological order :-)

Glad to see you found a lead. I was already a bit suspicious about the
DMA path, so hopefully this narrows down the problem.

Is the 0xffff limitation a hard hardware constraint, or purely a
software one? If it is a hard constraint, you should probably check
against it when deciding which path to take.

Miquèl


Thread overview: 11+ messages
2024-10-29  0:38 ubifs_recover_master_node: failed to recover master node Chris Packham
2024-10-29 21:13 ` Chris Packham
2024-10-29 21:51   ` [PATCH] spi: spi-mem: rtl-snand: Correctly handle DMA transfers Chris Packham
2024-10-30  6:42     ` kernel test robot
2024-10-30 10:20     ` kernel test robot
2024-10-30 19:53     ` kernel test robot
2024-10-30 19:49   ` [PATCH v2] " Chris Packham
2024-11-04 14:06     ` Mark Brown
2024-11-06 16:12   ` Miquel Raynal [this message]
     [not found] <7eaf332e-9439-4d4c-a2ea-d963e41f44f2@alliedtelesis.co.nz>
2024-11-06 15:35 ` ubifs_recover_master_node: failed to recover master node Miquel Raynal
2024-11-06 19:38   ` Chris Packham
