public inbox for linux-spi@vger.kernel.org
From: Santhosh Kumar K <s-k6@ti.com>
To: Miquel Raynal <miquel.raynal@bootlin.com>,
	Michael Walle <mwalle@kernel.org>
Cc: Pratyush Yadav <pratyush@kernel.org>, <richard@nod.at>,
	<vigneshr@ti.com>, <broonie@kernel.org>,
	<tudor.ambarus@linaro.org>, <p-mantena@ti.com>,
	<linux-spi@vger.kernel.org>, <linux-mtd@lists.infradead.org>,
	<linux-kernel@vger.kernel.org>, <a-dutta@ti.com>,
	<u-kumar1@ti.com>, <praneeth@ti.com>, <s-k6@ti.com>
Subject: Re: [RFC PATCH 01/10] spi: spi-mem: Introduce support for tuning controller
Date: Wed, 10 Dec 2025 17:04:09 +0530	[thread overview]
Message-ID: <ea6f3dd7-0732-4de9-8bf1-e88a45ad6ac2@ti.com> (raw)
In-Reply-To: <87jyz3ao8b.fsf@bootlin.com>

Hello Michael and Miquel,

On 03/12/25 15:20, Miquel Raynal wrote:
> 
>>>> I think we should start with the requirement to have the pattern flashed
>>>> already and figure out how SPI NOR or SPI NAND can discover that
>>>> (perhaps via NVMEM?).
>>
>> But we should also keep in mind that certain flashes might return
>> tuning data during the dummy cycles. I.e. the PHY might probably be
>> tuned on each read and there is no need for any pre-programmed
>> pattern.
>>
>> I'm not saying it should be implemented, but the current
>> implementation should be flexible enough that it will be easy to
>> add that later.
> 
> Conceptually, yes, but in practice, I know of no controller capable of
> using just a few cycles on every transfer to calibrate itself
> automatically while reaching an optimized speed state comparable to
> what the Cadence controller is capable of ATM.
> 
> Despite the end result being close, I would still consider this other
> way to optimize the I/Os somewhat orthogonal. If someone has some
> knowledge to share about the training patterns sent during the dummy
> cycles, I am all ears though.
> 
>>> For SPI NOR, we do not have an equivalent "write-to-cache" possible, so
>>> we still require a pre-flashed pattern region. At the moment this is
>>> provided via a dedicated "phypattern" partition, and its offset is
>>> obtained through the of_get_* APIs.
>>>
>>> Regarding ways to locate the partition:
>>>
>>> 1. Using NVMEM:
>>>      a. Exposing the phypattern partition as an NVMEM cell and issuing an
>>>         NVMEM read during tuning does not work reliably, because NVMEM
>>>         ends up calling into the MTD read path and we cannot control which
>>>         read_op variant is used for the read.
>>>
>>>      b. Advertising the partition as an NVMEM cell and using NVMEM only
>>>         to fetch the offset is not possible either. NVMEM abstracts the
>>>         private data, including partition offsets, so we cannot retrieve
>>>         the offset that way either.
>>
>> You can probably extend the NVMEM API in some way - or switch the
>> read_op on the fly.
>>
>>> 2. Using of_get_* APIs:
>>>         Using the standard OF helpers to locate the phypattern partition
>>>         and retrieve its offset is both reliable and straightforward, and
>>>         is the approach currently implemented in v2.
>>
>> I don't like that hardcoded partition name, which then basically
>> becomes an ABI.
>>
>> At least we'd need some kind of phandle to the partition inside the
>> controller node (and get the ACK from the DT maintainers).
> 
> Yes, agreed, this is controller specific. If we need to use an of_ API
> (which is still not needed for SPI NANDs, only for tuning the SPI NOR
> read path), it should not just be a hardcoded partition name but a
> phandle in the controller node.

Yes, using a phandle is a valid idea to avoid relying on a hard-coded
name. But it does not work well when multiple chip selects are
involved. The controller is not tied to a single flash device - a single
SPI controller may host both NOR and NAND flashes, for example. In such
a case, only the NOR would require this phandle, while the NAND would
not, which makes a controller-level phandle unsuitable. Another example
is a controller hosting two NOR flashes, where both would then need
their own phandle references.

An alternative would be to associate the phandle with the flash device
itself rather than with the controller. Let me know your thoughts on
this approach.
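
To make the idea concrete, a rough device-tree sketch of a per-flash
phandle could look like the following. This is purely illustrative:
the "spi-tuning-pattern" property name, the controller label and the
offsets are made up, nothing like this exists as a binding yet.

```dts
/* Hypothetical sketch only: "spi-tuning-pattern" is an illustrative
 * property name, not an existing binding; offsets are examples. */
&ospi0 {
	flash@0 {
		compatible = "jedec,spi-nor";
		reg = <0>;
		/* Phandle lives on the flash node, not the controller,
		 * so a NAND on another chip select simply omits it. */
		spi-tuning-pattern = <&phypattern>;

		partitions {
			compatible = "fixed-partitions";
			#address-cells = <1>;
			#size-cells = <1>;

			phypattern: partition@3fe0000 {
				label = "ospi.phypattern";
				reg = <0x3fe0000 0x20000>;
			};
		};
	};
};
```

This way, each flash that needs a pre-flashed pattern carries its own
reference, and flashes that tune without one (e.g. SPI NAND via
write-to-cache) need no property at all.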

Thanks,
Santhosh.

> 
> Thanks,
> Miquèl



Thread overview: 37+ messages
2025-08-11 19:32 [RFC PATCH 00/10] SPINAND PHY Tuning Series Santhosh Kumar K
2025-08-11 19:32 ` [RFC PATCH 01/10] spi: spi-mem: Introduce support for tuning controller Santhosh Kumar K
2025-08-13 20:26   ` Mark Brown
2025-08-14 11:34     ` Santhosh Kumar K
2025-08-14 12:34       ` Mark Brown
2025-08-22  6:05         ` Santhosh Kumar K
2025-09-10  8:07         ` Miquel Raynal
2025-08-24 17:02     ` Miquel Raynal
2025-09-10  8:21   ` Miquel Raynal
2025-09-20 17:55     ` Santhosh Kumar K
2025-10-28 15:41       ` Miquel Raynal
2025-11-05  8:55         ` Santhosh Kumar K
2025-11-05  9:35           ` Miquel Raynal
2025-11-18 13:42             ` Pratyush Yadav
2025-12-03  8:02               ` Santhosh Kumar K
2025-12-03  8:58                 ` Miquel Raynal
2025-12-10 11:33                   ` Santhosh Kumar K
2025-12-12  6:43                   ` Pratyush Yadav
2025-11-18 13:49       ` Pratyush Yadav
2025-12-03  8:02         ` Santhosh Kumar K
2025-12-03  9:28           ` Michael Walle
2025-12-03  9:50             ` Miquel Raynal
2025-12-03 14:12               ` Michael Walle
2025-12-10 11:36                 ` Santhosh Kumar K
2025-12-10 11:34               ` Santhosh Kumar K [this message]
2025-12-11 14:16                 ` Miquel Raynal
2025-12-04 16:54           ` Mahapatra, Amit Kumar
2025-12-10 11:34             ` Santhosh Kumar K
2025-08-11 19:32 ` [RFC PATCH 02/10] spi: spi-mem: Define spi_mem_tuning_params and spi_mem_get_tuning_params() Santhosh Kumar K
2025-08-11 19:32 ` [RFC PATCH 03/10] mtd: nand: spi: Introduce _execute_tuning for mtd devices Santhosh Kumar K
2025-08-11 19:32 ` [RFC PATCH 04/10] mtd: mtdcore: Call mtd_execute_tuning during mtd_register Santhosh Kumar K
2025-08-11 19:32 ` [RFC PATCH 05/10] spi: cadence-quadspi: Move cqspi_readdata_capture() above all operations Santhosh Kumar K
2025-08-11 19:32 ` [RFC PATCH 06/10] spi: cadence-quadspi: Use BIT() macro for CQSPI_REG_READCAPTURE_BYPASS Santhosh Kumar K
2025-08-11 19:32 ` [RFC PATCH 07/10] spi: cadence-quadspi: Enable PHY for aligned DAC reads Santhosh Kumar K
2025-08-11 19:32 ` [RFC PATCH 08/10] spi: cadence-quadspi: Enable PHY for data writes Santhosh Kumar K
2025-08-11 19:32 ` [RFC PATCH 09/10] spi: cadence-quadspi: Implement PHY for higher frequencies in SDR mode Santhosh Kumar K
2025-08-11 19:32 ` [RFC PATCH 10/10] spi: cadence-quadspi: Define cqspi_get_tuning_params() Santhosh Kumar K
