From: Michal Simek <michal.simek@amd.com>
To: Michael Walle <michael@walle.cc>,
"Mahapatra, Amit Kumar" <amit.kumar-mahapatra@amd.com>,
Tudor Ambarus <tudor.ambarus@linaro.org>,
"broonie@kernel.org" <broonie@kernel.org>,
"pratyush@kernel.org" <pratyush@kernel.org>,
"miquel.raynal@bootlin.com" <miquel.raynal@bootlin.com>,
"richard@nod.at" <richard@nod.at>,
"vigneshr@ti.com" <vigneshr@ti.com>,
"sbinding@opensource.cirrus.com" <sbinding@opensource.cirrus.com>,
"lee@kernel.org" <lee@kernel.org>,
"james.schulman@cirrus.com" <james.schulman@cirrus.com>,
"david.rhodes@cirrus.com" <david.rhodes@cirrus.com>,
"rf@opensource.cirrus.com" <rf@opensource.cirrus.com>,
"perex@perex.cz" <perex@perex.cz>,
"tiwai@suse.com" <tiwai@suse.com>,
Neal Frager <neal.frager@amd.com>
Cc: "linux-spi@vger.kernel.org" <linux-spi@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
"nicolas.ferre@microchip.com" <nicolas.ferre@microchip.com>,
"alexandre.belloni@bootlin.com" <alexandre.belloni@bootlin.com>,
"claudiu.beznea@tuxon.dev" <claudiu.beznea@tuxon.dev>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
"alsa-devel@alsa-project.org" <alsa-devel@alsa-project.org>,
"patches@opensource.cirrus.com" <patches@opensource.cirrus.com>,
"linux-sound@vger.kernel.org" <linux-sound@vger.kernel.org>,
"git (AMD-Xilinx)" <git@amd.com>,
"amitrkcian2002@gmail.com" <amitrkcian2002@gmail.com>,
Conor Dooley <conor.dooley@microchip.com>,
"beanhuo@micron.com" <beanhuo@micron.com>
Subject: Re: [PATCH v11 07/10] mtd: spi-nor: Add stacked memories support in spi-nor
Date: Mon, 5 Aug 2024 13:00:40 +0200 [thread overview]
Message-ID: <1b726ef7-e25c-4a19-aaa8-77fdbdd8bcca@amd.com> (raw)
In-Reply-To: <D37U3QPX0J5J.21CTXMW2KC72G@walle.cc>
Hi,
On 8/5/24 10:27, Michael Walle wrote:
> Hi,
>
>>>>> All I'm saying is that you shouldn't put burden on us (the SPI NOR
>>>>> maintainers) for what seems to me at least as a niche. Thus I was
>>>>> asking for performance numbers and users. Convince me that I'm
>>>>> wrong and that is worth our time.
>>>>
>>>> No. It is not really just a feature of our evaluation boards. Customers are
>>>> using it. I was talking to someone from the field and he confirmed that
>>>> these configurations are used by multiple of his customers in real products.
>>>
>>> Which begs the question, do we really have to support every feature
>>> in the core (I'd like to hear Tudors and Pratyush opinion here).
>>> Honestly, this just looks like a concatenation of two QSPI
>>> controllers.
>>
>> Based on my understanding: for stacked, yes; for parallel, no.
>
> See below.
>
>>> Why didn't you just use a normal octal controller which
>>> is a protocol also backed by the JEDEC standard.
>>
>> On newer SoCs an octal IP core is used.
>> Amit, please comment.
>>
>>> Is it any faster?
>>
>> Amit: please provide numbers.
>>
>>> Do you get more capacity? Does anyone really use large SPI-NOR
>>> flashes? If so, why?
>>
>> You get twice the capacity with that configuration. I can't answer the second
>> question because I am not working in the field, but both of these
>> configurations are used by customers. Adding Neal in case he wants to add
>> something more.
>>
>>> I mean you've put that controller on your SoC,
>>> you must have some convincing arguments why a customer should use
>>> it.
>>
>> I expect the recommendation is to use a single-flash configuration, but if
>> you need more space for your application, the only way to extend it is the
>> stacked configuration with two identical flashes next to each other.
>> If you want both a bigger size and higher speed, the answer is the parallel
>> configuration.
>
> But who is using expensive NOR flash for bulk storage anyway?
I expect you understand that even if I know of companies which do it, I am not
allowed to share their names.
Also, customers may not have any free pins left to connect, for example, an
eMMC; in that case adding one more "expensive flash" can be their only option.
And I bet the price of one more QSPI flash is nothing compared to the chip
itself and the other expenses of a low-volume production run.
> You're
> only mentioning parallel mode. Also the performance numbers were
> just about the parallel mode. What about stacked mode? Because
> there's a chance that parallel mode works without modification of
> the core (?).
I will let Amit comment on it.
>
>>>>> The first round of patches were really invasive regarding the core
>>>>> code. So if there is a clean layering approach which can be enabled
>>>>> as a module and you are maintaining it I'm fine with that (even if
>>>>> the core code needs some changes then like hooks or so, not sure).
>>>>
>>>> That discussion started with Miquel some years ago, when he was trying to
>>>> solve the DT description, which has been merged in the kernel for a while.
>>>
>>> What's your point here? From what I can tell the DT binding is wrong
>>> and needs to be reworked anyway.
>>
>> I am just saying that this is not an ad-hoc new feature but a configuration
>> which has already been discussed, with some steps taken. If the DT binding is
>> wrong, it can be deprecated and replaced with a new one, but for that it has
>> to be clear which way to go.
>
> Well, AMD could have side stepped all this if they had just
> integrated a normal OSPI flash controller, which would have the same
> requirements regarding the pins (if not even less) and it would have
> been *easy* to integrate it into the already available ecosystem.
> That was what my initial question was about. Why did you choose two
> QSPI ports instead of one OSPI port.
Keep in mind that ZynqMP is a 9-year-old SoC, and Zynq is 12+ years old, with a
lot of internal development happening before that. I am not sure OSPI even
existed at that time, or whether any IP was available at the price they were
targeting.
I don't think it makes sense to discuss OSPI in this context because it is not
present in these SoCs.
I have never worked on SPI, so I don't know the historical context well enough
to provide more details.
Thanks,
Michal