public inbox for linux-kernel@vger.kernel.org
From: Miquel Raynal <miquel.raynal@bootlin.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>,
	linux-kernel@vger.kernel.org,
	Thomas Petazzoni <thomas.petazzoni@bootlin.com>,
	Robert Marko <robert.marko@sartura.hr>,
	Luka Perkov <luka.perkov@sartura.hr>,
	Michael Walle <michael@walle.cc>,
	Randy Dunlap <rdunlap@infradead.org>
Subject: Re: [PATCH v6 3/3] nvmem: core: Expose cells through sysfs
Date: Tue, 1 Aug 2023 18:54:49 +0200	[thread overview]
Message-ID: <20230801185449.5088c8d4@xps-13>
In-Reply-To: <2023080125-renovate-uptake-86f0@gregkh>

Hi Greg,

gregkh@linuxfoundation.org wrote on Tue, 1 Aug 2023 11:56:40 +0200:

> On Mon, Jul 31, 2023 at 05:33:13PM +0200, Miquel Raynal wrote:
> > Hi Greg,
> > 
> > gregkh@linuxfoundation.org wrote on Mon, 17 Jul 2023 18:59:52 +0200:
> >   
> > > On Mon, Jul 17, 2023 at 06:33:23PM +0200, Miquel Raynal wrote:  
> > > > Hi Greg,
> > > > 
> > > > gregkh@linuxfoundation.org wrote on Mon, 17 Jul 2023 16:32:09 +0200:
> > > >     
> > > > > On Mon, Jul 17, 2023 at 09:51:47AM +0200, Miquel Raynal wrote:    
> > > > > > The binary content of nvmem devices is available to the user so in the
> > > > > > easiest cases, finding the content of a cell is rather easy as it is
> > > > > > just a matter of looking at a known and fixed offset. However, nvmem
> > > > > > layouts have been recently introduced to cope with more advanced
> > > > > > situations, where the offset and size of the cells are not known in
> > > > > > advance or is dynamic. When using layouts, more advanced parsers are
> > > > > > used by the kernel in order to give direct access to the content of each
> > > > > > cell, regardless of its position/size in the underlying
> > > > > > device. Unfortunately, this information is not accessible to users
> > > > > > unless they fully re-implement the parser logic in userland.
> > > > > > 
> > > > > > Let's expose the cells and their content through sysfs to avoid these
> > > > > > situations. Of course the relevant NVMEM sysfs Kconfig option must be
> > > > > > enabled for this support to be available.
> > > > > > 
> > > > > > Not all nvmem devices expose cells. Indeed, the .bin_attrs attribute
> > > > > > group member will be filled at runtime only when relevant and will
> > > > > > remain empty otherwise. In this case, as the cells attribute group will
> > > > > > be empty, it will not lead to any additional folder/file creation.
> > > > > > 
> > > > > > Exposed cells are read-only. There is, in practice, everything in the
> > > > > > core to support a write path, but as I don't see any need for that, I
> > > > > > prefer to keep the interface simple (and probably safer). The interface
> > > > > > is documented as being in the "testing" state which means we can later
> > > > > > add a write attribute if deemed relevant.
> > > > > > 
> > > > > > There is one limitation though: if a layout is built as a module but is
> > > > > > not properly installed in the system and loaded manually with insmod
> > > > > > while the nvmem device driver was built-in, the cells won't appear in
> > > > > > sysfs. But if done like that, the cells won't be usable by the built-in
> > > > > > kernel drivers anyway.      
> > > > > 
> > > > > Wait, what?  That should not be an issue here, if so, then this change
> > > > > is not correct and should be fixed as this is NOT an issue for sysfs
> > > > > (otherwise the whole tree wouldn't work.)
> > > > > 
> > > > > Please fix up your dependencies if this is somehow not working properly.
> > > > 
> > > > I'm not sure I fully get your point.
> > > > 
> > > > There is no way we can describe any dependency between a storage device
> > > > driver and an nvmem layout. NVMEM is a pure software abstraction, the
> > > > layout that will be chosen depends on the device tree, but if the
> > > > layout has not been installed, there is no existing mechanism in
> > > > the kernel to prevent it from being loaded (how do you know it's
> > > > not on purpose?).    
> > > 
> > > Once a layout has been loaded, the sysfs files should show up, right?
> > > Otherwise what does a "layout" do?  (hint, I have no idea, it's an odd
> > > term to me...)  
> > 
> > Sorry for the delay in responding to these questions, I'll try to
> > clarify the situation.
> > 
> > We have:
> > - device drivers (like NAND flashes, SPI-NOR flashes or EEPROMs) which
> >   typically probe and register their devices into the nvmem
> >   layer to expose their content through NVMEM.
> > - each registration in NVMEM leads to the creation of the relevant
> >   NVMEM cells which can then be used by other device drivers
> >   (typically: a network controller retrieving a MAC address from an
> >   EEPROM through the generic NVMEM abstraction).  
> 
> 
> So is a "cell" here a device in the device model?  Or something else?

It is not a device in the device model, but I am wondering if it should
not be one, actually. I discussed another issue in the current design
with Rafal (dependence on a layout driver, which might defer a storage
device probe forever); it might be solved if the core handled these
layouts differently.

> > We recently covered a slightly new case: the NVMEM cells can be in
> > random places in the storage devices so we need a "dynamic" way to
> > discover them: this is the purpose of the NVMEM layouts. We know cell X
> > is in the device, we just don't know where it is exactly at compile
> > time, the layout driver will discover it dynamically for us at runtime.  
> 
> So you then create the needed device when it is found?

We don't create devices, but we match the layouts with the NVMEM
devices thanks to the of_ logic.
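For context, that of_ matching keys off a compatible string in an
nvmem-layout child node of the storage device. A hedged device-tree sketch
(the compatibles below are only examples taken from existing bindings as I
understand them):

```
eeprom@50 {
	compatible = "atmel,24c02";
	reg = <0x50>;

	nvmem-layout {
		/* The NVMEM core matches a layout driver against this
		 * compatible when the device registers; if no such
		 * driver is around at that point, no cells appear. */
		compatible = "onie,tlv-layout";
	};
};
```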

> > While the "static cells" parser is built into the NVMEM subsystem, you
> > explicitly asked to have the layouts modularized. This means
> > registering a storage device in nvmem while no layout driver has been
> > inserted yet is now a possible scenario. We cannot describe any dependency
> > between a storage device and a layout driver. We cannot defer the probe
> > either because device drivers which don't get access to their NVMEM
> > cell are responsible for choosing what to do (most of the time, the idea
> > is to fall back to a default value to avoid failing the probe for no
> > reason).
> > 
> > So to answer your original question:
> >   
> > > Once a layout has been loaded, the sysfs files should show up, right?  
> > 
> > No. The layouts are kind of "libraries" that the NVMEM subsystem uses
> > to try exposing cells *when* a new device is registered in NVMEM (not
> > later). The registration of an NVMEM layout does not trigger any new
> > parsing, because that is not how the NVMEM subsystem was designed.  
> 
> So they are a type of "class" right?  Why not just use class devices
> then?
> 
> > I must emphasize that if the layout driver is installed in
> > /lib/modules/ there is no problem, it will be loaded with
> > usermodehelper. But if it is not, we can very well have the layout
> > driver inserted after, and this case, while in practice possible, is
> > irrelevant from a driver standpoint. It does not make any sense to have
> > these cells created "after" because they are mostly used during probes.
> > An easy workaround would be to unregister and re-register the
> > underlying storage device driver.
> 
> We really do not support any situation where a module is NOT in the
> proper place when device discovery happens.

Great, I didn't know. Then there is no issue.

>  So this shouldn't be an
> issue, yet you all mention it?  So how is it happening?

That was just for transparency; I'm giving all the details I can.

I'll try to come up with something slightly different from what we have
with the current approach.

Thanks,
Miquèl

Thread overview: 18+ messages
2023-07-17  7:51 [PATCH v6 0/3] NVMEM cells in sysfs Miquel Raynal
2023-07-17  7:51 ` [PATCH v6 1/3] ABI: sysfs-nvmem-cells: Expose cells through sysfs Miquel Raynal
2023-07-23 19:39   ` John Thomson
2023-07-31 15:51     ` Miquel Raynal
2023-08-01  9:06       ` Srinivas Kandagatla
2023-08-01 16:50         ` Miquel Raynal
2023-07-17  7:51 ` [PATCH v6 2/3] nvmem: core: Create all cells before adding the nvmem device Miquel Raynal
2023-07-17  7:51 ` [PATCH v6 3/3] nvmem: core: Expose cells through sysfs Miquel Raynal
2023-07-17 12:24   ` Michael Walle
2023-07-17 16:41     ` Miquel Raynal
2023-07-17 14:32   ` Greg Kroah-Hartman
2023-07-17 16:33     ` Miquel Raynal
2023-07-17 16:59       ` Greg Kroah-Hartman
2023-07-31 15:33         ` Miquel Raynal
2023-08-01  9:56           ` Greg Kroah-Hartman
2023-08-01 16:54             ` Miquel Raynal [this message]
2023-07-18 10:26   ` Chen-Yu Tsai
2023-07-31 16:05     ` Miquel Raynal
