From: David Jander <david@protonic.nl>
To: Mark Brown <broonie@kernel.org>
Cc: Marc Kleine-Budde <mkl@pengutronix.de>,
	linux-spi@vger.kernel.org, Oleksij Rempel <ore@pengutronix.de>,
	Andrew Lunn <andrew@lunn.ch>,
	Martin Sperl <kernel@martin.sperl.org>
Subject: Re: [RFC] A new SPI API for fast, low-latency regmap peripheral access
Date: Wed, 25 May 2022 16:39:46 +0200	[thread overview]
Message-ID: <20220525163946.48ea40c9@erd992> (raw)
In-Reply-To: <Yo02CLxDoCPYYZD5@sirena.org.uk>

On Tue, 24 May 2022 20:46:16 +0100
Mark Brown <broonie@kernel.org> wrote:

> On Tue, May 24, 2022 at 01:30:48PM +0200, David Jander wrote:
> 
> > > But that turned out not to be working properly:
> > >
> > > | https://lore.kernel.org/all/f86eaebb-0359-13be-f4a2-4f2b8832252e@nvidia.com/  
> 
> > Ah, thanks for sharing. Added Martin to CC here.  
> 
> > I have been struggling with this too. There are definitely dragons somewhere.
> > I have tried to do tear-down in the same context if possible, similar to this:  
> 
> There's a potential issue there with ending up spending noticeable extra
> time turning the controller on and off; how costly that is might be
> variable.
> 
> > I have not yet discovered exactly why. On the occasions the code didn't hit
> > the race, it seemed to have a notable performance impact though, so removing
> > this context switch may be worth the effort.
> 
> Or come up with a mechanism for ensuring we only switch to do the
> cleanup when we're not otherwise busy.

I think the PM part might benefit from some optimizations too...
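For instance, since the controller hardware is runtime-PM managed through its
parent device, an autosuspend delay could already absorb most of the on/off
bouncing for a burst of sync transfers. Very rough sketch of what I mean (not
against any particular tree; the helper names are invented here just for
illustration):

#include <linux/pm_runtime.h>
#include <linux/spi/spi.h>

static int spi_sync_power_up(struct spi_controller *ctlr)
{
        /* Wake the physical controller if it is runtime-suspended. */
        return pm_runtime_resume_and_get(ctlr->dev.parent);
}

static void spi_sync_power_down(struct spi_controller *ctlr)
{
        /*
         * With an autosuspend delay configured at probe time
         * (pm_runtime_set_autosuspend_delay() + pm_runtime_use_autosuspend()),
         * a burst of spi_sync() calls pays for at most one resume/suspend
         * cycle instead of one per message.
         */
        pm_runtime_mark_last_busy(ctlr->dev.parent);
        pm_runtime_put_autosuspend(ctlr->dev.parent);
}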

> > From what I understand of this, bus_lock_mutex is used to serialize spi_sync
> > calls for this bus, so there cannot be any concurrency from different threads
> > doing spi_sync() calls to the same bus. This means that if spi_sync() was the
> > only path in existence, bus_lock_mutex would suffice, and all the other  
> 
> The bus lock is there because some devices have an unfortunate need to
> do multiple SPI transfers with no other devices able to generate any
> traffic on the bus in between.  Even more sadly, it seems that some of
> the users are using it to protect against multiple calls into
> themselves, so we can't just do the simple thing and turn the bus locks
> into no-ops if there's only one client on the bus.  However, it *is*
> quite rarely used, so I'm thinking that what we could do is default to
> not having it and then arrange to create it on first use, or just
> update the clients to do something during initialisation to cause it to
> be created.  That way only controllers with an affected client would
> pay the cost.
> 
> I don't *think* it's otherwise needed but would need to go through and
> verify that.
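
That sounds doable. Something like this is what I picture (sketch only;
spi_require_bus_lock() and the bus_lock_needed flag are invented names and
don't exist today, the matching unlock is omitted):

#include <linux/spi/spi.h>

/* Opt-in, called by the few client drivers that really need spi_bus_lock(). */
void spi_require_bus_lock(struct spi_device *spi)
{
        struct spi_controller *ctlr = spi->controller;

        /* Hypothetical flag; today bus_lock_mutex is taken unconditionally. */
        ctlr->bus_lock_needed = true;
}

static void spi_sync_take_locks(struct spi_controller *ctlr)
{
        /*
         * Controllers without an affected client would skip the extra
         * mutex/spinlock pair entirely on the spi_sync() fast path.
         */
        if (ctlr->bus_lock_needed)
                mutex_lock(&ctlr->bus_lock_mutex);
}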
> 
> > spinlocks and mutexes could go. Big win. But the async path is what
> > complicates everything. So I have been thinking... what if we could make the
> > sync and the async cases two separate paths, with two separate message queues?
> > In fact the sync path doesn't even need a queue really, since it will just
> > handle one message beginning to end, and not return until the message is done.
> > It doesn't need the worker thread either AFAICS. Or am I missing something?
> > In the end, both paths would converge at the io_mutex. I am tempted to try
> > this route. Any thoughts?  
> 
> The sync path, like you say, doesn't exactly need a queue itself.  It's
> partly looking at the queue so it can fall back to pushing work through
> the thread in the case that the controller is busy (hopefully opening up
> opportunities for us to minimise the latency between completing whatever
> is going on already and starting the next message), and partly about
> pushing the work of idling the hardware into the thread so that it's
> deferred a bit and we're less likely to end up spending time bouncing
> the controller on and off if there's a sequence of synchronous
> operations going on.  That second bit doesn't need us to actually look
> at the queue though; we just need to kick the thread so it gets run at
> some point and sees that the queue is empty.
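
Right, that matches what I have been trying. In my head the fast path looks
roughly like this (just a sketch; the helper names are placeholders, not
functions that exist in spi.c today, and error handling is omitted):

#include <linux/kthread.h>
#include <linux/spi/spi.h>

static int __spi_sync_fastpath(struct spi_controller *ctlr,
                               struct spi_message *msg)
{
        int ret;

        /* Fall back to the existing queued path if the controller is busy. */
        if (spi_controller_busy(ctlr))                  /* placeholder helper */
                return __spi_sync_queued(ctlr, msg);    /* today's slow path */

        mutex_lock(&ctlr->io_mutex);
        ret = __spi_transfer_message(ctlr, msg);        /* placeholder helper */
        mutex_unlock(&ctlr->io_mutex);

        /*
         * Kick the worker so it can idle the hardware once it sees the
         * empty queue; no need to look at the queue from here.
         */
        kthread_queue_work(ctlr->kworker, &ctlr->pump_messages);

        return ret;
}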
> 
> Again I need to think this through properly but we can probably arrange
> things so that 
> 
> >         --> __spi_sync()
> >             --> bus_lock_spinlock
> >                 --> queue_lock
> >                     --> list_add_tail()
> >             --> __spi_pump_messages() (also entered here from WQ)
> >                 --> queue_lock
> >                     --> list_first_entry()  
> 
> the work we do under the first queue_lock here gets shuffled into
> __spi_pump_messages() (possibly replace in_kthread with passing in a
> message?  Would need comments.)  That'd mean we'd at least only be
> taking the queue lock once.
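
Something like this, I suppose (just a sketch to make sure we are talking
about the same thing; the body is heavily abbreviated, and the signature
change from the bool in_kthread to a message pointer is exactly the point
under discussion, not existing code):

#include <linux/spi/spi.h>

static void __spi_pump_messages(struct spi_controller *ctlr,
                                struct spi_message *msg)
{
        unsigned long flags;

        spin_lock_irqsave(&ctlr->queue_lock, flags);
        if (!msg && !list_empty(&ctlr->queue))
                /* Called from the worker thread: pick work off the queue. */
                msg = list_first_entry(&ctlr->queue, struct spi_message, queue);
        /*
         * Called from __spi_sync(): msg was handed in and never queued, so
         * the queue lock is taken only this one time.
         */
        spin_unlock_irqrestore(&ctlr->queue_lock, flags);

        if (!msg)
                return;

        /* ... the existing prepare/transfer/finalize logic continues here ... */
}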
> 
> The other potential issue with eliminating the queue entirely would be
> if there are clients which are mixing async and sync operations but
> expecting them to complete in order (e.g. start a bunch of async stuff
> then do a sync operation at the end rather than have a custom
> wait/completion).

I thought it would be easier to look at some code at this point, so I just
sent out an RFC patch series for the discussion. As for the ordering problem
you mention, I think I have solved that. See here:

https://marc.info/?l=linux-spi&m=165348892007041&w=2
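
The gist of how ordering is kept (simplified pseudo-code, not a literal
excerpt from the patches): the sync path only bypasses the queue when there
is no queued work at all, otherwise it goes through the normal queued path as
before, so a sync message can never overtake async messages submitted
earlier. Roughly:

static int __spi_sync(struct spi_device *spi, struct spi_message *msg)
{
        struct spi_controller *ctlr = spi->controller;
        unsigned long flags;
        bool use_fastpath;

        spin_lock_irqsave(&ctlr->queue_lock, flags);
        use_fastpath = list_empty(&ctlr->queue) && !ctlr->busy;
        spin_unlock_irqrestore(&ctlr->queue_lock, flags);

        if (use_fastpath)
                return __spi_sync_fastpath(ctlr, msg);  /* sketch from above */

        return __spi_sync_queued(ctlr, msg);            /* legacy queued path */
}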

I can understand if you ultimately reject this series though, since I could
not avoid making a small change to the client driver API. I just can't figure
out how to do it without that change. The problem is that
spi_finalize_current_message() relies on ctlr->cur_msg, and we cannot touch
that if we skip the queue. Sorry for that.
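
To illustrate the shape of the problem (this is not a literal excerpt from the
series, and spi_finalize_message() is an invented name): if the sync fast path
never goes through the queue, ctlr->cur_msg is never set, so the finalize
helper has to be told which message it is completing instead of reading it
from the controller:

#include <linux/spi/spi.h>

void spi_finalize_message(struct spi_controller *ctlr,
                          struct spi_message *msg)
{
        /*
         * ... the cleanup work spi_finalize_current_message() does today,
         * minus the ctlr->cur_msg lookup ...
         */
        msg->state = NULL;
        if (msg->complete)
                msg->complete(msg->context);
}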

Best regards,

-- 
David Jander

Thread overview: 25+ messages
2022-05-12 14:34 [RFC] A new SPI API for fast, low-latency regmap peripheral access David Jander
2022-05-12 20:37 ` Mark Brown
2022-05-12 22:12   ` Mark Brown
2022-05-13 12:46   ` David Jander
2022-05-13 19:36     ` Mark Brown
2022-05-16 16:28       ` Marc Kleine-Budde
2022-05-16 17:46         ` Mark Brown
2022-05-17 10:24           ` David Jander
2022-05-17 11:57             ` Mark Brown
2022-05-17 13:09               ` David Jander
2022-05-17 13:43                 ` Mark Brown
2022-05-17 15:16                   ` David Jander
2022-05-17 18:17                     ` Mark Brown
2022-05-19  8:12                       ` David Jander
2022-05-19  8:24                         ` Marc Kleine-Budde
2022-05-19 12:14                         ` Andrew Lunn
2022-05-19 14:33                           ` David Jander
2022-05-19 15:21                             ` Andrew Lunn
2022-05-20 15:22                         ` Mark Brown
2022-05-23 14:48                           ` David Jander
2022-05-23 14:59                             ` Marc Kleine-Budde
2022-05-24 11:30                               ` David Jander
2022-05-24 19:46                                 ` Mark Brown
2022-05-25 14:39                                   ` David Jander [this message]
2022-05-13 12:10 ` Andrew Lunn
