From: David Jander <david@protonic.nl>
To: Mark Brown <broonie@kernel.org>
Cc: linux-spi@vger.kernel.org, Andrew Lunn <andrew@lunn.ch>,
Marc Kleine-Budde <mkl@pengutronix.de>,
Oleksij Rempel <o.rempel@pengutronix.de>
Subject: Re: [RFC] [PATCH 3/3] drivers: spi: spi.c: Don't use the message queue if possible in spi_sync
Date: Thu, 9 Jun 2022 17:34:21 +0200
Message-ID: <20220609173421.437fe1c4@erd992>
In-Reply-To: <YqCIDNHjFP4p9xxs@sirena.org.uk>
On Wed, 8 Jun 2022 12:29:16 +0100
Mark Brown <broonie@kernel.org> wrote:
> On Wed, Jun 08, 2022 at 09:54:09AM +0200, David Jander wrote:
> > Mark Brown <broonie@kernel.org> wrote:
>
> > > Moving idling (and all the was_busy stuff) within the io_mutex would
> > > definitely resolve the issue, the async submission context is the only one
> > > that really needs the spinlock and it doesn't care about idling. I can't
> > > think what you could do with the io_mutex when idling so it seems to
> > > fit.
>
> > Ok, so we could agree on a way to fix this particular issue: put the idling
> > transition into the io_mutex. Thanks.
>
> > Looking forward to reading comments on the rest of the code, and on the
> > general idea of what I am trying to accomplish.
>
> I think the rest of it is fine or at least I'm finding it difficult to
> see anything beyond the concurrency issues. I think we need to do an
> audit to find any users that are doing a spi_sync() to complete a
> sequence of spi_async() operations but I'm not aware of any and if it
> delivers the performance benefits it's probably worth changing that
> aspect of the driver API.
I just discovered a different issue (hit upon by Oleksij Rempel while
assisting with testing):
Apparently some drivers tend to rely on the fact that master->cur_msg is not
NULL and always points to the message being transferred.
This could be a show-stopper to this patch set, if it cannot be solved.
I am currently analyzing the different cases to see if and how they could
eventually be fixed. The crux of the issue is that there are two different
APIs towards the driver:
1. transfer_one(): This call does not provide a reference to the message that
contains the transfers, so any information stored only in the underlying
spi_message is not accessible to the driver. Apparently some drivers work
around this by accessing master->cur_msg.
2. transfer_one_message(): I suspect this is the newer API. It takes the
spi_message as an argument, giving the driver access to all the information
it needs (such as the return status and the complete list of transfers).
Both signatures are sketched below.
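From the driver's point of view the two hooks look roughly like this (a
paraphrased sketch; the "foo" names are made up, the callback signatures are
the ones declared in include/linux/spi/spi.h, and a driver normally
implements one or the other):

/* Transfer-based API: called once per spi_transfer; the owning
 * spi_message is not passed in, which is why some drivers end up
 * poking at master->cur_msg. */
static int foo_spi_transfer_one(struct spi_controller *ctlr,
				struct spi_device *spi,
				struct spi_transfer *xfer);

/* Message-based API: the driver gets the whole spi_message, including
 * its transfer list and the status field to fill in on completion. */
static int foo_spi_transfer_one_message(struct spi_controller *ctlr,
					struct spi_message *msg);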
One driver in particular, spi-atmel.c, still used the deprecated
master->cur_msg->is_dma_mapped flag. This is trivially fixed by replacing it
with master->cur_msg_mapped, which is still available even in the sync path.
It is still kind of ugly, though, since it is a rather indirect way for the
driver to obtain information about the transfer buffers it is handling.
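The shape of that change, for illustration only (this is not the literal
spi-atmel.c hunk, and the helper name is made up):

static bool foo_xfer_is_dma_mapped(struct spi_controller *ctlr)
{
	/* before: return ctlr->cur_msg->is_dma_mapped;
	 * (deprecated flag, and it dereferences cur_msg, which the sync
	 * path may leave NULL) */
	return ctlr->cur_msg_mapped;	/* controller-level flag, set in
					 * both the queued and sync paths */
}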
Some drivers look at master->cur_msg from their error-interrupt handlers, to
decide whether there is an ongoing transfer and sometimes also to store the
error code in master->cur_msg->status. This could be solved by storing a
reference to the current message in the driver's private struct (sketched
below), but as in the other cases, there is no way to obtain that spi_message
pointer from the transfer_one() call.
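A minimal sketch of that workaround (all "foo" names are hypothetical); the
point being that it only works when the driver implements
transfer_one_message(), because transfer_one() never sees the spi_message:

#include <linux/interrupt.h>
#include <linux/spi/spi.h>

struct foo_spi {
	struct spi_controller	*ctlr;
	struct spi_message	*cur_msg;	/* driver's own reference */
};

static int foo_transfer_one_message(struct spi_controller *ctlr,
				    struct spi_message *msg)
{
	struct foo_spi *fs = spi_controller_get_devdata(ctlr);

	fs->cur_msg = msg;	/* remember it for the error IRQ handler */
	/* ... set up and start the transfers in msg->transfers ... */
	return 0;
}

static irqreturn_t foo_err_irq(int irq, void *dev_id)
{
	struct foo_spi *fs = dev_id;

	if (fs->cur_msg)	/* is there an ongoing transfer? */
		fs->cur_msg->status = -EIO;
	return IRQ_HANDLED;
}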
In other words, there seem to be both a spi_transfer-based API and a
spi_message-based API, and problems occur when the spi_transfer-based calls
need to know things about the underlying spi_message. That message is
sometimes generated artificially, in helpers like spi_sync_transfer(), just
so that it always exists.
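For example, spi_sync_transfer() (roughly paraphrased from
include/linux/spi/spi.h) is little more than:

static inline int spi_sync_transfer(struct spi_device *spi,
				    struct spi_transfer *xfers,
				    unsigned int num_xfers)
{
	struct spi_message msg;

	/* wrap the bare transfers in a stack-allocated spi_message ... */
	spi_message_init_with_transfers(&msg, xfers, num_xfers);

	/* ... so that the message-based core path always has one */
	return spi_sync(spi, &msg);
}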
Any suggestion as to the most desirable course of action? Try to fix the API
inconsistency of wanting to access the spi_message when all you asked for is
a spi_transfer, try to work around it in the affected drivers, or just toss
this patch set and call it a failed attempt because the impact is too big?
Best regards,
--
David Jander