public inbox for linux-mmc@vger.kernel.org
From: Arnd Bergmann <arnd@arndb.de>
To: David Daney <ddaney@caviumnetworks.com>
Cc: Ulf Hansson <ulf.hansson@linaro.org>,
	Matt Redfearn <matt.redfearn@imgtec.com>,
	linux-mmc <linux-mmc@vger.kernel.org>,
	Aleksey Makarov <aleksey.makarov@caviumnetworks.com>,
	Chandrakala Chavva <cchavva@caviumnetworks.com>,
	David Daney <david.daney@cavium.com>,
	Aleksey Makarov <aleksey.makarov@auriga.com>,
	Leonid Rosenboim <lrosenboim@caviumnetworks.com>,
	Peter Swain <pswain@cavium.com>,
	Aaron Williams <aaron.williams@cavium.com>
Subject: Re: [RESEND PATCH v7 2/2] mmc: OCTEON: Add host driver for OCTEON MMC controller
Date: Fri, 22 Apr 2016 22:23:58 +0200	[thread overview]
Message-ID: <4807095.U0fzMFt2B3@wuerfel> (raw)
In-Reply-To: <571A6412.6070303@caviumnetworks.com>

On Friday 22 April 2016 10:49:06 David Daney wrote:
> On 04/22/2016 09:42 AM, Arnd Bergmann wrote:
> > On Friday 22 April 2016 15:54:56 Ulf Hansson wrote:
> >>
> >>>
> >>> My suggestion with using
> >>>
> >>>          wait_event(host->wq, !cmpxchg(&host->current_req, NULL, mrq));
> >>>
> >>> should sufficiently solve the problem, but the suggestion of using
> >>> a kthread (even though not needed for taking a mutex) would still
> >>> have some advantages and one disadvantage:
> >>>
> >>> + We never need to spin in the irq context (also achievable using
> >>>    a threaded handler)
> >>> + The request callback always returns immediately after queuing up
> >>>    the request to the kthread, rather than blocking for a potentially
> >>>    long time while waiting for an operation in another slot to complete
> >>> + it very easily avoids the problem of serialization between
> >>>    the slots, and ensures that each slot gets an equal chance to
> >>>    send the next request.
> >>> - you get a slightly higher latency for waking up the kthread in
> >>>    order to do a simple request (comparable to the latency
> >>>    introduced by an irq thread).
> >>>
> >>
> >> Currently I can't think of anything better, so I guess something along
> >> these lines is worth a try.
> >>
> >> No matter what, I guess we want to avoid using a semaphore for as
> >> long as possible, right!?
> >
> > Yes, I think that would be good, to avoid curses from whoever tries
> > to eliminate them the next time.
> >
> > I think there is some renewed interest in realtime kernels these
> > days, and semaphores are known to cause problems with priority
> > inversion (precisely because you don't know which thread will
> > release it).
> 
> In this particular case, there can be no priority inversion, as the 
> thing we are waiting for is a hardware event.  The timing of that is not 
> influenced by task scheduling.  The use of wait_event instead of struct 
> semaphore would be purely cosmetic.

My point was just that it's possible someone will try to remove all
the semaphores again soon. You are right that any possible priority
inversion in the existing code would not get changed by using a
different method to wait for the completion of the request.

	Arnd


Thread overview: 32+ messages
2016-03-31 15:26 [RESEND PATCH v7 1/2] mmc: OCTEON: Add DT bindings for OCTEON MMC controller Matt Redfearn
2016-03-31 15:26 ` [RESEND PATCH v7 2/2] mmc: OCTEON: Add host driver " Matt Redfearn
2016-04-19 20:46   ` Arnd Bergmann
2016-04-19 21:45     ` David Daney
2016-04-19 22:09       ` Arnd Bergmann
2016-04-19 23:27         ` David Daney
2016-04-19 23:57           ` Arnd Bergmann
2016-04-20  0:02             ` Arnd Bergmann
2016-04-21  8:02           ` Ulf Hansson
2016-04-21 10:15             ` Arnd Bergmann
2016-04-21 12:44               ` Ulf Hansson
2016-04-21 13:19                 ` Arnd Bergmann
2016-04-22 13:54                   ` Ulf Hansson
2016-04-22 16:42                     ` Arnd Bergmann
2016-04-22 17:49                       ` David Daney
2016-04-22 20:23                         ` Arnd Bergmann [this message]
2016-04-14 12:45 ` [RESEND PATCH v7 1/2] mmc: OCTEON: Add DT bindings " Ulf Hansson
2016-04-18  8:53   ` Matt Redfearn
2016-04-18 11:13     ` Ulf Hansson
2016-04-18 11:37       ` Matt Redfearn
2016-04-18 12:08         ` Ulf Hansson
2016-04-18 12:57           ` Matt Redfearn
2016-04-18 22:59             ` David Daney
2016-04-19  9:15             ` Ulf Hansson
2016-04-19 16:13               ` David Daney
2016-04-19 19:33                 ` Ulf Hansson
2016-04-19 20:25                   ` David Daney
2016-04-19 20:56                     ` Arnd Bergmann
2016-04-19 21:50                       ` David Daney
2016-04-20  9:32                     ` Ulf Hansson
2016-04-20 22:32                       ` David Daney
2016-04-20 22:42                         ` Arnd Bergmann
