From: vinod.koul@intel.com (Vinod Koul)
To: linux-arm-kernel@lists.infradead.org
Subject: Tearing down DMA transfer setup after DMA client has finished
Date: Thu, 8 Dec 2016 22:07:55 +0530 [thread overview]
Message-ID: <20161208163755.GH6408@localhost> (raw)
In-Reply-To: <91b0d10c-1bc2-c3e1-4088-f4ad9adcd6c0@free.fr>
On Thu, Dec 08, 2016 at 11:54:51AM +0100, Mason wrote:
> On 08/12/2016 11:39, Vinod Koul wrote:
>
> > > On Wed, Dec 07, 2016 at 04:45:58PM +0000, Måns Rullgård wrote:
> >
> >> Vinod Koul <vinod.koul@intel.com> writes:
> >>
> >>> On Tue, Dec 06, 2016 at 01:14:20PM +0000, M?ns Rullg?rd wrote:
> >>>>
> >>>> That's not going to work very well. Device drivers typically request
> >>>> dma channels in their probe functions or when the device is opened.
> >>>> This means that reserving one of the few channels there will inevitably
> >>>> make some other device fail to operate.
> >>>
> >>> No that doesn't make sense at all, you should get a channel only when you
> >>> want to use it and not in probe!
> >>
> >> Tell that to just about every single driver ever written.
> >
> > Not really. A few do, yes, which is wrong, but not _all_ do that.
>
> Vinod,
>
> Could you explain something to me in layman's terms?
>
> I have a NAND Flash Controller driver that depends on the
> DMA driver under discussion.
>
> Suppose I move the dma_request_chan() call from the driver's
> probe function, to the actual DMA transfer function.
>
> I would want dma_request_chan() to put the calling thread
> to sleep until a channel becomes available (possibly with
> a timeout value).
>
> But Maxime told me dma_request_chan() will just return
> -EBUSY if no channels are available.
That is correct.
> Am I supposed to busy wait in my driver's DMA function
> until a channel becomes available?
If someone else is using the channels, then a busy wait will not help, so
in this case you should fall back to PIO; a few drivers do that.
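That pattern can be sketched roughly as below: request a channel per transfer and fall back to PIO when the engine is fully booked. This is only an illustration; the nfc_* structure and helper names are assumptions, not from any real driver.

```c
/* Hypothetical sketch: per-transfer channel request with PIO fallback.
 * nfc_dev, nfc_pio_transfer() and nfc_dma_transfer() are made-up names. */
static int nfc_do_transfer(struct nfc_dev *nfc, void *buf, size_t len)
{
	struct dma_chan *chan;
	int ret;

	chan = dma_request_chan(nfc->dev, "rx");
	if (IS_ERR(chan)) {
		if (PTR_ERR(chan) != -EBUSY)
			return PTR_ERR(chan);
		/* All channels busy: do the transfer by PIO instead. */
		return nfc_pio_transfer(nfc, buf, len);
	}

	ret = nfc_dma_transfer(nfc, chan, buf, len);
	dma_release_channel(chan);
	return ret;
}
```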
> I don't understand how the multiplexing of a few memory
> channels to many clients is supposed to happen efficiently?
To make it efficient, disregarding your SBOX HW issue, the solution is
virtual channels. You can decouple physical channels from virtual channels.
If one has a SW-controlled MUX, then a channel can service any client. For
some controllers the request lines are hard-wired, so they cannot use just
any channel. But if you don't have this restriction, then the driver can
queue up many transactions from different clients.
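The virtual-channel idea is what the kernel's virt-dma helpers
(drivers/dma/virt-dma.h) support: every client gets its own virtual channel,
descriptors queue on it, and a free physical channel is grabbed only when a
transaction actually issues. A rough sketch, with hypothetical tangox_* names:

```c
/* Sketch only: tangox_vchan, tangox_pchan and tangox_grab_pchan()
 * are assumed names, not an existing driver's API. */
struct tangox_vchan {
	struct virt_dma_chan vc;	/* one per client, cheap to hand out */
	struct tangox_pchan *pchan;	/* NULL until a physical channel is free */
};

static void tangox_issue_pending(struct dma_chan *chan)
{
	struct tangox_vchan *tvc =
		container_of(chan, struct tangox_vchan, vc.chan);
	unsigned long flags;

	spin_lock_irqsave(&tvc->vc.lock, flags);
	/* Bind a physical channel only now; if none is free, the
	 * completion interrupt of another transfer retries later. */
	if (vchan_issue_pending(&tvc->vc) && !tvc->pchan)
		tvc->pchan = tangox_grab_pchan(tvc);
	spin_unlock_irqrestore(&tvc->vc.lock, flags);
}
```

With this split, dma_request_chan() in probe is harmless: it only reserves a
virtual channel, and the scarce physical channels are contended for per
transaction.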
--
~Vinod
Thread overview: 82+ messages
2016-11-23 10:25 Tearing down DMA transfer setup after DMA client has finished Mason
2016-11-23 12:13 ` Måns Rullgård
2016-11-23 12:41 ` Mason
2016-11-23 17:21 ` Måns Rullgård
2016-11-24 10:53 ` Mason
2016-11-24 14:17 ` Måns Rullgård
2016-11-24 15:20 ` Mason
2016-11-24 16:37 ` Måns Rullgård
2016-11-25 4:55 ` Vinod Koul
2016-11-25 11:57 ` Måns Rullgård
2016-11-25 14:05 ` Mason
2016-11-25 14:12 ` Måns Rullgård
2016-11-25 14:28 ` Mason
2016-11-25 14:42 ` Måns Rullgård
2016-11-25 12:45 ` Russell King - ARM Linux
2016-11-25 13:07 ` Måns Rullgård
2016-11-25 13:34 ` Russell King - ARM Linux
2016-11-25 13:50 ` Måns Rullgård
2016-11-25 13:58 ` Russell King - ARM Linux
2016-11-25 14:03 ` Måns Rullgård
2016-11-25 14:17 ` Russell King - ARM Linux
2016-11-25 14:40 ` Måns Rullgård
2016-11-25 14:56 ` Russell King - ARM Linux
2016-11-25 15:08 ` Måns Rullgård
2016-11-25 15:02 ` Mason
2016-11-25 15:12 ` Måns Rullgård
2016-11-25 15:21 ` Mason
2016-11-25 15:28 ` Måns Rullgård
2016-11-25 12:46 ` Mason
2016-11-25 13:11 ` Måns Rullgård
2016-11-25 14:21 ` Mason
2016-11-25 14:37 ` Måns Rullgård
2016-11-25 15:35 ` Mason
2016-11-29 18:25 ` Mason
2016-12-06 5:12 ` Vinod Koul
2016-12-06 12:42 ` Mason
2016-12-06 13:14 ` Måns Rullgård
2016-12-06 15:24 ` Mason
2016-12-06 15:34 ` Måns Rullgård
2016-12-06 22:55 ` Mason
2016-12-07 16:43 ` Vinod Koul
2016-12-07 16:45 ` Måns Rullgård
2016-12-08 10:39 ` Vinod Koul
2016-12-08 10:54 ` Mason
2016-12-08 11:18 ` Geert Uytterhoeven
2016-12-08 11:47 ` Måns Rullgård
2016-12-08 12:03 ` Geert Uytterhoeven
2016-12-08 12:17 ` Mason
2016-12-08 12:21 ` Måns Rullgård
2016-12-08 16:37 ` Vinod Koul [this message]
2016-12-08 16:48 ` Måns Rullgård
2016-12-09 6:59 ` Vinod Koul
2016-12-09 10:25 ` Sebastian Frias
2016-12-09 11:34 ` Måns Rullgård
2016-12-09 11:35 ` Måns Rullgård
2016-12-09 17:17 ` Vinod Koul
2016-12-09 17:28 ` Måns Rullgård
2016-12-09 17:53 ` Vinod Koul
2016-12-09 17:34 ` Mason
2016-12-09 17:56 ` Vinod Koul
2016-12-09 18:17 ` Vinod Koul
2016-12-09 18:23 ` Mason
2016-12-12 5:01 ` Vinod Koul
2016-12-15 11:17 ` Mark Brown
2016-12-08 11:44 ` Måns Rullgård
2016-12-08 11:59 ` Geert Uytterhoeven
2016-12-08 12:20 ` Måns Rullgård
2016-12-08 12:31 ` Geert Uytterhoeven
2016-12-08 12:41 ` Mason
2016-12-08 12:44 ` Måns Rullgård
2016-12-08 13:29 ` Mason
2016-12-08 13:39 ` Måns Rullgård
2016-12-08 15:50 ` Vinod Koul
2016-12-08 16:36 ` Måns Rullgård
2016-12-08 15:40 ` Vinod Koul
2016-12-08 15:43 ` Mason
2016-12-08 16:21 ` Vinod Koul
2016-12-08 16:46 ` Måns Rullgård
2016-12-07 16:41 ` Vinod Koul
2016-12-07 16:44 ` Måns Rullgård
2016-12-08 10:37 ` Vinod Koul
2016-12-08 11:44 ` Måns Rullgård