public inbox for linux-kernel@vger.kernel.org
From: Vitaly Wool <vwool@ru.mvista.com>
To: David Brownell <david-b@pacbell.net>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [PATCH/RFC 0/2] simple SPI framework, refresh + ads7864 driver
Date: Thu, 06 Oct 2005 08:57:14 +0400	[thread overview]
Message-ID: <4344AEAA.7030309@ru.mvista.com> (raw)
In-Reply-To: <20051005162117.24DDBEE95B@adsl-69-107-32-110.dsl.pltn13.pacbell.net>

David Brownell wrote:

>>Of course we want to use scatter-gather lists.
>
>The only way "of course" applies is if you're accepting requests
>from the block layer, which talks in terms of "struct scatterlist".
>
>In my investigations of SPI, I don't happen to have come across any
>SPI slave device that would naturally be handled as a block device.
>There's lots of flash (and dataflash); that's MTD, not block.
What about SD card controllers on an SPI bus? I've worked with two of them. ;-)

>>	The DMA controller 
>>mentioned above can handle only 0xFFF transfer units at a transfer so we 
>>have to split the large transfers into SG lists.
>
>Odd, I've seen plenty other drivers that just segment large buffers
>into multiple DMA transfers ... without wanting "struct scatterlist".
I didn't claim that struct scatterlist should be used. We use 
'hardware-driven' structures for our sg lists.

>  - More often they just break big buffers into lots of little
>    transfers.  Just like PIO, but faster.  (And in fact, they may
>    need to prime the pump with some PIO to align the buffer.)
That won't work in some cases: SPI might generate additional clock 
cycles after each transfer, which wouldn't happen in the case of a real 
sg transfer.
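
Mechanically, the splitting David describes is just ceiling division 
against the controller's per-transfer cap; a minimal sketch, assuming the 
0xFFF-unit limit mentioned earlier (the helper name is invented, not from 
any patch here). Vitaly's objection is that each resulting segment 
boundary may add clock cycles that a single hardware sg transfer would not.

```c
#include <stddef.h>

/* Per-transfer limit of the DMA controller discussed above. */
#define DMA_MAX_UNITS 0xFFF

/* Hypothetical helper: how many back-to-back DMA transfers a buffer
 * of 'len' units needs if each transfer moves at most DMA_MAX_UNITS. */
static size_t dma_segment_count(size_t len)
{
	return (len + DMA_MAX_UNITS - 1) / DMA_MAX_UNITS;
}
```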

>  - Sometimes they just reject segments that are too large to
>    handle cleanly at a low level, and require higher level code
>    to provide more byte-sized blocks of I/O.
It's possible in some cases but won't work in others. See above.
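
David's "reject oversized segments" option amounts to a length check at 
submit time, pushing the splitting up to the caller. A rough sketch, with 
an invented function name and reusing the 0xFFF limit from above:

```c
#include <errno.h>
#include <stddef.h>

/* Per-transfer limit of the DMA controller discussed above. */
#define DMA_MAX_UNITS 0xFFF

/* Hypothetical submit-time check: refuse segments the controller
 * cannot handle in one transfer, so higher-level code must split. */
static int spi_check_segment(size_t len)
{
	return (len > DMA_MAX_UNITS) ? -EINVAL : 0;
}
```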

>If "now" _were_ the point we need to handle scatterlists, I've shown
>a nice efficient way to handle them, already well proven in the context
>of another serial bus protocol (USB).
>
>>Moreover, that looks like it may imply redundant data copying.
>
>Absolutely not.  Everything was aimed at zero-copy I/O; why do
>you think I carefully described "DMA mapping" everywhere, rather
>than "memcpy"?
I'm afraid that copying may be implicit.

>>Can you please elaborate what you meant by 'readiness to accept DMA 
>>addresses' for the controller drivers?
>
>Go look at the parts of the USB stack I mentioned.  That's what I mean.
>
> - In the one case, DMA-aware controller drivers look at each buffer
>   to determine whether they have to manage the mappings themselves.
>   If the caller provided the DMA address, they won't set up mappings.
>
> - In the other case, they always expect their caller to have set
>   up the DMA mappings.  (Where "caller" is infrastructure code,
>   not the actual driver issuing the I/O request.)
>
>The guts of such drivers would only talk in terms of DMA; the way those
>cases differ is how the driver entry/exit points ensure that can be done.
>
>>As far as I see it now, the whole thing looks wrong. The thing that we 
>>suggest (i. e. abstract handles for memory allocation set to kmalloc by 
>>default) is looking far better IMHO and doesn't require any flags which 
>>usage increases uncertainty in the core.
>
>You are conflating memory allocation with DMA mapping.  Those notions
>are quite distinct, except for dma_alloc_coherent() where one operation
>does both.
>
>The normal goal for drivers is to accept buffers allocated from anywhere
>that Documentation/DMA-mapping.txt describes as being DMA-safe ... and
>less often, message passing frameworks will do what USB does and accept
>DMA addresses rather than CPU addresses.
As for our core implementation, it's totally agnostic about what kind of 
addresses are used and how they should be handled; that's all left for 
the controller driver to decide. I still think that's a far better 
approach than lots of pointers and flags.
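
For reference, the first USB-style convention David describes can be 
sketched roughly as follows. The struct, field names, and sentinel value 
are invented for illustration; they are not taken from David's patch or 
the USB stack.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch only.  The message optionally carries a bus
 * address; the controller driver sets up the DMA mapping itself only
 * when the caller did not provide one. */
#define NO_DMA_ADDR ((uint64_t)~0ULL)

struct spi_msg {
	void     *buf;	/* CPU address, always valid */
	uint64_t  dma;	/* bus address, or NO_DMA_ADDR if unmapped */
};

static bool driver_must_map(const struct spi_msg *m)
{
	return m->dma == NO_DMA_ADDR;
}
```

The second convention is the same structure with driver_must_map() always 
false, because infrastructure code guarantees the bus address is filled 
in before the controller driver ever sees the message.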

Vitaly


Thread overview: 14+ messages
2005-10-05 16:21 [PATCH/RFC 0/2] simple SPI framework, refresh + ads7864 driver David Brownell
2005-10-05 16:24 ` Russell King
2005-10-05 17:27   ` David Brownell
2005-10-06  4:57 ` Vitaly Wool [this message]
2005-10-06 18:13 ` Mark Underwood
2005-10-06 18:20   ` Vitaly Wool
  -- strict thread matches above, loose matches on Subject: below --
2005-10-05 15:18 David Brownell
2005-10-05 15:10 David Brownell
2005-10-13 19:37 ` Lee Revell
2005-10-04 20:18 David Brownell
2005-10-05  8:07 ` Vitaly Wool
2005-10-04 18:02 David Brownell
2005-10-04 19:08 ` Vitaly Wool
2005-10-05  7:56 ` Vitaly Wool
