From: Greg KH <gregkh@suse.de>
To: Jean-Francois Dagenais <jeff.dagenais@gmail.com>
Cc: hjk@hansjkoch.de, tglx@linutronix.de, linux-pci@vger.kernel.org,
	open list <linux-kernel@vger.kernel.org>
Subject: Re: extra large DMA buffer for PCI-E device under UIO
Date: Fri, 18 Nov 2011 14:08:49 -0800	[thread overview]
Message-ID: <20111118220849.GA25205@suse.de> (raw)
In-Reply-To: <B458B2F0-2DDA-461F-A125-8C6C4CDEB6C5@gmail.com>

On Fri, Nov 18, 2011 at 04:16:23PM -0500, Jean-Francois Dagenais wrote:
> Hello fellow hackers.
> 
> I am maintaining a UIO based driver for a PCI-E data acquisition device.
> 
> I map BAR0 of the device to userspace. I also map two memory areas:
> one is used to feed instructions to the acquisition device, the other
> is used autonomously by the PCI device to write the acquired data.

Nice, have a pointer to your driver anywhere so we can include it in the
main kernel tree to make your life easier?
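
If it's not in postable shape yet, for reference, the usual skeleton of
a driver like that looks roughly like this (a minimal sketch with
made-up names, not your actual code):

#include <linux/pci.h>
#include <linux/uio_driver.h>

static struct uio_info my_daq_uio;	/* hypothetical name */

static int my_daq_probe(struct pci_dev *pdev,
			const struct pci_device_id *id)
{
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	/* expose BAR0 to userspace as a physical memory map */
	my_daq_uio.name = "my_daq";
	my_daq_uio.version = "0.1";
	my_daq_uio.mem[0].addr = pci_resource_start(pdev, 0);
	my_daq_uio.mem[0].size = pci_resource_len(pdev, 0);
	my_daq_uio.mem[0].memtype = UIO_MEM_PHYS;

	return uio_register_device(&pdev->dev, &my_daq_uio);
}

(plus the usual struct pci_driver / pci_register_driver boilerplate and
an interrupt handler if you need one)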

> The strategy we have historically used for those two shared memory
> areas was pci_alloc_coherent on v2.6.35 x86_64 (limited to 4MB based
> on my trials). Later, I made use of the VT-d (intel_iommu) to allocate
> as much as 128MB (an arbitrary limit) which appears contiguous to the
> PCI device. I use vmalloc_user to allocate 128MB, then write all the
> physically contiguous segments into a scatterlist, then use pci_map_sg
> which works its way down to intel_iommu. The device DMA addresses I
> get back are contiguous over the whole 128MB. Neat! Our VT-d capable
> devices still use this strategy.
> 
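
Just to be sure I follow, that mapping path would look something like
this, right?  (rough sketch only, error handling trimmed; the 128MB
size and all names here are my assumptions):

#include <linux/pci.h>
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

#define BIG_BUF_SIZE	(128UL * 1024 * 1024)	/* assumed 128MB */

static void *big_buf;
static struct sg_table big_sgt;

static int map_big_buffer(struct pci_dev *pdev)
{
	unsigned int npages = BIG_BUF_SIZE >> PAGE_SHIFT;
	struct scatterlist *sg;
	int i;

	big_buf = vmalloc_user(BIG_BUF_SIZE);
	if (!big_buf)
		return -ENOMEM;

	if (sg_alloc_table(&big_sgt, npages, GFP_KERNEL))
		return -ENOMEM;

	/* one page per entry here; physically contiguous runs could be
	 * coalesced into fewer, larger entries first */
	for_each_sg(big_sgt.sgl, sg, npages, i)
		sg_set_page(sg, vmalloc_to_page(big_buf + i * PAGE_SIZE),
			    PAGE_SIZE, 0);

	if (!pci_map_sg(pdev, big_sgt.sgl, big_sgt.nents,
			PCI_DMA_BIDIRECTIONAL))
		return -EIO;

	/* with intel_iommu active, sg_dma_address(big_sgt.sgl) should
	 * then cover the whole buffer contiguously, as you describe */
	return 0;
}
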
> This large memory is mission-critical in making the acquisition
> device autonomous (real-time), yet it keeps the DMA implementation
> very simple. Today, we are re-using this device on a CPU architecture
> that has no IOMMU (Intel E6XX/EG20T), and we want to avoid creating a
> scatter-gather scheme between my driver and the FPGA (PCI device).
> 
> So I went back to the old pci_alloc_coherent method, which, although
> limited to 4 MB, will do for the early development phases. Instead of
> 2.6.35, we are doing preliminary development on 2.6.37 and will
> probably use 3.1 or later. The CPU/device shared memory maps (1MB and
> 4MB) are allocated using pci_alloc_coherent and handed to UIO as
> physical memory using the dma_addr_t returned by the pci_alloc
> func.
> 
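
In other words, continuing the probe() sketch from above, something
along these lines, if I read you right (sketch only; the map indices
and sizes are just my assumptions):

	dma_addr_t cmd_dma, data_dma;
	void *cmd_buf, *data_buf;

	cmd_buf  = pci_alloc_coherent(pdev, 1 * 1024 * 1024, &cmd_dma);
	data_buf = pci_alloc_coherent(pdev, 4 * 1024 * 1024, &data_dma);
	if (!cmd_buf || !data_buf)
		return -ENOMEM;

	/* mem[0] is BAR0; the two DMA buffers become maps 1 and 2,
	 * exposed to userspace with the bus address as "physical" */
	my_daq_uio.mem[1].addr = cmd_dma;
	my_daq_uio.mem[1].size = 1 * 1024 * 1024;
	my_daq_uio.mem[1].memtype = UIO_MEM_PHYS;

	my_daq_uio.mem[2].addr = data_dma;
	my_daq_uio.mem[2].size = 4 * 1024 * 1024;
	my_daq_uio.mem[2].memtype = UIO_MEM_PHYS;
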
> The 1st memory map is written to by the CPU and read by the device.
> The 2nd memory map is typically written by the device and read by the
> CPU, but future features may have the device also read this memory.
> 
> My initial testing on the Atom E6XX shows the PCI device failing when
> trying to read from the first memory map. I suspect the PCI-E payload
> sizes, which may be somewhat hardcoded in the FPGA firmware... we will
> confirm this soon.

That would be good to find out.

> Now, from the get-go I have felt lucky to have made this work, given
> my limited research into the intricacies of the kernel's memory
> management. So I ask two things:
> 
> - Is this kosher?

I think so, yes, but others who know the DMA subsystem better than I
should chime in here, as I might be totally wrong.

> - Is there a better/easier/safer way to achieve this? (Remember that
> for the second map, the more memory I have, the better. We have a gig
> of RAM; if I take, say, 256MB, that would be OK too.)
> 
> I had thought about cutting out a chunk of RAM from the kernel's boot
> args, but had always feared cache/snooping errors. Not to mention I
> had no idea how to "claim" or set up this memory from my driver's
> probe function. Maybe I would still be lucky and it would just work?
> mmmh...

Yeah, don't do that, it might not work out well.

greg k-h
