kernelnewbies.kernelnewbies.org archive mirror
From: christoph@boehmwalder.at (Christoph Böhmwalder)
To: kernelnewbies@lists.kernelnewbies.org
Subject: pcie dma transfer
Date: Mon, 4 Jun 2018 14:31:24 +0200
Message-ID: <20180604123124.d7ygnzgk5vp4wpfx@localhost>
In-Reply-To: <20180604120505.GA18457@kroah.com>

On Mon, Jun 04, 2018 at 02:05:05PM +0200, Greg KH wrote:
> The problem in this design might happen right here.  What happens
> in the device between the interrupt being signaled, and the data being
> copied out of the buffer?  Where do new packets go to?  How does the
> device know it is "safe" to write new data to that memory?  That extra
> housekeeping in the hardware gets very complex very quickly.

That's one of our concerns as well; our solution seems to be far too
complex (for example, we still need a way to parse the individual
packets out of the buffer). I think we should focus more on KISS going
forward.

> This all might work, if you have multiple buffers, as that is how some
> drivers work.  Look at how the XHCI design is specified.  The spec is
> open, and it gives you a very good description of how a relatively
> high-speed PCIe device should work, with buffer management and the like.
> You can probably use a lot of that type of design for your new work and
> make things run a lot faster than what you currently have.
> 
> You also have access to loads of very high-speed drivers in Linux today,
> to get design examples from.  Look at the networking drivers of the
> 10, 40, and 100Gb cards, as well as the InfiniBand drivers, and even some
> of the PCIe flash block drivers.  Look at what the NVME spec says for
> how those types of high-speed storage devices should be designed for
> other examples.

Thanks for the pointers; I will take a look at those.

For now, we're looking at implementing a solution using the ring buffer
method described in LDD, since that seems quite reasonable. That may all
change once we've researched the other drivers a bit, though.

Thanks for your help and (as always) quick response time!

--
Regards,
Christoph


Thread overview: 3+ messages
2018-06-04 11:12 pcie dma transfer Christoph Böhmwalder
2018-06-04 12:05 ` Greg KH
2018-06-04 12:31   ` Christoph Böhmwalder [this message]
