From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>,
	"John Fastabend" <john.fastabend@gmail.com>,
	"Willem de Bruijn" <willemdebruijn.kernel@gmail.com>,
	"Björn Töpel" <bjorn.topel@intel.com>,
	"Karlsson, Magnus" <magnus.karlsson@intel.com>,
	"Alexander Duyck" <alexander.duyck@gmail.com>,
	"Mel Gorman" <mgorman@techsingularity.net>,
	"Tom Herbert" <tom@herbertland.com>,
	"Brenden Blanco" <bblanco@plumgrid.com>,
	"Tariq Toukan" <tariqt@mellanox.com>,
	"Saeed Mahameed" <saeedm@mellanox.com>,
	"Jesse Brandeburg" <jesse.brandeburg@intel.com>,
	"Kalman Meth" <METH@il.ibm.com>,
	brouer@redhat.com
Subject: Re: Designing a safe RX-zero-copy Memory Model for Networking
Date: Mon, 12 Dec 2016 16:10:26 +0100	[thread overview]
Message-ID: <20161212161026.0dfd2e13@redhat.com> (raw)
In-Reply-To: <20161212141433.GB19987@rapoport-lnx>

On Mon, 12 Dec 2016 16:14:33 +0200
Mike Rapoport <rppt@linux.vnet.ibm.com> wrote:

> On Mon, Dec 12, 2016 at 10:40:42AM +0100, Jesper Dangaard Brouer wrote:
> > 
> > On Mon, 12 Dec 2016 10:38:13 +0200 Mike Rapoport <rppt@linux.vnet.ibm.com> wrote:
> >   
> > > Hello Jesper,
> > > 
> > > On Mon, Dec 05, 2016 at 03:31:32PM +0100, Jesper Dangaard Brouer wrote:  
> > > > Hi all,
> > > > 
> > > > This is my design for how to safely handle RX zero-copy in the network
> > > > stack, by using page_pool[1] and modifying NIC drivers.  Safely means
> > > > not leaking kernel info in pages mapped to userspace and resilience
> > > > so a malicious userspace app cannot crash the kernel.
> > > > 
> > > > Design target
> > > > =============
> > > > 
> > > > Allow the NIC to function as a normal Linux NIC and be shared in a
> > > > safe manner between the kernel network stack and an accelerated
> > > > userspace application using RX zero-copy delivery.
> > > > 
> > > > The target is to provide the basis for building RX zero-copy solutions
> > > > in a memory-safe manner.  An efficient communication channel for
> > > > userspace delivery is out of scope for this document, but OOM
> > > > considerations are discussed below (`Userspace delivery and OOM`_).
> > > 
> > > Sorry, if this reply is a bit off-topic.  
> > 
> > It is very much on topic IMHO :-)
> >   
> > > I'm working on implementation of RX zero-copy for virtio and I've dedicated
> > > some thought about making guest memory available for physical NIC DMAs.
> > > I believe this is quite related to your page_pool proposal, at least from
> > > the NIC driver perspective, so I'd like to share some thoughts here.  
> > 
> > Seems quite related. I'm very interested in cooperating with you! I'm
> > not very familiar with virtio, and how packets/pages gets channeled
> > into virtio.  
> 
> They are copied :-)
> Presuming we are dealing only with the vhost backend, the received skb
> eventually gets converted to IOVs, which in turn are copied into guest
> memory. The IOVs point to guest memory that is allocated by virtio-net
> running in the guest.

Thanks for explaining that. It seems like a lot of overhead. Let me make
sure I have wrapped my head around this: the hardware NIC receives the
packet/page into the RX ring, it gets converted to IOVs and is
conceptually transmitted into the guest, and the guest side then has an
RX function to handle the packet. Have I understood that correctly?
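
Just so we are talking about the same step, here is a minimal sketch of
that copy as I understand it (the wrapper function name is hypothetical;
the real code lives in drivers/vhost/net.c and is more involved):

#include <linux/skbuff.h>
#include <linux/uio.h>

/* Conceptual sketch: the skb received by the host NIC is copied into
 * guest memory described by an iov_iter built from the guest's RX
 * descriptors (memory allocated by virtio-net inside the guest).
 */
static int sketch_copy_skb_to_guest(struct sk_buff *skb,
				    struct iov_iter *guest_iov)
{
	int err = skb_copy_datagram_iter(skb, 0, guest_iov, skb->len);

	if (!err)
		consume_skb(skb); /* the packet data now lives in guest memory */
	return err;
}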

 
> > > The idea is to dedicate one (or more) of the NIC's queues to a VM, e.g.
> > > using macvtap, and then propagate guest RX memory allocations to the NIC
> > > using something like a new .ndo_set_rx_buffers method.
> > 
> > I believe the page_pool API/design aligns with this idea/use-case.
> >   
> > > What is your view on the interface between the page_pool and the NIC
> > > drivers?
> > 
> > In my Proof-of-Concept implementation, the NIC driver (mlx5) registers
> > a page_pool per RX queue.  This is done for two reasons: (1) performance,
> > and (2) to support use-cases where only a single RX-ring queue is
> > (re)configured to support RX zero-copy.  There is some extra cost
> > associated with enabling this mode, so it makes sense to only enable it
> > when needed.
> > 
> > I've not decided how this gets enabled, maybe via some new driver NDO.  It
> > could also happen when an XDP program that requests this feature gets
> > loaded.
> > 
> > The macvtap solution is nice and we should support it, but it requires
> > VMs to have their MAC addresses registered on the physical switch.  This
> > design is about adding flexibility.  Registering an XDP eBPF filter
> > provides the maximum flexibility for matching the destination VM.
> 
> I'm not very familiar with XDP eBPF, and it's difficult for me to estimate
> what needs to be done in a BPF program to do a proper conversion of the skb
> to the virtio descriptors.

XDP is a step _before_ the SKB is allocated.  The XDP eBPF program can
modify the packet-page data, but I don't think it is needed for your
use-case.  View XDP (primarily) as an early (demux) filter.
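
To give a feel for what I mean by an early demux filter, here is a
minimal XDP program sketch (the VM destination MAC 02:00:00:00:00:01 is
made up, and today both branches can only return XDP_PASS, since the
redirect-into-another-net_device action discussed below does not exist
yet):

#include <linux/bpf.h>
#include <linux/if_ether.h>

#define SEC(NAME) __attribute__((section(NAME), used))

SEC("xdp")
int xdp_vm_demux(struct xdp_md *ctx)
{
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;

	/* Bounds check required by the verifier before touching the header */
	if ((void *)(eth + 1) > data_end)
		return XDP_ABORTED;

	/* Example-only policy: match the VM's (made-up) destination MAC */
	if (eth->h_dest[0] == 0x02 && eth->h_dest[1] == 0x00 &&
	    eth->h_dest[2] == 0x00 && eth->h_dest[3] == 0x00 &&
	    eth->h_dest[4] == 0x00 && eth->h_dest[5] == 0x01)
		return XDP_PASS; /* a future "TX into net_device" action would go here */

	return XDP_PASS;         /* everything else: normal network stack */
}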

XDP is missing a feature you need, which is TX of a packet into another
net_device (I actually imagine a port mapping table that points to a
net_device).  This requires a new "TX-raw" NDO that takes a page (+
offset and length).
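
Roughly along these lines (all names are made up for illustration;
nothing of this exists today):

#include <linux/netdevice.h>

/* Hypothetical sketch of the new NDO: hand a "raw" packet-page (plus
 * offset/length) directly to another net_device, bypassing SKB
 * allocation.  A port mapping table in the XDP layer would select the
 * destination net_device.
 */
struct net_device_ops_sketch {
	/* ... the existing ndo_* callbacks ... */
	int (*ndo_xmit_raw_page)(struct net_device *dev,
				 struct page *page,
				 unsigned int offset,
				 unsigned int len);
};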

I imagine the virtio driver (virtio_net or a new driver?) getting
extended with this new "TX-raw" NDO that takes "raw" packet-pages.
Whether zero-copy is possible is determined by checking whether the page
originates from a page_pool that has zero-copy enabled (and likely by
matching against a "protection domain" id number).
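
A rough sketch of the check I have in mind on the receiving (virtio)
side of that NDO -- none of these page_pool fields or helpers exist,
they only illustrate the idea:

#include <linux/mm_types.h>
#include <linux/types.h>

/* Hypothetical minimal pool descriptor and lookup for this sketch */
struct page_pool_sketch {
	bool zero_copy_enabled;
	u32  protection_domain_id;
};
struct page_pool_sketch *sketch_page_to_pool(struct page *page);

/* Decide whether a packet-page may be zero-copy mapped into the guest,
 * or must fall back to a copy.
 */
static bool sketch_page_zc_eligible(struct page *page, u32 guest_pd_id)
{
	struct page_pool_sketch *pool = sketch_page_to_pool(page);

	if (!pool)
		return false;           /* not from a page_pool: copy */

	if (!pool->zero_copy_enabled)
		return false;           /* pool not set up for ZC: copy */

	/* Only zero-copy when the pool belongs to the same protection
	 * domain as the guest that will see this memory. */
	return pool->protection_domain_id == guest_pd_id;
}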


> We have not considered using XDP yet, so we've decided to limit the initial
> implementation to macvtap, because there we can ensure correspondence between
> a NIC queue and a virtual NIC, which is not the case with the more generic
> tap device. It could be that using XDP will allow for a generic solution for
> the virtio case as well.

You don't need an XDP filter if you can make the HW do the early demux
binding into a queue.  The check for whether the memory is zero-copy
enabled would be the same.

> >   
> > > Have you considered using a "push" model for setting the NIC's RX memory?
> > 
> > I don't understand what you mean by a "push" model?  
> 
> Currently, memory allocation in NIC drivers boils down to alloc_page with
> some wrapping code. I see two possible ways to make the NIC use some
> preallocated pages: either the NIC driver calls an API (probably different
> from alloc_page) to obtain that memory, or there is an NDO API that allows
> setting the NIC's RX buffers. I named the latter case "push".

As you might have guessed, I'm not into the "push" model, because it
means I cannot share the queue with the normal network stack.  Such
sharing is something I believe is possible, as outlined (in this email
and in [2]), and it can be done without HW filter features (like
macvlan).
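
To make the contrast concrete, below is the kind of "pull" style RX-ring
refill I have in mind, where the driver pulls pages from the per-queue
page_pool, so the very same queue keeps serving the normal network stack
whether or not a zero-copy consumer is attached (reusing the hypothetical
page_pool_sketch from the sketch above; all function and struct names are
illustrative only, not a committed API):

/* Hypothetical per-queue refill in the driver RX path ("pull" model).
 * The pool hands out recycled pages when available and falls back to
 * the page allocator otherwise, so the queue works unchanged for the
 * normal stack even with no zero-copy consumer attached.
 */
struct rx_ring_sketch {
	struct page_pool_sketch *pool;  /* registered per RX queue */
	/* ... ring descriptors etc. ... */
};

struct page *sketch_page_pool_alloc(struct page_pool_sketch *pool);
void sketch_rx_ring_post_buffer(struct rx_ring_sketch *ring, struct page *page);

static int sketch_rx_ring_refill(struct rx_ring_sketch *ring, int budget)
{
	int filled = 0;

	while (filled < budget) {
		struct page *page = sketch_page_pool_alloc(ring->pool);

		if (!page)
			break;

		sketch_rx_ring_post_buffer(ring, page); /* hand page to HW */
		filled++;
	}
	return filled;
}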

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

[1] https://prototype-kernel.readthedocs.io/en/latest/networking/XDP/index.html
[2] https://prototype-kernel.readthedocs.io/en/latest/vm/page_pool/design/memory_model_nic.html

Thread overview: 30+ messages
2016-12-05 14:31 Designing a safe RX-zero-copy Memory Model for Networking Jesper Dangaard Brouer
2016-12-12  8:38 ` Mike Rapoport
2016-12-12  9:40   ` Jesper Dangaard Brouer
2016-12-12 14:14     ` Mike Rapoport
2016-12-12 14:49       ` John Fastabend
2016-12-12 17:13         ` Jesper Dangaard Brouer
2016-12-12 18:06           ` Christoph Lameter
2016-12-13 16:10             ` Jesper Dangaard Brouer
2016-12-13 16:36               ` Christoph Lameter
2016-12-13 17:43               ` John Fastabend
2016-12-13 19:53                 ` David Miller
2016-12-13 20:08                   ` John Fastabend
2016-12-14  9:39                     ` Jesper Dangaard Brouer
2016-12-14 16:32                       ` John Fastabend
2016-12-14 16:45                         ` Alexander Duyck
2016-12-14 21:29                           ` Jesper Dangaard Brouer
2016-12-14 22:45                             ` Alexander Duyck
2016-12-15  8:28                               ` Jesper Dangaard Brouer
2016-12-15 15:59                                 ` Alexander Duyck
2016-12-15 16:38                                 ` Christoph Lameter
2016-12-14 21:04                         ` Jesper Dangaard Brouer
2016-12-13 18:39               ` Hannes Frederic Sowa
2016-12-14 17:00                 ` Christoph Lameter
2016-12-14 17:37                   ` David Laight
2016-12-14 19:43                     ` Christoph Lameter
2016-12-14 20:37                       ` Hannes Frederic Sowa
2016-12-14 21:22                         ` Christoph Lameter
2016-12-13  9:42         ` Mike Rapoport
2016-12-12 15:10       ` Jesper Dangaard Brouer [this message]
2016-12-13  8:43         ` Mike Rapoport
