From: Brice Goglin <Brice.Goglin@inria.fr>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Miller <davem@davemloft.net>,
	nhorman@tuxdriver.com, netdev@vger.kernel.org
Subject: Re: [RFC] Idea about increasing efficiency of skb allocation in network devices
Date: Mon, 27 Jul 2009 10:27:05 +0200	[thread overview]
Message-ID: <4A6D64D9.6010601@inria.fr> (raw)
In-Reply-To: <4A6D5E1E.3090907@gmail.com>

Eric Dumazet wrote:
>> Is there an easy way to get this NUMA node from the application socket
>> descriptor?
>>     
>
> That's not easy: this information can change for every packet (think of
> bonding setups, with aggregation of devices on different NUMA nodes)
>   

If we return a mask of CPUs near the NIC, then in a bonding setup we could
return the union of the masks of CPUs close to each of the aggregated
devices. Without bonding the result is exact; with bonding, that behavior
still looks acceptable to me.
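
To make this concrete, here is a rough userspace sketch of how an
application could already gather such a mask today (just an illustration:
it assumes the NIC is PCI-backed so the local_cpus sysfs attribute is
there, and "eth0" is only an example name; for a bond one would repeat the
read for each slave listed under bonding/slaves and OR the masks together):

/* Rough userspace sketch (not kernel code): read the cpumask of CPUs
 * close to a NIC from sysfs.  Assumes the device is PCI-backed so that
 * /sys/class/net/<dev>/device/local_cpus exists; "eth0" is just an
 * example name.  For a bond, repeat this for each name listed in
 * /sys/class/net/<bond>/bonding/slaves and OR the masks together. */
#include <stdio.h>
#include <string.h>

static int read_local_cpus(const char *ifname, char *mask, size_t len)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/class/net/%s/device/local_cpus", ifname);
	f = fopen(path, "r");
	if (!f)
		return -1;	/* virtual device: no PCI parent in sysfs */
	if (!fgets(mask, (int)len, f)) {
		fclose(f);
		return -1;
	}
	mask[strcspn(mask, "\n")] = '\0';	/* strip trailing newline */
	fclose(f);
	return 0;
}

int main(void)
{
	char mask[512];

	if (read_local_cpus("eth0", mask, sizeof(mask)) == 0)
		printf("CPUs close to eth0: %s\n", mask);
	return 0;
}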

> We could add a getsockopt() call to peek this information for the next
> data to be read from the socket (it would return the node id where the skb
> data is sitting, hoping that the NIC driver hadn't applied copybreak to it,
> i.e. allocated a small skb and copied the device-provided data into it
> before feeding the packet to the network stack)
>
>
>   
>> Also, one question that was raised at the Linux Symposium is: how do you
>> know which processors run the receive queue for a specific connection?
>> It would be nice to have a way to retrieve such information in the
>> application to avoid inter-node and inter-core/cache traffic.
>>     
>
> All this depends on whether you have multiqueue devices or not, and on
> whether traffic spreads over all the queues or not.
>   

Again, on a per-connection basis, you should know whether your packets
are going through a single queue or through all of them. If they go through
a single queue, return a mask of CPUs near that exact queue. If they go
through multiple queues (or if you don't know), just take the union of the
cpumasks of all queues.
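
For what it's worth, here is a purely hypothetical sketch of what such a
getsockopt()-based interface could look like from the application side.
SO_INCOMING_NODE is a made-up name and value, standing for whatever option
such a patch would actually add; the same kind of call could just as well
return a cpumask for the receive queue instead of a node id:

/* Purely hypothetical sketch of the proposed getsockopt(): ask the
 * kernel which NUMA node holds the data of the next packet queued on
 * the socket, so the reading thread can stay on CPUs of that node.
 * SO_INCOMING_NODE is a made-up name/value, not a real socket option. */
#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_INCOMING_NODE
#define SO_INCOMING_NODE 999		/* placeholder, not in any kernel */
#endif

static int next_data_node(int sockfd)
{
	int node = -1;
	socklen_t len = sizeof(node);

	if (getsockopt(sockfd, SOL_SOCKET, SO_INCOMING_NODE, &node, &len) < 0)
		return -1;		/* kernel without such an option */
	return node;
}

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	printf("next data node: %d\n", next_data_node(fd));
	return 0;
}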

Brice



Thread overview: 9+ messages
2009-07-27  0:36 [RFC] Idea about increasing efficiency of skb allocation in network devices Neil Horman
2009-07-27  1:02 ` David Miller
2009-07-27  7:10   ` Brice Goglin
2009-07-27  7:58     ` Eric Dumazet
2009-07-27  8:27       ` Brice Goglin [this message]
2009-07-27 10:55       ` Neil Horman
2009-07-29  8:20         ` Brice Goglin
2009-07-29 10:47           ` Neil Horman
2009-07-27 10:52   ` Neil Horman
