netdev.vger.kernel.org archive mirror
From: Badalian Vyacheslav <slavon@bigtelecom.ru>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>,
	Linux Netdev List <netdev@vger.kernel.org>
Subject: Re: ixgbe question
Date: Mon, 23 Nov 2009 13:30:42 +0300	[thread overview]
Message-ID: <4B0A6452.2020508@bigtelecom.ru> (raw)
In-Reply-To: <4B0A6218.9040303@gmail.com>

Hello Eric. I have been playing with this card for 3 weeks, so maybe this will help you :)

By default Intel's flow handling uses only the first CPU. It's strange.
If we set an interrupt's affinity to a single CPU core, it will use that CPU core.
If we set the affinity to two or more CPUs, the mask is applied but doesn't take effect.
See the ixgbe driver README from intel.com; it has a parameter for RSS flows, and I think that is what controls this :)
The driver from intel.com also ships a script that splits the Rx/Tx queues across CPU cores, but you must replace "tx rx" in the script with "TxRx".
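
For example, a minimal sketch (not the Intel script itself) that pins each
TxRx interrupt to its own core by writing a hex CPU bitmask to
/proc/irq/<irq>/smp_affinity. The "eth0-TxRx" name is an assumption; take
the real names and IRQ numbers from /proc/interrupts, run it as root, and
stop irqbalance first or it may rewrite the masks:

cpu=0
grep eth0-TxRx /proc/interrupts | while read -r line; do
    irq=${line%%:*}                       # IRQ number is the first field before ':'
    printf '%x' $((1 << cpu)) > /proc/irq/$irq/smp_affinity
    cpu=$((cpu + 1))
done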

P.S. Please also look at this if you can and want:
On e1000 with an x86 kernel + 2x dual-core Xeon, my TC rules load in 3 minutes.
On ixgbe with an x86_64 kernel + 4x six-core Xeon, my TC rules take more than 15 minutes to load!
Is this a 64-bit regression?

I can send you the TC rules if you ask! Thanks!
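
For reference, a minimal sketch of how such a load can be timed; rules.txt
is a placeholder file with one rule per line (the arguments normally passed
to tc), and batch mode needs a reasonably recent iproute2:

time tc -batch rules.txt                                   # one process for all rules
time sh -c 'while read -r r; do tc $r; done < rules.txt'   # one fork per rule, for comparison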

Slavon


> Hi Peter
> 
> I tried a pktgen stress test on an 82599EB card and could not split the RX load across multiple CPUs.
> 
> The setup is:
> 
> One 82599 card with fiber0 looped to fiber1, 10Gb link mode.
> The machine is an HP DL380 G6 with dual quad-core E5530 @ 2.4GHz (16 logical CPUs).
> 
> I used one pktgen thread sending to fiber0 with many destination IPs, and checked that fiber1
> was using many RX queues:
> 
> grep fiber1 /proc/interrupts 
> 117:       1301      13060          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-0
> 118:        601       1402          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-1
> 119:        634        832          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-2
> 120:        601       1303          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-3
> 121:        620       1246          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-4
> 122:       1287      13088          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-5
> 123:        606       1354          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-6
> 124:        653        827          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-7
> 125:        639        825          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-8
> 126:        596       1199          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-9
> 127:       2013      24800          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-10
> 128:        648       1353          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-11
> 129:        601       1123          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-12
> 130:        625        834          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-13
> 131:        665       1409          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-14
> 132:       2637      31699          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1-TxRx-15
> 133:          1          0          0          0          0          0          0          0          0          0          0          0          0          0          0          0   PCI-MSI-edge      fiber1:lsc
> 
> 
> 
> But only one CPU (CPU1) had a softirq running at 100%, and many frames were dropped:
> 
> root@demodl380g6:/usr/src# ifconfig fiber0
> fiber0    Link encap:Ethernet  HWaddr 00:1b:21:4a:fe:54  
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:4 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:309291576 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000 
>           RX bytes:1368 (1.3 KB)  TX bytes:18557495682 (18.5 GB)
> 
> root@demodl380g6:/usr/src# ifconfig fiber1
> fiber1    Link encap:Ethernet  HWaddr 00:1b:21:4a:fe:55  
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:55122164 errors:0 dropped:254169411 overruns:0 frame:0
>           TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000 
>           RX bytes:3307330968 (3.3 GB)  TX bytes:1368 (1.3 KB)
> 
> 
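> One way to watch where the softirq time goes while the test runs (a
> sketch; /proc/softirqs needs a recent kernel and mpstat comes from the
> sysstat package):
> 
> watch -n1 'grep NET_RX /proc/softirqs'   # per-CPU NET_RX counters
> mpstat -P ALL 1                          # per-CPU %soft, 1-second samples
> 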
> How and when can multiqueue RX really start to use several CPUs?
> 
> Thanks
> Eric
> 
> 
> pktgen script:
> 
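> #!/bin/sh
> # pgset: write one pktgen command to $PGDEV and print the result line
> # if the command did not finish with "Result: OK:".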
> pgset()
> {
>     local result
> 
>     echo $1 > $PGDEV
> 
>     result=`cat $PGDEV | fgrep "Result: OK:"`
>     if [ "$result" = "" ]; then
>          cat $PGDEV | fgrep Result:
>     fi
> }
> 
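> # pg: send the "inject" command to $PGDEV and dump its state (not used below).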
> pg()
> {
>     echo inject > $PGDEV
>     cat $PGDEV
> }
> 
> 
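> # Bind the device to pktgen kernel thread 4 (pktgen runs one kpktgend_N thread per CPU).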
> PGDEV=/proc/net/pktgen/kpktgend_4
> 
>  echo "Adding fiber0"
>  pgset "add_device fiber0@0"
> 
> 
> CLONE_SKB="clone_skb 15"
> 
> PKT_SIZE="pkt_size 60"
> 
> 
> COUNT="count 100000000"
> DELAY="delay 0"
> 
> PGDEV=/proc/net/pktgen/fiber0@0
>   echo "Configuring $PGDEV"
>  pgset "$COUNT"
>  pgset "$CLONE_SKB"
>  pgset "$PKT_SIZE"
>  pgset "$DELAY"
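> # Spread transmission across the NIC's TX queues 0-7.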
>  pgset "queue_map_min 0"
>  pgset "queue_map_max 7"
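> # Vary the destination IP so the receiver's RSS hash spreads the flows across its RX queues.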
>  pgset "dst_min 192.168.0.2"
>  pgset "dst_max 192.168.0.250"
>  pgset "src_min 192.168.0.1"
>  pgset "src_max 192.168.0.1"
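> # MAC address of fiber1, so the looped-back frames are accepted by the receiver.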
>  pgset "dst_mac  00:1b:21:4a:fe:55"
> 
> 
> # Time to run
> PGDEV=/proc/net/pktgen/pgctrl
> 
>  echo "Running... ctrl^C to stop"
>  pgset "start" 
>  echo "Done"
> 
> # Result can be viewed in /proc/net/pktgen/fiber0@0
> 
> for f in fiber0@0
> do
>  cat /proc/net/pktgen/$f
> done
> 
> 


Thread overview: 66+ messages
2009-11-23  6:46 [PATCH] irq: Add node_affinity CPU masks for smarter irqbalance hints Peter P Waskiewicz Jr
2009-11-23  7:32 ` Yong Zhang
2009-11-23  9:36   ` Peter P Waskiewicz Jr
2009-11-23 10:21     ` ixgbe question Eric Dumazet
2009-11-23 10:30       ` Badalian Vyacheslav [this message]
2009-11-23 10:34       ` Waskiewicz Jr, Peter P
2009-11-23 10:37         ` Eric Dumazet
2009-11-23 14:05           ` Eric Dumazet
2009-11-23 21:26           ` David Miller
2009-11-23 14:10       ` Jesper Dangaard Brouer
2009-11-23 14:38         ` Eric Dumazet
2009-11-23 18:30           ` robert
2009-11-23 16:59             ` Eric Dumazet
2009-11-23 20:54               ` robert
2009-11-23 21:28                 ` David Miller
2009-11-23 22:14                   ` Robert Olsson
2009-11-23 23:28               ` Waskiewicz Jr, Peter P
2009-11-23 23:44                 ` David Miller
2009-11-24  7:46                 ` Eric Dumazet
2009-11-24  8:46                   ` Badalian Vyacheslav
2009-11-24  9:07                   ` Peter P Waskiewicz Jr
2009-11-24  9:55                     ` Eric Dumazet
2009-11-24 10:06                       ` Peter P Waskiewicz Jr
2009-11-24 11:37                         ` [PATCH net-next-2.6] ixgbe: Fix TX stats accounting Eric Dumazet
2009-11-24 13:23                           ` Eric Dumazet
2009-11-25  7:38                             ` Jeff Kirsher
2009-11-25  9:31                               ` Eric Dumazet
2009-11-25  9:38                                 ` Jeff Kirsher
2009-11-24 13:14                         ` ixgbe question John Fastabend
2009-11-29  8:18                           ` David Miller
2009-11-30 13:02                             ` Eric Dumazet
2009-11-30 20:20                               ` John Fastabend
2009-11-26 14:10                       ` Badalian Vyacheslav
2009-11-23 17:05     ` [PATCH] irq: Add node_affinity CPU masks for smarter irqbalance hints Peter Zijlstra
2009-11-23 23:32       ` Waskiewicz Jr, Peter P
2009-11-24  8:38         ` Peter Zijlstra
2009-11-24  8:59           ` Peter P Waskiewicz Jr
2009-11-24  9:08             ` Peter Zijlstra
2009-11-24  9:15               ` Peter P Waskiewicz Jr
2009-11-24 14:43               ` Arjan van de Ven
2009-11-24  9:15             ` Peter Zijlstra
2009-11-24 10:07             ` Thomas Gleixner
2009-11-24 17:55               ` Peter P Waskiewicz Jr
2009-11-25 11:18               ` Peter Zijlstra
2009-11-24  6:07       ` Arjan van de Ven
2009-11-24  8:39         ` Peter Zijlstra
2009-11-24 14:42           ` Arjan van de Ven
2009-11-24 17:39           ` David Miller
2009-11-24 17:56             ` Peter P Waskiewicz Jr
2009-11-24 18:26               ` Eric Dumazet
2009-11-24 18:33                 ` Peter P Waskiewicz Jr
2009-11-24 19:01                   ` Eric Dumazet
2009-11-24 19:53                     ` Peter P Waskiewicz Jr
2009-11-24 18:54                 ` David Miller
2009-11-24 18:58                   ` Eric Dumazet
2009-11-24 20:35                     ` Andi Kleen
2009-11-24 20:46                       ` Eric Dumazet
2009-11-25 10:30                         ` Eric Dumazet
2009-11-25 10:37                           ` Andi Kleen
2009-11-25 11:35                             ` Eric Dumazet
2009-11-25 11:50                               ` Andi Kleen
2009-11-26 11:43                                 ` Eric Dumazet
2009-11-24  5:17     ` Yong Zhang
2009-11-24  8:39       ` Peter P Waskiewicz Jr