From: Radu Rendec <radu.rendec@ines.ro>
To: Jesper Dangaard Brouer <hawk@diku.dk>
Cc: Jarek Poplawski <jarkao2@gmail.com>,
Denys Fedoryschenko <denys@visp.net.lb>,
netdev@vger.kernel.org
Subject: Re: htb parallelism on multi-core platforms
Date: Fri, 24 Apr 2009 12:42:16 +0300 [thread overview]
Message-ID: <1240566136.6554.220.camel@blade.ines.ro> (raw)
In-Reply-To: <Pine.LNX.4.64.0904232148120.13815@ask.diku.dk>
On Thu, 2009-04-23 at 22:19 +0200, Jesper Dangaard Brouer wrote:
> >> It also proves that most of the packet processing work is actually in
> >> htb.
>
> I'm not sure that statement is true.
> Can you run Oprofile on the system? That will tell us exactly where time
> is spent...
I've never used oprofile, but it looks very powerful and simple to use.
I'll compile a 2.6.29 (so that I also benefit from the htb patch you
told me about) then put oprofile on top of it. I'll get back to you by
evening (or maybe Monday noon) with real facts :)
> > ...
> > I thought about using some trick with virtual devs instead, but maybe
> > I'm totally wrong.
>
> I like the idea with virtual devices, as each virtual device could be
> bound to a hardware tx-queue.
Is there any current support for this, or are you suggesting it as an
approach for future development?
The idea looks interesting indeed. If there's current support for it,
I'd like to try it out. If not, perhaps I can help at least with testing
(or even some coding as well).
> Then you just have to construct your HTB trees on each virtual
> device, and assign customers accordingly.
I don't think it's that easy. Let's say we have the same HTB trees on
both virtual devices A and B (each bound to a different hardware tx
queue). If packets for a specific destination IP address
(pseudo)randomly arrive at both A and B, tokens will be extracted from
both A's and B's trees, resulting in an erroneous overall bandwidth (at
worst double the ceil, if packets reach the ceil on both A and B).
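To make the token accounting problem concrete, here is a toy user-space simulation (all rates, packet sizes and tick granularity are invented for illustration, not taken from HTB internals) of duplicated classes on devices A and B, each with its own token bucket:

```python
# Toy model of the double-ceil effect: the same class exists on virtual
# devices A and B, each with an independent token bucket, and traffic
# for one destination is split between them.

class Bucket:
    """Minimal token bucket: `rate` tokens added per tick, capped at `burst`."""
    def __init__(self, rate, burst):
        self.tokens = 0
        self.rate = rate
        self.burst = burst

    def tick(self):
        self.tokens = min(self.tokens + self.rate, self.burst)

    def send(self, size):
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False            # over ceil: packet is held back

def simulate(ticks=1000, ceil=100, pkt=10, offered=100):
    """Offer `offered` packets per tick, alternating between A and B."""
    a, b = Bucket(ceil, ceil), Bucket(ceil, ceil)
    sent = 0
    for _ in range(ticks):
        a.tick()
        b.tick()
        for i in range(offered):
            dev = a if i % 2 else b   # stands in for the (pseudo)random split
            if dev.send(pkt):
                sent += pkt
    return sent // ticks              # average tokens forwarded per tick

print(simulate())  # prints 200: twice the configured ceil of 100
```

Each bucket independently lets through its full ceil, so the customer ends up with the sum of both.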
I have to make sure packets belonging to a certain customer (or ip
address) always come through a specific virtual device. Then HTB trees
don't even need to be identical.
However, this is not trivial at all. A single customer can have
different subnets (even from different class-B networks) but share a
single HTB bucket for all of them. Using a simple hash function on the
IP address to determine which virtual device to send through doesn't
seem to be an option, since it does not guarantee that all packets for
a certain customer end up on the same device.
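One way around the hashing problem (purely an illustrative sketch, not existing kernel code; the customer table and the round-robin assignment policy are invented) is an explicit mapping from customer prefixes to a device index, so that all of a customer's subnets resolve to the same device regardless of how unrelated they are:

```python
import ipaddress

# Hypothetical customer table: a customer may own prefixes from
# unrelated address blocks, but all of them must map to one device.
CUSTOMER_PREFIXES = {
    "cust-a": ["10.1.0.0/24", "172.16.5.0/24"],
    "cust-b": ["10.2.0.0/24", "192.168.9.0/24"],
}

def build_table(num_devices):
    """Assign each customer to one device, then expand to per-prefix rules."""
    table = []
    for idx, (cust, prefixes) in enumerate(sorted(CUSTOMER_PREFIXES.items())):
        dev = idx % num_devices       # any per-customer policy works here
        for p in prefixes:
            table.append((ipaddress.ip_network(p), dev))
    return table

def device_for(table, ip):
    """Linear prefix match; a real classifier would use a trie or u32 hash."""
    addr = ipaddress.ip_address(ip)
    for net, dev in table:
        if addr in net:
            return dev
    return 0                          # default device for unclassified traffic

table = build_table(num_devices=2)
# Both of cust-a's subnets land on the same device:
print(device_for(table, "10.1.0.7") == device_for(table, "172.16.5.9"))
```

Because the device choice is made per customer rather than per address, the per-device HTB trees then only need to contain that device's customers.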
What I had in mind for parallel shaping was this:
NIC0 -> mux -----> Thread 0: classify/shape -----> NIC2
             \/                                \/
             /\                                /\
NIC1 -> mux -----> Thread 1: classify/shape -----> NIC3
Of course the number of input NICs, processing threads, and output NICs
would be adjustable. But this idea has two major problems:
* shaping data must be shared between the processing threads (so that
tokens are extracted from the same bucket regardless of which thread
does the actual processing)
* it seems to be impossible to do this with (unmodified) HTB
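The first problem is essentially a locking question. A toy sketch (user-space Python threads standing in for the per-NIC processing threads; all names and numbers are invented) of what "shared shaping data" would mean:

```python
import threading

# All processing threads consume tokens from ONE shared bucket. The
# explicit lock is exactly the shared state that independent,
# unmodified HTB instances on separate devices do not have.

class SharedBucket:
    def __init__(self, tokens):
        self.tokens = tokens
        self.lock = threading.Lock()

    def send(self, size):
        with self.lock:               # serialize token accounting
            if self.tokens >= size:
                self.tokens -= size
                return True
            return False

def worker(bucket, sent, attempts):
    for _ in range(attempts):
        if bucket.send(1):
            sent.append(1)            # list.append is atomic in CPython

bucket = SharedBucket(tokens=500)
sent = []
threads = [threading.Thread(target=worker, args=(bucket, sent, 1000))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# 2000 attempted sends against one 500-token bucket: the combined
# output of both threads respects the single limit.
print(len(sent))  # prints 500
```

The flip side is that the lock becomes a point of contention between the threads, which is presumably part of why this doesn't map onto HTB as it stands.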
> I just realized, you don't use a multi-queue capable NIC, right?
> Then it would be difficult to use the hardware tx-queue idea.
> Have you thought of using several physical NICs?
The machine we are preparing for production has this:
2 x Intel Corporation 82571EB Gigabit Ethernet Controller
2 x Intel Corporation 80003ES2LAN Gigabit Ethernet Controller
All 4 NICs use the e1000e driver and I think they are multi-queue
capable. So in theory I can use several NICs and/or multi-queue.
Thanks,
Radu Rendec