From: jamal <hadi@cyberus.ca>
To: "Waskiewicz Jr, Peter P" <peter.p.waskiewicz.jr@intel.com>
Cc: Patrick McHardy <kaber@trash.net>,
Stephen Hemminger <shemminger@linux-foundation.org>,
netdev@vger.kernel.org, jgarzik@pobox.com,
cramerj <cramerj@intel.com>,
"Kok, Auke-jan H" <auke-jan.h.kok@intel.com>,
"Leech, Christopher" <christopher.leech@intel.com>,
davem@davemloft.net
Subject: RE: [PATCH] IPROUTE: Modify tc for new PRIO multiqueue behavior
Date: Fri, 27 Apr 2007 11:09:58 -0400
Message-ID: <1177686598.4059.79.camel@localhost>
In-Reply-To: <D5C1322C3E673F459512FB59E0DDC32902B9710F@orsmsx414.amr.corp.intel.com>
On Thu, 2007-26-04 at 09:30 -0700, Waskiewicz Jr, Peter P wrote:
> > jamal wrote:
> > > On Wed, 2007-25-04 at 10:45 -0700, Waskiewicz Jr, Peter P wrote:
> We have plans to write a new qdisc that has no priority given to any
> skb's being sent to the driver. The reasoning for providing a
> multiqueue mode for PRIO is it's a well-known qdisc, so the hope was
> people could quickly associate with what's going on. The other
> reasoning is we wanted to provide a way to prioritize various network
> flows (ala PRIO), and since hardware doesn't currently exist that
> provides flow prioritization, we decided to allow it to continue
> happening in software.
>
Reading the above validates my fears that we have some strong
differences (refer to my email to Patrick). To be fair to you, I would
have to look at your patches. Now I am actually thinking of not looking
at them at all in case they influence me ;->
I think the thing for me to do is provide alternative patches, and then
we can have a smoother discussion.
The way I see it, you don't touch any qdisc code. The qdiscs provided
by Linux cover a majority of those provided by hardware. (Heck, I was
involved with an Ethernet switch chip from your company that provided
strict prio multiqueues in hardware and didn't need to touch the qdisc
code.)
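For illustration only (not from your patches), here is roughly what a
plain software PRIO setup looks like with today's tc; eth0 and the
dport-22 match are just placeholders:

  tc qdisc add dev eth0 root handle 1: prio bands 3 \
          priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
  # steer one flow into band 0, the highest-priority band
  tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 \
          match ip dport 22 0xffff flowid 1:1

Nothing in that setup needs to know how many rings the hardware has.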
> >
> > > The driver should be configurable to be X num of queues via probably
> > > ethtool. It should default to single ring to maintain old behavior.
> >
> >
> > That would probably make sense in either case.
>
> This shouldn't be something enforced by the OS, rather, an
> implementation detail for the driver you write. If you want this to be
> something to be configured at run-time, on the fly, then the OS would
> need to support it. However, I'd rather see people try the multiqueue
> support as-is first to make sure the simple things work as expected,
> then we can get into run-time reconfiguration issues (like queue
> draining if you shrink available queues, etc.). This will also require
> some heavy lifting by the driver to tear down queues, etc.
>
It could probably be a module insertion/boot-time operation.
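Something like this, purely as an illustration -- the num_tx_queues
parameter name is made up here, not an option any existing driver has:

  # pick the number of TX rings at module load time
  modprobe mydrv num_tx_queues=4

  # omit the parameter for the old single-ring default
  modprobe mydrv

Run-time reconfiguration (and the queue draining you mention) can then
stay a separate, later problem for the driver.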
> >
> > > Ok, i see; none of those other intel people put you through the hazing
> > > yet? ;-> This is a netdev matter - so i have taken off lkml
> > >
>
> I appreciate the desire to lower clutter from mailing lists, but I see
> 'tc' as a kernel configuration utility, and as such, people should know
> what we're doing outside of netdev, IMO. But I'm fine with keeping this
> off lkml if that's what people think.
>
All of netdev has to do with the kernel - that doesn't justify
cross-posting.
People interested in network-related subsystem development will
subscribe to netdev; interest in SCSI => subscribe to the SCSI mailing
lists, etc.
cheers,