From: Jarek Poplawski <jarkao2@gmail.com>
To: David Miller <davem@davemloft.net>
Cc: hadi@cyberus.ca, alexander.duyck@gmail.com,
	jeffrey.t.kirsher@intel.com, jeff@garzik.org,
	netdev@vger.kernel.org, alexander.h.duyck@intel.com
Subject: Re: [PATCH 3/3] pkt_sched: restore multiqueue prio scheduler
Date: Mon, 25 Aug 2008 06:06:40 +0000
Message-ID: <20080825060640.GA2633@ff.dom.local>
In-Reply-To: <20080824.174949.118585414.davem@davemloft.net>

On Sun, Aug 24, 2008 at 05:49:49PM -0700, David Miller wrote:
> From: Jarek Poplawski <jarkao2@gmail.com>
> Date: Sun, 24 Aug 2008 21:19:05 +0200
> 
> > On Sun, Aug 24, 2008 at 09:39:23AM -0400, jamal wrote:
> > ...
> > > With the current controls being per qdisc instead of per netdevice,
> > > the HOL fear is unfounded.
> > > You send, and when the hw can't keep up, you block just the one
> > > hwqueue. While that hwqueue is blocked, you can accumulate packets
> > > in the prio qdisc (hence my statement that it may not be necessary
> > > to accumulate packets in the driver).
> > 
> > Jamal, maybe I'm missing something, but it can work like this only
> > with the default pfifo_fast qdiscs, which really are per dev hwqueue.
> > Other qdiscs, including prio, are per device, so with prio, if a band
> > with the highest priority is blocked, its packet would be requeued,
> > blocking the other bands (hwqueues in Alexander's case).
> 
> It only blocks if the highest priority band's HW queue is blocked, and
> that's what you want to happen.
> 
> Think about it, if the highest priority HW queue is full, queueing
> packets to the lower priority queues won't make anything happen.
> 
> As the highest priority queue opens up and begins to have space,
> we'll feed it high priority packets from the prio qdisc, and so
> on and so forth.

It seems the notion of priority can really be misleading here. Do you
mean these hwqueues are internally prioritized too? That would be
strange to me: why would we need this independent locking per hwqueue
if everything has to wait for the highest-priority hwqueue anyway?
And, if so, wouldn't the current dev_pick_tx() with simple_tx_hash()
always harm some flows by directing them to lower priority hwqueues?!
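
To show what I mean, here is a standalone userspace sketch of that
queue selection (pick_txq() and the flow hashes are made up for
illustration; only the "scale a flow hash into the number of tx
queues" idea matches what simple_tx_hash() does, and note that no
priority enters the picture anywhere):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical pick_txq(): the hwqueue index is just the scaled
 * flow hash, so any flow can land on any hwqueue. */
static unsigned int pick_txq(uint32_t flow_hash, unsigned int num_txq)
{
        return (unsigned int)(((uint64_t)flow_hash * num_txq) >> 32);
}

int main(void)
{
        /* made-up flow hashes mapped onto 4 hwqueues */
        uint32_t flows[] = { 0x12345678, 0x9abcdef0, 0xdeadbeef, 0x0badcafe };
        unsigned int i;

        for (i = 0; i < 4; i++)
                printf("flow %u -> hwqueue %u\n", i, pick_txq(flows[i], 4));
        return 0;
}

If hwqueue 0 were "more important" than hwqueue 3, the flows hashed
to hwqueue 3 would be penalized for no reason at all.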

But even if that's true, let's take a look at fifo: a packet at the
head of the qdisc's queue could be hashed to the last hwqueue. If that
hwqueue is stopped for some reason, this packet would be constantly
requeued, blocking all other packets while their hwqueues are ready
and empty!
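
To make that concrete, here is a toy simulation of the scenario (all
names and numbers are made up; it models only the dequeue/requeue
logic, not any real kernel structures):

#include <stdbool.h>
#include <stdio.h>

#define NUM_TXQ 4

struct pkt {
        int id;
        unsigned int txq;       /* hwqueue this packet hashed to */
};

int main(void)
{
        /* hwqueue 3 is stopped; all the others are ready and empty */
        bool txq_stopped[NUM_TXQ] = { false, false, false, true };

        /* one fifo: the head packet hashed to the stopped hwqueue */
        struct pkt fifo[] = { {0, 3}, {1, 0}, {2, 1}, {3, 2} };
        int head = 0, n = 4, tries;

        for (tries = 0; tries < 3 && head < n; tries++) {
                struct pkt *p = &fifo[head];    /* dequeue the head */

                if (txq_stopped[p->txq]) {
                        /* requeue: the head stays, everyone waits */
                        printf("pkt %d: hwqueue %u stopped, requeued; "
                               "%d packets blocked behind it\n",
                               p->id, p->txq, n - head - 1);
                        continue;
                }
                printf("pkt %d sent on hwqueue %u\n", p->id, p->txq);
                head++;
        }
        return 0;
}

Every pass dequeues the same head packet, sees its hwqueue stopped,
and requeues it, so packets 1-3 never reach their ready and empty
hwqueues.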

Jarek P.


Thread overview: 41+ messages
2008-08-22  0:51 [PATCH 1/3] LRO: fix return code propogation Jeff Kirsher
2008-08-22  0:51 ` [PATCH 2/3] netlink: nal_parse_nested_compat was not parsing nested attributes Jeff Kirsher
2008-08-22 10:18   ` David Miller
2008-08-22 17:40     ` [PATCH 2/3] netlink: nla_parse_nested_compat " Duyck, Alexander H
2008-08-27 14:52       ` Thomas Graf
2008-08-27 18:09         ` Duyck, Alexander H
2008-08-22  0:51 ` [PATCH 3/3] pkt_sched: restore multiqueue prio scheduler Jeff Kirsher
2008-08-22 10:16   ` David Miller
2008-08-22 14:30     ` jamal
2008-08-22 22:19       ` Jarek Poplawski
2008-08-23  0:01         ` Alexander Duyck
2008-08-23  0:40           ` David Miller
2008-08-23  1:37             ` Alexander Duyck
2008-08-23  5:12               ` Herbert Xu
2008-08-23  6:35                 ` Alexander Duyck
2008-08-23  7:07                   ` Herbert Xu
2008-08-23  8:23                   ` David Miller
2008-08-23  8:15               ` David Miller
2008-08-23  0:33         ` David Miller
2008-08-23  8:47           ` Jarek Poplawski
2008-08-23 16:31             ` Alexander Duyck
2008-08-23 16:49               ` jamal
2008-08-23 19:09                 ` Alexander Duyck
2008-08-24  7:53                   ` Jarek Poplawski
2008-08-24 13:39                     ` jamal
2008-08-24 19:19                       ` Jarek Poplawski
2008-08-24 19:27                         ` Jarek Poplawski
2008-08-24 19:59                           ` Jarek Poplawski
2008-08-24 20:18                             ` Jarek Poplawski
2008-08-25  0:50                           ` David Miller
2008-08-25  3:03                             ` Alexander Duyck
2008-08-25  6:16                               ` Jarek Poplawski
2008-08-25  9:36                                 ` Jarek Poplawski
2008-08-25  0:49                         ` David Miller
2008-08-25  6:06                           ` Jarek Poplawski [this message]
2008-08-25  7:48                             ` David Miller
2008-08-25  7:57                               ` Jarek Poplawski
2008-08-25  8:02                                 ` David Miller
2008-08-25  8:25                                   ` Jarek Poplawski
2008-08-25  8:35                                     ` Jarek Poplawski
2008-08-22 10:20 ` [PATCH 1/3] LRO: fix return code propogation David Miller
