From: jamal <hadi@cyberus.ca>
To: Krishna Kumar2 <krkumar2@in.ibm.com>
Cc: David Miller <davem@davemloft.net>,
netdev@vger.kernel.org, Herbert Xu <herbert@gondor.apana.org.au>
Subject: Re: [PATCH] sched: Optimize return value of qdisc_restart
Date: Wed, 09 May 2007 11:52:44 -0400
Message-ID: <1178725965.4058.75.camel@localhost>
In-Reply-To: <OFBBEFE969.8AD5C0AA-ON652572D6.004FC535-652572D6.00513837@in.ibm.com>
Krishna,
On Wed, 2007-05-09 at 20:17 +0530, Krishna Kumar2 wrote:
> Concurrent access is not possible, since everyone needs the queue_lock
> to add/delete. Did I misunderstand your comment?
>
I think so, more below where you explain it:
> The dev->queue_lock is held by both the enqueuer and the dequeuer (though
> the dequeuer drops it before calling xmit). But once the dequeuer
> re-gets the lock, it is guaranteed that no one else has the lock.
> Other CPUs trying to add will block on the lock, or if they have
> already added by getting the lock for a short time while my CPU was
That is how concurrency is achieved on the queue. If you have N CPUs, N-1
could be enqueueing.
Important to note: only the one CPU that owns QDISC_RUNNING can dequeue.
> doing the xmit, then their qdisc_run returns doing nothing as RUNNING
> is true.
>
Lack of ownership of QDISC_RUNNING is what makes them enqueuers. The CPU
that owns it is the dequeuer.
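To make the split concrete, this is roughly what that era's code looks
like (a simplified sketch from memory of net/core/dev.c and
include/net/pkt_sched.h, not a verbatim copy):

	/* enqueue side: any of the N CPUs, serialized by dev->queue_lock */
	spin_lock(&dev->queue_lock);
	rc = q->enqueue(skb, q);
	qdisc_run(dev);
	spin_unlock(&dev->queue_lock);

	/* only the CPU that wins the test_and_set becomes the dequeuer;
	 * the rest return immediately and so remain pure enqueuers */
	static inline void qdisc_run(struct net_device *dev)
	{
		if (!netif_queue_stopped(dev) &&
		    !test_and_set_bit(__LINK_STATE_QDISC_RUNNING,
				      &dev->state))
			__qdisc_run(dev);
	}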
> Since I am holding a lock in these two changed areas till I return
> back to __qdisc_run (which clears the RUNNING bit) and then drop the
> lock, there is no way packets can be on the queue while I falsely
> return 0, or no packets on the queue while I falsely return -1.
>
If you relinquish the dequeuer role by letting go of RUNNING, then it is
possible that during that short window one of the other N-1 CPUs was
enqueueing; that packet will never be dequeued unless a new packet shows
up some X amount of time later.
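In timeline form, the bad case that has to be ruled out looks something
like this (a sketch, not a claim about what your patch does):

	CPU0 (dequeuer, owns RUNNING)    CPU1 (enqueuer)
	-----------------------------    ---------------
	drops queue_lock, does xmit
	                                 takes queue_lock, enqueues skb
	                                 qdisc_run(): RUNNING still set,
	                                   returns without dequeueing
	                                 drops queue_lock
	re-takes queue_lock
	falsely reports "queue empty"
	__qdisc_run() clears RUNNING

	-> CPU1's skb sits in the qdisc until the next transmit kicks
	   qdisc_run() again.

So the value you return has to reflect the queue state as seen under
queue_lock at the moment RUNNING is cleared.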
> I hope my explanation was not confusing.
>
I hope what I described above helps. I am off for about a day. CCing
Herbert, who last made changes to that area, in case I missed something ...
cheers,
jamal
PS: Please don't use my temporary gmail account to respond; a reply-to
will pick the right address (@cyberus.ca).
Thread overview: 22+ messages
2007-05-08 7:31 [PATCH] sched: Optimize return value of qdisc_restart Krishna Kumar
2007-05-09 2:05 ` David Miller
2007-05-09 4:35 ` Krishna Kumar2
2007-05-09 6:36 ` David Miller
2007-05-09 7:23 ` Krishna Kumar2
2007-05-09 8:12 ` David Miller
2007-05-09 12:56 ` jamal
2007-05-09 14:47 ` Krishna Kumar2
2007-05-09 15:52 ` jamal [this message]
2007-05-10 5:12 ` Krishna Kumar2
2007-05-10 11:50 ` Herbert Xu
2007-05-10 11:55 ` David Miller
2007-05-10 12:10 ` Herbert Xu
2007-05-10 21:11 ` David Miller
2007-05-10 12:21 ` jamal
2007-05-10 12:50 ` jamal
2007-05-10 12:59 ` Herbert Xu
2007-05-10 13:18 ` jamal
2007-05-10 13:52 ` Herbert Xu
2007-05-10 14:12 ` jamal
2007-05-10 14:26 ` Krishna Kumar2
2007-05-10 14:31 ` Herbert Xu