* [NET]: Prevent multiple qdisc runs
@ 2006-06-19 12:15 Herbert Xu
2006-06-19 13:33 ` jamal
2006-06-20 6:57 ` David Miller
From: Herbert Xu @ 2006-06-19 12:15 UTC (permalink / raw)
To: David S. Miller, netdev
[-- Attachment #1: Type: text/plain, Size: 1415 bytes --]
Hi Dave:
I'm nearly done with the generic segmentation offload stuff (although
only TCPv4 is implemented for now), and I encountered this problem.
[NET]: Prevent multiple qdisc runs
Having two or more qdisc_run's contend against each other is bad because
it can induce packet reordering if the packets have to be requeued. It
appears that this is an unintended consequence of relinquishing the queue
lock while transmitting. That in turn is needed for devices that spend a
lot of time in their transmit routine.
There are no advantages to be had as devices with queues are inherently
single-threaded (the loopback device is not but then it doesn't have a
queue).
Even if you were to add a queue to a parallel virtual device (e.g., bolt
a tbf qdisc in front of an ipip tunnel device), you would still want to
process the queue in sequence to ensure that the packets are ordered
correctly.
The solution here is to steal a bit from net_device to prevent this.
BTW, as qdisc_restart is no longer used by any module inside the kernel
(IIRC it used to be, via netif_wake_queue), I have not exported the new
__qdisc_run function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[-- Attachment #2: qdisc-run.patch --]
[-- Type: text/plain, Size: 2082 bytes --]
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index e432b74..39919c8 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -233,6 +233,7 @@ enum netdev_state_t
 	__LINK_STATE_RX_SCHED,
 	__LINK_STATE_LINKWATCH_PENDING,
 	__LINK_STATE_DORMANT,
+	__LINK_STATE_QDISC_RUNNING,
 };
diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
index b94d1ad..75b5b93 100644
--- a/include/net/pkt_sched.h
+++ b/include/net/pkt_sched.h
@@ -218,12 +218,13 @@ extern struct qdisc_rate_table *qdisc_ge
 		struct rtattr *tab);
 extern void qdisc_put_rtab(struct qdisc_rate_table *tab);
 
-extern int qdisc_restart(struct net_device *dev);
+extern void __qdisc_run(struct net_device *dev);
 
 static inline void qdisc_run(struct net_device *dev)
 {
-	while (!netif_queue_stopped(dev) && qdisc_restart(dev) < 0)
-		/* NOTHING */;
+	if (!netif_queue_stopped(dev) &&
+	    !test_and_set_bit(__LINK_STATE_QDISC_RUNNING, &dev->state))
+		__qdisc_run(dev);
 }
 
 extern int tc_classify(struct sk_buff *skb, struct tcf_proto *tp,
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index b1e4c5e..d7aca8e 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -90,7 +90,7 @@ void qdisc_unlock_tree(struct net_device
    NOTE: Called under dev->queue_lock with locally disabled BH.
 */
 
-int qdisc_restart(struct net_device *dev)
+static inline int qdisc_restart(struct net_device *dev)
 {
 	struct Qdisc *q = dev->qdisc;
 	struct sk_buff *skb;
@@ -179,6 +179,14 @@ requeue:
 	return q->q.qlen;
 }
 
+void __qdisc_run(struct net_device *dev)
+{
+	while (qdisc_restart(dev) < 0 && !netif_queue_stopped(dev))
+		/* NOTHING */;
+
+	clear_bit(__LINK_STATE_QDISC_RUNNING, &dev->state);
+}
+
 static void dev_watchdog(unsigned long arg)
 {
 	struct net_device *dev = (struct net_device *)arg;
@@ -620,6 +628,5 @@ EXPORT_SYMBOL(qdisc_create_dflt);
 EXPORT_SYMBOL(qdisc_alloc);
 EXPORT_SYMBOL(qdisc_destroy);
 EXPORT_SYMBOL(qdisc_reset);
-EXPORT_SYMBOL(qdisc_restart);
 EXPORT_SYMBOL(qdisc_lock_tree);
 EXPORT_SYMBOL(qdisc_unlock_tree);
* Re: [NET]: Prevent multiple qdisc runs
2006-06-19 12:15 [NET]: Prevent multiple qdisc runs Herbert Xu
@ 2006-06-19 13:33 ` jamal
2006-06-19 13:42 ` Herbert Xu
2006-06-20 6:57 ` David Miller
From: jamal @ 2006-06-19 13:33 UTC (permalink / raw)
To: Herbert Xu; +Cc: netdev, David S. Miller
Herbert,
I take it you saw a lot of requeues happening that prompted this? What
were the circumstances? The _only_ times I have seen it happen are when
the (PCI) bus couldn't handle the incoming rate or there was a bug in the
driver.
Also: what happens to a packet that either comes in locally or is being
forwarded and finds the qdisc_is_running flag set? I couldn't tell if the
intent was to drop it or not. The answer for TCP is probably simpler than
for packets being forwarded.
cheers,
jamal
On Mon, 2006-19-06 at 22:15 +1000, Herbert Xu wrote:
> Hi Dave:
>
> I'm nearly done with the generic segmentation offload stuff (although
> only TCPv4 is implemented for now), and I encountered this problem.
>
> [NET]: Prevent multiple qdisc runs
>
* Re: [NET]: Prevent multiple qdisc runs
2006-06-19 13:33 ` jamal
@ 2006-06-19 13:42 ` Herbert Xu
2006-06-19 14:23 ` jamal
From: Herbert Xu @ 2006-06-19 13:42 UTC (permalink / raw)
To: jamal; +Cc: netdev, David S. Miller
Hi Jamal:
On Mon, Jun 19, 2006 at 09:33:51AM -0400, jamal wrote:
>
> I take it you saw a lot of requeues happening that prompted this? What
> were the circumstances? The _only_ times I have seen it happen are when
> the (PCI) bus couldn't handle the incoming rate or there was a bug in the
> driver.
Actually I discovered the problem only because the generic segmentation
offload stuff that I'm working on needs to deal with the situation where
a super-packet is partially transmitted. Requeueing causes all sorts of
nasty problems so I chose to keep it within the net_device structure.
To do so requires qdisc_run invocations to be serialised against each
other. I then found out that we want this anyway, because otherwise the
requeued packets could be reordered.
> Also: what happens to a packet that either comes in locally or is being
> forwarded and finds the qdisc_is_running flag set? I couldn't tell if the
> intent was to drop it or not. The answer for TCP is probably simpler than
> for packets being forwarded.
The qdisc_is_running bit only prevents qdisc_run from occurring (because
it's already running); it does not affect the queueing of the packet.
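To make that concrete, here is a simplified sketch (paraphrased, not the
exact source, and with details such as RCU, the no-queue case and
return-code mapping omitted) of the dev_queue_xmit fast path for a device
with a queue; the enqueue is unconditional, and a packet that finds the
flag set is simply drained by the qdisc_run loop that is already in
progress:

	static int dev_queue_xmit_sketch(struct sk_buff *skb,
					 struct net_device *dev)
	{
		struct Qdisc *q;
		int rc;

		spin_lock_bh(&dev->queue_lock);
		q = dev->qdisc;
		rc = q->enqueue(skb, q);	/* packet is always queued */
		qdisc_run(dev);			/* returns at once if another
						 * CPU already set the
						 * QDISC_RUNNING bit */
		spin_unlock_bh(&dev->queue_lock);
		return rc;
	}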
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [NET]: Prevent multiple qdisc runs
2006-06-19 13:42 ` Herbert Xu
@ 2006-06-19 14:23 ` jamal
2006-06-19 14:29 ` Herbert Xu
From: jamal @ 2006-06-19 14:23 UTC (permalink / raw)
To: Herbert Xu; +Cc: David S. Miller, netdev
Herbert,
On Mon, 2006-19-06 at 23:42 +1000, Herbert Xu wrote:
> Hi Jamal:
>
> On Mon, Jun 19, 2006 at 09:33:51AM -0400, jamal wrote:
[..]
>
> Actually I discovered the problem only because the generic segmentation
> offload stuff that I'm working on needs to deal with the situation where
> a super-packet is partially transmitted. Requeueing causes all sorts of
> nasty problems so I chose to keep it within the net_device structure.
>
> To do so requires qdisc_run invocations to be serialised against each
> other. I then found out that we want this anyway, because otherwise the
> requeued packets could be reordered.
>
OK, I am trying to visualize this but having a hard time: re-queueing is
done at the front of the queue to maintain ordering, whereas queueing is
done at the back (i.e., it is a FIFO). I.e., even if p2 comes in and gets
queued while p1 is being processed, requeueing of p1 will put it in front
of p2.
Your super-packet issue may be different though ..
> > Also: what happens to a packet that either comes in locally or is being
> > forwarded and finds the qdisc_is_running flag set? I couldn't tell if the
> > intent was to drop it or not. The answer for TCP is probably simpler than
> > for packets being forwarded.
>
> The qdisc_is_running bit only prevents qdisc_run from occurring (because
> it's already running); it does not affect the queueing of the packet.
>
I will wait for your answer on the other part before responding to this.
cheers,
jamal
* Re: [NET]: Prevent multiple qdisc runs
2006-06-19 14:23 ` jamal
@ 2006-06-19 14:29 ` Herbert Xu
2006-06-19 14:36 ` jamal
From: Herbert Xu @ 2006-06-19 14:29 UTC (permalink / raw)
To: jamal; +Cc: David S. Miller, netdev
On Mon, Jun 19, 2006 at 10:23:29AM -0400, jamal wrote:
>
> OK, I am trying to visualize this but having a hard time: re-queueing is
> done at the front of the queue to maintain ordering, whereas queueing is
> done at the back (i.e., it is a FIFO). I.e., even if p2 comes in and gets
> queued while p1 is being processed, requeueing of p1 will put it in front
> of p2.
Correct. When qdisc_run happens we take an skb off the head of the
queue. If it can't be transmitted right away, we try to put it back
in the same spot.
If you have two qdisc_run's happening at the same time then that spot
could be different.
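To see where the window opens, here is a stripped-down paraphrase of
qdisc_restart (illustration only; the real function also handles trylock
collisions, stopped queues, lockless drivers and so on):

	static int qdisc_restart_sketch(struct net_device *dev)
	{
		struct Qdisc *q = dev->qdisc;
		struct sk_buff *skb;

		/* entered with dev->queue_lock held */
		skb = q->dequeue(q);
		if (!skb)
			return 0;

		if (spin_trylock(&dev->xmit_lock)) {
			/* queue_lock is dropped here, so a second qdisc_run
			 * on another CPU can dequeue the next packet */
			spin_unlock(&dev->queue_lock);
			if (dev->hard_start_xmit(skb, dev) == NETDEV_TX_OK) {
				spin_unlock(&dev->xmit_lock);
				spin_lock(&dev->queue_lock);
				return -1;	/* keep draining */
			}
			spin_unlock(&dev->xmit_lock);
			spin_lock(&dev->queue_lock);
		}

		/* requeue at the head; if the other CPU requeued its own skb
		 * in the meantime, "the head" is no longer the spot this skb
		 * came from */
		q->ops->requeue(skb, q);
		netif_schedule(dev);
		return 0;
	}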
> Your super-packet issue may be different though ..
The reordering issue is not related to super-packets.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [NET]: Prevent multiple qdisc runs
2006-06-19 14:29 ` Herbert Xu
@ 2006-06-19 14:36 ` jamal
2006-06-19 22:33 ` Herbert Xu
From: jamal @ 2006-06-19 14:36 UTC (permalink / raw)
To: Herbert Xu; +Cc: netdev, David S. Miller
On Tue, 2006-20-06 at 00:29 +1000, Herbert Xu wrote:
> Correct. When qdisc_run happens we take an skb off the head of the
> queue. If it can't be transmitted right away, we try to put it back
> in the same spot.
>
> If you have two qdisc_run's happening at the same time then that spot
> could be different.
>
OK, but:
The queue lock will ensure only one of the qdisc runs (assuming different
CPUs) will be able to dequeue at any one iota in time, no? And if you
assume that the CPU that manages to get the tx lock as well is going to be
contending for the qlock in order to requeue, then the only scenario in
which I can see the race happening is when you have one CPU faster than
the other.
Did I miss something?
cheers,
jamal
* Re: [NET]: Prevent multiple qdisc runs
2006-06-19 14:36 ` jamal
@ 2006-06-19 22:33 ` Herbert Xu
2006-06-20 14:42 ` jamal
From: Herbert Xu @ 2006-06-19 22:33 UTC (permalink / raw)
To: jamal; +Cc: netdev, David S. Miller
On Mon, Jun 19, 2006 at 10:36:50AM -0400, jamal wrote:
>
> OK, but:
> The queue lock will ensure only one of the qdisc runs (assuming different
> CPUs) will be able to dequeue at any one iota in time, no? And if you
> assume that the CPU that manages to get the tx lock as well is going to be
> contending for the qlock in order to requeue, then the only scenario in
> which I can see the race happening is when you have one CPU faster than
> the other.
> Did I miss something?
First of all you could receive an IRQ in between dropping xmit_lock
and regaining the queue lock. Secondly we now have lockless drivers
where this assumption also does not hold.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [NET]: Prevent multiple qdisc runs
2006-06-19 12:15 [NET]: Prevent multiple qdisc runs Herbert Xu
2006-06-19 13:33 ` jamal
@ 2006-06-20 6:57 ` David Miller
2006-06-20 7:00 ` Herbert Xu
From: David Miller @ 2006-06-20 6:57 UTC (permalink / raw)
To: herbert; +Cc: netdev
From: Herbert Xu <herbert@gondor.apana.org.au>
Date: Mon, 19 Jun 2006 22:15:19 +1000
> [NET]: Prevent multiple qdisc runs
I have no real objection to this semantically.
But this is yet another atomic operation on the transmit
path :-( This problem, however, is inevitable because of
how we do things and thus isn't the fault of your change.
I'm going to apply this patch to 2.6.18, however... we should split
up the dev->state handling into separate cacheline synchronizers.
Sharing RX and TX locking bits in the same word is not all that
efficient.
Thanks.
* Re: [NET]: Prevent multiple qdisc runs
2006-06-20 6:57 ` David Miller
@ 2006-06-20 7:00 ` Herbert Xu
From: Herbert Xu @ 2006-06-20 7:00 UTC (permalink / raw)
To: David Miller; +Cc: netdev
On Mon, Jun 19, 2006 at 11:57:19PM -0700, David Miller wrote:
>
> But this is yet another atomic operation on the transmit
> path :-( This problem, however, is inevitable because of
> how we do things and thus isn't the fault of your change.
>
> I'm going to apply this patch to 2.6.18, however... we should split
> up the dev->state handling into separate cacheline synchronizers.
> Sharing RX and TX locking bits in the same word is not all that
> efficient.
Good point. This particular bit doesn't even need to be atomic since
it's sitting inside a spinlock.
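For instance (just a sketch of the idea, not something I've tested): both
the test-and-set in qdisc_run and the clear at the end of __qdisc_run
happen with dev->queue_lock held, so the non-atomic bitops would be
enough:

	static inline void qdisc_run(struct net_device *dev)
	{
		/* caller holds dev->queue_lock */
		if (!netif_queue_stopped(dev) &&
		    !__test_and_set_bit(__LINK_STATE_QDISC_RUNNING,
					&dev->state))
			__qdisc_run(dev);
	}

	void __qdisc_run(struct net_device *dev)
	{
		while (qdisc_restart(dev) < 0 && !netif_queue_stopped(dev))
			/* NOTHING */;

		/* qdisc_restart returns with dev->queue_lock re-taken */
		__clear_bit(__LINK_STATE_QDISC_RUNNING, &dev->state);
	}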
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [NET]: Prevent multiple qdisc runs
2006-06-19 22:33 ` Herbert Xu
@ 2006-06-20 14:42 ` jamal
2006-06-20 23:52 ` Herbert Xu
From: jamal @ 2006-06-20 14:42 UTC (permalink / raw)
To: Herbert Xu; +Cc: Robert Olsson, David S. Miller, netdev
Herbert,
Thanks for your patience.
On Tue, 2006-20-06 at 08:33 +1000, Herbert Xu wrote:
> First of all you could receive an IRQ in between dropping xmit_lock
> and regaining the queue lock.
Indeed you could. Sorry, I overlooked that in my earlier email. This
issue has been there forever though, and I don't mean to dilute its
significance by saying the chances of it happening are very, very slim. I
claim though that you will be _unable to reproduce this in an
experimental setup_, i.e., that's how complex it is.
> Secondly we now have lockless drivers where this assumption also
> does not hold.
OK, I forgot about lockless drivers.
The chances are certainly much higher with a lockless driver, for a very
simple reason: we used to have a lock ordering that is now changed for
lockless drivers. I.e., we had:
1) grab qlock,
2) dequeue,
3) grab txlock,
4) release qlock,
5) transmit,
6) release txlock,
which now becomes the sequence #1, #2, #4, #3, #5, #6, and at times that
same replacement txlock is also used in the rx path to guard the tx DMA
(see the sketch below).
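Roughly, as a sketch of the two orderings (illustrative only, not actual
core or driver code):

	/* classic ordering: the core takes dev->xmit_lock before it gives
	 * up the queue lock */
	static void xmit_classic_sketch(struct net_device *dev)
	{
		struct sk_buff *skb;

		spin_lock(&dev->queue_lock);		/* 1) grab qlock     */
		skb = dev->qdisc->dequeue(dev->qdisc);	/* 2) dequeue        */
		if (!skb) {
			spin_unlock(&dev->queue_lock);
			return;
		}
		spin_lock(&dev->xmit_lock);		/* 3) grab txlock    */
		spin_unlock(&dev->queue_lock);		/* 4) release qlock  */
		dev->hard_start_xmit(skb, dev);		/* 5) transmit       */
		spin_unlock(&dev->xmit_lock);		/* 6) release txlock */
	}

	/* LLTX ordering: qlock is released before any tx lock is taken; the
	 * driver grabs its own private lock inside hard_start_xmit, and that
	 * same lock may also guard the tx DMA ring from the rx path */
	static void xmit_lockless_sketch(struct net_device *dev)
	{
		struct sk_buff *skb;

		spin_lock(&dev->queue_lock);		/* 1 */
		skb = dev->qdisc->dequeue(dev->qdisc);	/* 2 */
		spin_unlock(&dev->queue_lock);		/* 4 */
		if (skb)
			dev->hard_start_xmit(skb, dev);	/* 3, 5, 6 inside */
	}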
A possible solution is to alias the tx lock to be dev->txlock (DaveM had
pointed out he didn't like this approach; I can't remember the details).
Here's where I am coming from (you may have suspected it already):
my concern is that I am not sure what the performance implications of
this change are (yes, there goes that soup^Wperformance nazi again), or
what the impact is on how good the qos granularity is any longer [1].
If it is to make lock-less drivers happy, then someone oughta validate
whether the performance benefit that lockless drivers give still exists.
I almost feel like we gained 5% from lockless driving and lost 10% for
everyone else trying to fix the sins of lockless driving. So I am unsure
of the net gain.
I apologize for hand-waving with % numbers above and using gut feeling
instead of experimental facts; I don't have time to chase it. I have
CCed Robert, who may have time to see if this impacts forwarding
performance, for one. I will have more peace of mind if we find out there
is no impact.
cheers,
jamal
[1] By having both the forwarding path and the tx softirq from multiple
CPUs enter this qdisc_run path, the chances that a packet will be dequeued
successfully and sent out within a reasonable time are higher.
The tx_collision vs. tx success counts are a good measure of how lucky you
get. This improves the timeliness and granularity of qos, for one. What
your patch does is reduce the granularity/possibility that we may enter
that region sooner.
* Re: [NET]: Prevent multiple qdisc runs
2006-06-20 14:42 ` jamal
@ 2006-06-20 23:52 ` Herbert Xu
2006-06-22 19:31 ` jamal
From: Herbert Xu @ 2006-06-20 23:52 UTC (permalink / raw)
To: jamal; +Cc: Robert Olsson, David S. Miller, netdev
On Tue, Jun 20, 2006 at 10:42:06AM -0400, jamal wrote:
>
> I apologize for hand-waving with % numbers above and using gut feeling
> instead of experimental facts; I don't have time to chase it. I have
> CCed Robert, who may have time to see if this impacts forwarding
> performance, for one. I will have more peace of mind if we find out there
> is no impact.
Well my gut feeling is that multiple qdisc_run's on the same dev can't
be good for performance. The reason is that SMP is only good when the
CPUs work on different tasks. If you get two or more CPUs to work on
qdisc_run at the same time they can still only supply one skb to the
device at any time. What's worse is that they will now have to fight
over the two spin locks involved which means that their cache lines
will bounce back and forth.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [NET]: Prevent multiple qdisc runs
2006-06-20 23:52 ` Herbert Xu
@ 2006-06-22 19:31 ` jamal
2006-06-22 22:43 ` Herbert Xu
From: jamal @ 2006-06-22 19:31 UTC (permalink / raw)
To: Herbert Xu; +Cc: netdev, David S. Miller, Robert Olsson
On Wed, 2006-21-06 at 09:52 +1000, Herbert Xu wrote:
> Well my gut feeling is that multiple qdisc_run's on the same dev can't
> be good for performance. The reason is that SMP is only good when the
> CPUs work on different tasks. If you get two or more CPUs to work on
> qdisc_run at the same time they can still only supply one skb to the
> device at any time. What's worse is that they will now have to fight
> over the two spin locks involved which means that their cache lines
> will bounce back and forth.
1) If the CPUs collide all the time, it is without a doubt a bad thing
(you can tell from the tx_collision stats).
2) If, on the other hand, at the instant a CPU enters that path in the
softirq it gets the txlock, then there is a benefit to not serializing
at the level you have done with that patch; you are enlarging the
granularity of the serialization so much that the CPU won't even get the
opportunity to try and grab the tx lock, because it finds qdisc is
already running.
Your gut feeling is for #1 and my worry is for #2 ;->
I actually think your change is obviously valuable for scenarios where
the bus is slower and therefore transmits take longer; my feeling is it
may not be beneficial for fast buses like PCI-E or high-speed PCI-X,
where the possibility of a tx collision is lower.
The other reason I mentioned earlier as justification for leaving the
granularity at the level where it was is good qos clocking, i.e., to allow
incoming packets to be used to clock the tx path; otherwise you will be
dependent on HZ for your egress rate accuracy. I am not sure if this
latter point made sense; I could elaborate.
The experiment needed to prove things is not hard: one needs to get a 2-
or 4-way machine and create a "funneling" effect towards one NIC.
For forwarding, the best setup will be to have 3 NICs. Packets coming in
on 2 NICs are forwarded to a third. The incoming-packet NICs are tied to
different CPUs. On a 4-way, the outgoing NIC as well is tied to its own
CPU. You then pummel the two incoming NICs with pktgen or otherwise at
something like 1 Mpps (which is higher than the wire rate the third NIC
can handle).
cheers,
jamal
* Re: [NET]: Prevent multiple qdisc runs
2006-06-22 19:31 ` jamal
@ 2006-06-22 22:43 ` Herbert Xu
2006-06-23 0:52 ` jamal
From: Herbert Xu @ 2006-06-22 22:43 UTC (permalink / raw)
To: jamal; +Cc: netdev, David S. Miller, Robert Olsson
On Thu, Jun 22, 2006 at 03:31:22PM -0400, jamal wrote:
>
> Your gut feeling is for #1 and my worry is for #2 ;->
> I actually think your change is obviously valuable for scenarios where
> the bus is slower and therefore transmits take longer; my feeling is it
> may not be beneficial for fast buses like PCI-E or high-speed PCI-X,
> where the possibility of a tx collision is lower.
Sure. However, I still don't see the point of transmitting in parallel
even there. The reason is that there is no work being done here by the
CPU between dequeueing the packet and obtaining the TX lock. As such
the cost of doing it in parallel is going to be dominated by the cache
bouncing.
Obviously it is a little different for lockless drivers where we do
dev_queue_xmit_nit (and now GSO) without any locks. However, you don't
want parallelism there because it breaks packet ordering.
> The other reason I mentioned earlier as justification for leaving the
> granularity at the level where it was is good qos clocking, i.e., to allow
> incoming packets to be used to clock the tx path; otherwise you will be
> dependent on HZ for your egress rate accuracy. I am not sure if this
> latter point made sense; I could elaborate.
I don't understand where HZ comes in. If you find that qdisc_run is
already running, then the packet you've just queued will most likely
be processed by that qdisc_run immediately unless the device is full.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [NET]: Prevent multiple qdisc runs
2006-06-22 22:43 ` Herbert Xu
@ 2006-06-23 0:52 ` jamal
2006-06-23 3:35 ` Herbert Xu
From: jamal @ 2006-06-23 0:52 UTC (permalink / raw)
To: Herbert Xu; +Cc: Robert Olsson, David S. Miller, netdev
On Fri, 2006-23-06 at 08:43 +1000, Herbert Xu wrote:
> Sure. However, I still don't see the point of transmitting in parallel
> even there. The reason is that there is no work being done here by the
> CPU between dequeueing the packet and obtaining the TX lock.
You make a reasonable argument.
I think I will need to run the experiment to gather facts. I will defer
this discussion until I run some tests or Robert beats me to it.
It does feel, though, like qdisc_is_running is now a replacement for the
need for dev->txlock, which existed to protect multiple CPUs from entering
the device transmit path. Is that an unintended side effect? I.e., why
would dev->txlock be needed anymore in that path?
cheers,
jamal
* Re: [NET]: Prevent multiple qdisc runs
2006-06-23 0:52 ` jamal
@ 2006-06-23 3:35 ` Herbert Xu
2006-06-24 13:50 ` Jamal Hadi Salim
From: Herbert Xu @ 2006-06-23 3:35 UTC (permalink / raw)
To: jamal; +Cc: Robert Olsson, David S. Miller, netdev
On Thu, Jun 22, 2006 at 08:52:17PM -0400, jamal wrote:
>
> It does feel, though, like qdisc_is_running is now a replacement for the
> need for dev->txlock, which existed to protect multiple CPUs from entering
> the device transmit path. Is that an unintended side effect? I.e., why
> would dev->txlock be needed anymore in that path?
It's not totally redundant yet since you can set tx_queue_len to zero.
It also still protects against the asynchronous paths that take xmit_lock.
However, it wouldn't be a bad idea to see if there is a way to reduce the
number of locks required on the xmit path to one.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [NET]: Prevent multiple qdisc runs
2006-06-23 3:35 ` Herbert Xu
@ 2006-06-24 13:50 ` Jamal Hadi Salim
From: Jamal Hadi Salim @ 2006-06-24 13:50 UTC (permalink / raw)
To: Herbert Xu; +Cc: netdev, David S. Miller, Robert Olsson
On Fri, 2006-23-06 at 13:35 +1000, Herbert Xu wrote:
> On Thu, Jun 22, 2006 at 08:52:17PM -0400, jamal wrote:
> >
> > It does feel, though, like qdisc_is_running is now a replacement for the
> > need for dev->txlock, which existed to protect multiple CPUs from entering
> > the device transmit path. Is that an unintended side effect? I.e., why
> > would dev->txlock be needed anymore in that path?
>
> It's not totally redundant yet since you can set tx_queue_len to zero.
> It also still protects against the asynchronous paths that take xmit_lock.
The tx_timeout?
> However, it wouldn't be a bad idea to see if there is a way to reduce the
> number of locks required on the xmit path to one.
It also seems to be there; don't know why it took me so long to see
this. I am actually gut-feeling now that we may get better performance
with LLTX drivers with this approach ;-> Talk about a flip-flop.
The qualification for LLTX seems to be for a driver to have a private tx
lock, which is the case for about every Ethernet driver out there.
cheers,
jamal