* drop all fragments inside tx queue if one gets dropped
From: Alexander Aring @ 2016-04-20  9:52 UTC
  To: netdev; +Cc: linux-wpan

Hi,

On linux-wpan we had a discussion about setting the right tx_queue_len
and ran into some issues with 802.15.4 6LoWPAN networks.

Our hardware parameters are:

 - Bandwidth: 250 kbit/s
 - One frame buffer on the hardware side for transmitting a single frame.
 - MTU: 127 bytes (without MAC headers)
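
Rough airtime for these numbers (back-of-the-envelope only, ignoring PHY
preamble, inter-frame spacing and ACK overhead):

/* Airtime of one maximum-size 802.15.4 frame at 250 kbit/s.
 * Sketch only: ignores PHY preamble/SFD, IFS and ACKs. */
#include <stdio.h>

int main(void)
{
    const double bitrate_bps = 250000.0; /* radio data rate    */
    const int frame_bytes    = 127;      /* maximum frame size */
    double airtime_ms = frame_bytes * 8 / bitrate_bps * 1000.0;

    printf("one %d-byte frame is ~%.2f ms on air\n", frame_bytes, airtime_ms);
    return 0;
}

So every full frame costs roughly 4 ms of airtime before the transceiver's
single buffer is free again.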

To provide 6LoWPAN (IPv6) on such hardware, we have two interfaces.

One wpan interface (which works at the 802.15.4 layer and has a queue) and
another lowpan interface (which takes IPv6 packets and queues the resulting
6LoWPAN frames onto the wpan interface; it has no queue of its own - it's a
virtual interface).

If an IPv6 packet needs 6LoWPAN fragmentation (which happens for most
packets, since at most 127 bytes of payload fit into one frame), we have
the following situation:

 - 6lowpan interface gets IPv6 packet:
   - generate 6LoWPAN fragments
     - dev_queue_xmit(wpan_dev, frag1)
     - dev_queue_xmit(wpan_dev, frag2)
     - dev_queue_xmit(wpan_dev, frag3)
     - dev_queue_xmit(wpan_dev, ...)

A lot of fragments then sit in the tx_queue, waiting to be transferred to
the transceiver, which has only one frame buffer: it transmits one frame
and waits for tx completion before the next one can be transferred.
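
To put rough numbers on that (the ~96 bytes of usable payload per fragment
is an assumption on my part; the exact value depends on MAC addressing
mode, security headers and FRAG1 vs FRAGN):

/* Rough fragment count and airtime per IPv6 packet.  Sketch only:
 * frag_payload is an assumed value, not something read from the stack. */
#include <stdio.h>

int main(void)
{
    const int ipv6_len     = 1280; /* minimum IPv6 MTU we must support  */
    const int frag_payload = 96;   /* ASSUMED usable bytes per fragment */
    const double frame_ms  = 4.1;  /* ~127-byte frame at 250 kbit/s     */

    int frags = (ipv6_len + frag_payload - 1) / frag_payload;

    printf("%d-byte packet -> %d fragments -> ~%.0f ms of airtime\n",
           ipv6_len, frags, frags * frame_ms);
    return 0;
}

So a single full-size IPv6 packet easily means a dozen or more back-to-back
dev_queue_xmit() calls and tens of milliseconds of airtime.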

My question is: if the qdisc drops a fragment because the queue is full (or
for some other reason), is there a way to remove all related fragments from
the queue as well? If one fragment is dropped while its siblings are still
queued, we mostly send garbage.

As a first step I want to add a behaviour which drops all fragments that
belong to the same 6LoWPAN datagram. If the payload is above 1280 bytes we
also have IPv6 fragmentation on top of that; in the future I would also
like to remove all 6LoWPAN fragments which belong to the same IPv6
fragment.
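
A user-space sketch of what I have in mind (made-up names, not kernel
code): every queued fragment remembers the datagram it belongs to via a
tag, and once one fragment of that datagram is dropped, all of its queued
siblings are purged too, because the datagram can no longer be reassembled:

/* User-space sketch (made-up names, not kernel code): a tx queue whose
 * entries remember their 6LoWPAN datagram tag, plus a purge of all
 * queued siblings once one fragment of that datagram is dropped. */
#include <stdio.h>
#include <stdlib.h>

struct frag {
    unsigned int dgram_tag;  /* identifies the original IPv6 packet */
    unsigned int offset;     /* fragment offset, for illustration   */
    struct frag *next;
};

static struct frag *txq;

static void enqueue(unsigned int tag, unsigned int offset)
{
    struct frag *f = malloc(sizeof(*f));

    if (!f)
        exit(1);
    f->dgram_tag = tag;
    f->offset = offset;
    f->next = txq;           /* queue order does not matter for the sketch */
    txq = f;
}

/* One fragment of datagram 'tag' had to be dropped: throw away every
 * queued sibling too, as the datagram can no longer be reassembled. */
static void purge_datagram(unsigned int tag)
{
    struct frag **pp = &txq;

    while (*pp) {
        if ((*pp)->dgram_tag == tag) {
            struct frag *dead = *pp;

            *pp = dead->next;
            free(dead);
        } else {
            pp = &(*pp)->next;
        }
    }
}

int main(void)
{
    enqueue(1, 0); enqueue(1, 96); enqueue(2, 0); enqueue(1, 192);

    purge_datagram(1);       /* pretend a fragment of datagram 1 was dropped */

    for (struct frag *f = txq; f; f = f->next)
        printf("still queued: tag %u offset %u\n", f->dgram_tag, f->offset);
    return 0;
}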

- Alex


* Re: drop all fragments inside tx queue if one gets dropped
From: Michael Richardson @ 2016-04-20 20:15 UTC
  To: netdev, linux-wpan; +Cc: Alexander Aring



{adding some more comments from the -wpan side of things}

Alexander Aring <aar@pengutronix.de> wrote:
    > On linux-wpan we had a discussion about setting the right tx_queue_len
    > and ran into some issues with 802.15.4 6LoWPAN networks.

...

    > A lot of fragments then sit in the tx_queue, waiting to be transferred
    > to the transceiver, which has only one frame buffer: it transmits one
    > frame and waits for tx completion before the next one can be
    > transferred.

    > My question is: if the qdisc drops a fragment because the queue is full
    > (or for some other reason), is there a way to remove all related
    > fragments from the queue as well? If one fragment is dropped while its
    > siblings are still queued, we mostly send garbage.

The big concern is that if we make tx_queue_len too big, we are effectively
introducing bloat.
If we make it too small, then we might drop one fragment, when we would
prefer to drop the entire packet.

It seems that maybe we ought to have a queue in the upper interface and fill
the lower interface with at most two packets' worth of fragments.
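
A rough user-space model of that idea (made-up names, just to make the
suggestion concrete): the lowpan side holds whole packets back and only
releases a packet's fragments to the wpan queue while fewer than two
packets' worth are outstanding; each tx completion pulls the next packet
down:

/* Sketch of "queue in the upper interface, at most two packets' worth
 * of fragments in the lower one".  User-space model, made-up names. */
#include <stdio.h>

#define MAX_LOWER_PKTS 2  /* packets' worth of fragments allowed below */

static int lower_pkts;    /* packets currently sitting in the wpan queue */
static int upper_backlog; /* whole packets held back in the lowpan layer */

/* Push whole packets down while the lower queue still has room. */
static void refill_lower(void)
{
    while (upper_backlog > 0 && lower_pkts < MAX_LOWER_PKTS) {
        upper_backlog--;
        lower_pkts++;
        printf("pushed a packet's fragments down (lower=%d, upper=%d)\n",
               lower_pkts, upper_backlog);
    }
}

/* Called when the last fragment of a packet has been transmitted. */
static void tx_complete(void)
{
    lower_pkts--;
    refill_lower();
}

int main(void)
{
    upper_backlog = 5;    /* five IPv6 packets arrive at the lowpan side */
    refill_lower();       /* only two packets' fragments go down at once */

    while (lower_pkts > 0)
        tx_complete();    /* each completion pulls the next packet down  */
    return 0;
}

That way a drop in the upper queue always costs a whole packet, never a
stray fragment.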

    > As a first step I want to add a behaviour which drops all fragments
    > that belong to the same 6LoWPAN datagram. If the payload is above 1280
    > bytes we also have IPv6 fragmentation on top of that; in the future I
    > would also like to remove all 6LoWPAN fragments which belong to the
    > same IPv6 fragment.

It would still be useful to be able to do this in general: this kind of
operation would also benefit sending large UDP packets over ethernet when we
have to do IP-layer fragmentation.


* Re: drop all fragments inside tx queue if one gets dropped
From: Rick Jones @ 2016-04-20 20:45 UTC
  To: Michael Richardson, netdev, linux-wpan; +Cc: Alexander Aring

For the "everything old is new again" files, back in the 1990s, it was 
noticed that on the likes of a netperf UDP_STREAM test on HP-UX, with 
fragmentation taking place, it was possible to consume 100% of the link 
bandwidth and have 0% effective throughput because the transmit queue 
was kept full with IP datagram fragments which could not possibly be 
reassembled (*) because one or more of the fragments of a datagram were 
dropped because the transmit queue was full.

HP-UX implemented "packet trains" where all the fragments of a 
fragmented datagram were presented to the driver, which then either 
queued them all, or none of them.
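
In Linux terms that would be an all-or-nothing enqueue; a minimal sketch of
the policy (user space, made-up names, not any real driver interface): the
queue accepts a whole fragment train only if there is room for every
fragment, otherwise it rejects the entire packet so that no partial,
unreassemblable train ever occupies the link:

/* Sketch of an all-or-nothing "packet train" enqueue: either every
 * fragment of a datagram fits into the tx queue, or none is queued.
 * User-space model with made-up names, not a driver interface. */
#include <stdbool.h>
#include <stdio.h>

#define TXQ_CAPACITY 10   /* frames the device queue can hold */

static int txq_used;

static bool enqueue_train(int nfrags)
{
    if (TXQ_CAPACITY - txq_used < nfrags)
        return false;     /* reject the whole packet, never queue a partial train */

    txq_used += nfrags;   /* room for all fragments: queue them in one go */
    return true;
}

int main(void)
{
    printf("train of 7: %s\n", enqueue_train(7) ? "queued" : "rejected");
    printf("train of 7: %s\n", enqueue_train(7) ? "queued" : "rejected");
    printf("train of 3: %s\n", enqueue_train(3) ? "queued" : "rejected");
    return 0;
}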

I don't recall seeing similar poor behaviour in Linux; I would have 
assumed that the intra-stack flow-control "took care" of it.  Perhaps 
there is something specific to wpan which precludes that?

happy benchmarking,

rick jones


* Re: drop all fragments inside tx queue if one gets dropped
From: Michael Richardson @ 2016-04-21 17:48 UTC
  To: Rick Jones; +Cc: netdev, linux-wpan, Alexander Aring



Rick Jones <rick.jones2@hpe.com> wrote:
    > I don't recall seeing similar poor behaviour in Linux; I would have
    > assumed
    > that the intra-stack flow-control "took care" of it.  Perhaps there is
    > something specific to wpan which precludes that?

The major user of big UDP packets in the 1990s was NFS.
I fondly recall deploying wsize=1024,rsize=1024 for NFS mounts between HPUX,
Apollo and Sun machines across the intersite Ottawa BNR "WAN".

NFS mounts now use TCP by default, NFS is not widely used outside of clueful
circles (everyone else uses CIFS), and modern machines have DMA engines in
their ethernet hardware that can accommodate more than enough xmit buffers,
which perhaps makes this moot.  But I dealt with this very problem with a
Linux NFS server that would get GbE XOFFs from a broken Cisco switch that
wouldn't always XON and would seem to drop its queue.  (Turning QoS off on
the Cisco switch made it tolerable.)

Still, Alex, it would be worth looking at whether the NFS UDP transmitter
does anything clueful to keep from overwhelming the ethernet layer.

wpan deals with sub-IP fragmentation of 1280-byte (or larger) IPv6 packets
into 6LoWPAN fragments of at most 127 bytes for transmission over 802.15.4
interfaces, which typically run at 250 kbit/s.  Those radios are usually
SPI-attached (often with bit-banged SPI interfaces), and the radio MAC has
only a single transmit buffer (which is also the receive buffer!).

Packet trains would be nice to have.

