public inbox for linux-kernel@vger.kernel.org
* datagram queue length
@ 2005-08-09 13:55 Jonathan Ellis
  2005-08-09 14:45 ` linux-os (Dick Johnson)
  0 siblings, 1 reply; 3+ messages in thread
From: Jonathan Ellis @ 2005-08-09 13:55 UTC (permalink / raw)
  To: linux-net, linux-kernel

(Posted a few days ago to c.os.l.networking; no replies there.)

I seem to be running into a limit of 64 queued datagrams.  This isn't a
data buffer size; varying the size of the datagram makes no difference
in the observed queue size.  If more datagrams are sent before some are
read, they are silently dropped.  (By "silently," I mean, "tcpdump
doesn't record these as dropped packets.")

This only happens when the sending and receiving processes are on
different machines, btw.

Can anyone tell me where this magic 64 number comes from, so I can
increase it?

Python demo attached.

-Jonathan

# <receive udp requests>
# start this, then immediately start the other
# _on another machine_
import socket, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 3001))

# give the sender time to fill the queue before we start draining it
time.sleep(5)

while True:
    data, client_addr = sock.recvfrom(8192)
    print(data)

# <separate process to send stuff>
import socket

for i in range(200):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # 100-byte payload; varying this size doesn't change the 64 limit
    sock.sendto(b'a' * 100, 0, ('***other machine ip***', 3001))
    sock.close()
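[Editor's note, hedged: one plausible source of the cap is the receiver's socket buffer filling up. Each queued datagram is charged to SO_RCVBUF at its skb's true size (around 2 KB for a small packet), not its payload size, so a default buffer of ~128 KB would hold roughly 64 datagrams regardless of payload. If that is what's happening here, enlarging the receive buffer should raise the queue depth. A minimal, untested sketch for the receiving side:]

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Ask for a 1 MB receive buffer.  The kernel caps the effective value at
# net.core.rmem_max, so that sysctl may need raising as well; it also
# reports back double the requested value (bookkeeping overhead).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)

# Check what the kernel actually granted.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

sock.bind(('', 3001))
```

[The system-wide defaults live in net.core.rmem_default and net.core.rmem_max, adjustable via sysctl without a recompile.]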


end of thread, other threads:[~2005-08-09 14:52 UTC | newest]

Thread overview: 3+ messages
-- links below jump to the message on this page --
2005-08-09 13:55 datagram queue length Jonathan Ellis
2005-08-09 14:45 ` linux-os (Dick Johnson)
2005-08-09 14:52   ` Jonathan Ellis
