* garbage of TCP sock mem in sockstat?
From: Kumiko Ono @ 2007-04-08 3:22 UTC
To: netdev
Hi all,
I tried to find out why some amount of memory remains allocated for TCP
socket buffers after an application finishes reading them. I looked for
similar topics in this ML, but couldn't find any.
While a client creates hundreds of new TCP connections to a server and
sends 512 bytes of data on each connection at 1000 requests/second, I
monitored the amount of memory allocated for TCP socket buffers in
/proc/net/sockstat. At a lower sending rate, e.g., 100 requests/second,
this problem does not happen.
The Linux kernel is 2.6.20.
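For reference, the monitoring was just polling the TCP line of
/proc/net/sockstat; a minimal sketch in C (the 1-second interval is an
assumption, and the output is exactly the lines shown below):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char line[256];

    for (;;) {
        /* Re-open each time; procfs contents are generated on read. */
        FILE *f = fopen("/proc/net/sockstat", "r");
        if (!f)
            return 1;
        while (fgets(line, sizeof(line), f))
            if (!strncmp(line, "TCP:", 4))
                fputs(line, stdout);   /* print only the TCP line */
        fclose(f);
        sleep(1);
    }
}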
When a server calls read() for all connections but does not call send()
to echo the messages back, sockstat shows some garbage remaining, as follows:
TCP: inuse 13 orphan 0 tw 0 alloc 19 mem 0
TCP: inuse 1226 orphan 0 tw 0 alloc 1232 mem 3
TCP: inuse 2332 orphan 0 tw 0 alloc 2338 mem 3
TCP: inuse 2441 orphan 0 tw 0 alloc 2447 mem 3
TCP: inuse 3654 orphan 0 tw 0 alloc 3660 mem 6
TCP: inuse 4869 orphan 0 tw 0 alloc 4875 mem 6
TCP: inuse 5012 orphan 0 tw 0 alloc 5018 mem 6
TCP: inuse 5012 orphan 0 tw 0 alloc 5018 mem 6
TCP: inuse 5012 orphan 0 tw 0 alloc 5018 mem 6
Even after a day, sockstat shows the same value for mem.
On the other hand, when a server calls read() and send() to echo the
messages back on all connections, sockstat shows that all the socket
buffers are deallocated after the echoing completes, as follows:
TCP: inuse 13 orphan 0 tw 0 alloc 19 mem 0
TCP: inuse 1237 orphan 0 tw 0 alloc 1243 mem 0
TCP: inuse 2461 orphan 0 tw 0 alloc 2467 mem 0
TCP: inuse 3688 orphan 0 tw 0 alloc 3694 mem 0
TCP: inuse 4912 orphan 0 tw 0 alloc 4918 mem 268
TCP: inuse 5012 orphan 0 tw 0 alloc 5018 mem 101
TCP: inuse 5012 orphan 0 tw 0 alloc 5018 mem 0
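For reference, a simplified sketch of the per-connection handling in the
two variants above (error handling and the accept/event loop are omitted;
the ECHO toggle is the only difference between the two tests):

#include <unistd.h>

#define ECHO 1    /* 0 = read-only variant, 1 = read-and-echo variant */

static void handle(int fd)
{
    char buf[512];
    ssize_t n = read(fd, buf, sizeof(buf));

    if (n > 0 && ECHO)
        write(fd, buf, n);    /* echo the 512 bytes back */
}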
Interestingly, when a server calls send() to send 512 bytes of data and
recv() to receive 512 bytes of data from clients, sockstat shows results
similar to those of the echo case.
Could anybody tell me why this garbage in the memory for TCP socket
buffers remains? Is this a problem with deallocation of socket buffers,
or just with sockstat? Or am I missing something?
Regards,
Kumiko
* Re: garbage of TCP sock mem in sockstat?
From: David Miller @ 2007-04-30 7:38 UTC
To: kumiko; +Cc: netdev
From: Kumiko Ono <kumiko@cs.columbia.edu>
Date: Sat, 07 Apr 2007 23:22:36 -0400
> Could anybody tell me why this garbage in the memory for TCP socket
> buffers remains? Is this a problem with deallocation of socket buffers,
> or just with sockstat? Or am I missing something?
It is not garbage, it is simply holding on to the receive buffer
allocation in anticipation of future packet receives for that
socket.
The global pool, whose values you saw via sockstat, is drawn from
on a per-socket basis to fill a per-socket allocation quota.
Packets attached to that socket have to take from this quota.
The idea is that once you get a per-socket allocation, you use
that until you need more. When you release, you keep the
per-socket allocation unless we are under global memory
pressure.
This avoids having to allocate from the global pool too often,
which is very expensive, especially on SMP, since the pool is a
shared data structure and requires locking.
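To illustrate, here is a rough userspace model of the accounting just
described; the names, the page granularity, and the pressure check are
illustrative only, not the actual kernel code:

#include <stdbool.h>

#define PAGE_SIZE 4096

struct sock_model {
    int forward_alloc;        /* bytes of quota already charged to this socket */
};

static long global_pages;     /* roughly what sockstat reports as "mem" */
static bool memory_pressure;  /* set when the global limits are hit */

/* Charge a received packet of `size` bytes against the socket's quota. */
static void sk_charge(struct sock_model *sk, int size)
{
    while (sk->forward_alloc < size) {
        /* Slow path: take another page of quota from the shared global
         * pool, which must be locked on SMP, so it is expensive. */
        global_pages++;
        sk->forward_alloc += PAGE_SIZE;
    }
    sk->forward_alloc -= size;
}

/* Freeing the packet returns its bytes to the per-socket quota; whole
 * pages go back to the global pool only under memory pressure, which is
 * why sockstat's "mem" can stay nonzero after all data has been read. */
static void sk_release(struct sock_model *sk, int size)
{
    sk->forward_alloc += size;
    if (memory_pressure) {
        int pages = sk->forward_alloc / PAGE_SIZE;
        global_pages -= pages;
        sk->forward_alloc -= pages * PAGE_SIZE;
    }
}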
* Re: garbage of TCP sock mem in sockstat?
From: Ono, Kumiko @ 2007-05-02 22:03 UTC
To: David Miller; +Cc: netdev
Thanks a lot for your response.
However, it is still unclear to me, since the memory allocated for TCP
socket buffers, which I saw via sockstat, shows zero when calling send()
after recv(), as shown in the previous email.
Do you mean that it is necessary to hold the receive buffer allocation
for future packets when calling only recv(), but not when calling
send() after recv()?
> On the other hand, when a server calls read() and send() to echo the
> messages back on all connections, sockstat shows that all the socket
> buffers are deallocated after the echoing completes, as follows:
>
> TCP: inuse 13 orphan 0 tw 0 alloc 19 mem 0
> TCP: inuse 1237 orphan 0 tw 0 alloc 1243 mem 0
> TCP: inuse 2461 orphan 0 tw 0 alloc 2467 mem 0
> TCP: inuse 3688 orphan 0 tw 0 alloc 3694 mem 0
> TCP: inuse 4912 orphan 0 tw 0 alloc 4918 mem 268
> TCP: inuse 5012 orphan 0 tw 0 alloc 5018 mem 101
> TCP: inuse 5012 orphan 0 tw 0 alloc 5018 mem 0
Regards,
Kumiko
David Miller wrote:
> From: Kumiko Ono <kumiko@cs.columbia.edu>
> Date: Sat, 07 Apr 2007 23:22:36 -0400
>
>> Could anybody tell me why this garbage in the memory for TCP socket
>> buffers remains? Is this a problem with deallocation of socket buffers,
>> or just with sockstat? Or am I missing something?
>
> It is not garbage, it is simply holding on to the receive buffer
> allocation in anticipation of future packet receives for that
> socket.
>
> The global pool, whose values you saw via sockstat, is drawn from
> on a per-socket basis to fill a per-socket allocation quota.
> Packets attached to that socket have to take from this quota.
>
> The idea is that once you get a per-socket allocation, you use
> that until you need more. When you release, you keep the
> per-socket allocation unless we are under global memory
> pressure.
>
> This avoids having to allocate from the global pool too often,
> which is very expensive, especially on SMP, since the pool is a
> shared data structure and requires locking.