linux-perf-users.vger.kernel.org archive mirror
* Memory leak in 3.17.rc6.g09bba1?
@ 2014-11-14  1:28 Rick Jones
  2014-11-14 12:30 ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 2+ messages in thread
From: Rick Jones @ 2014-11-14  1:28 UTC (permalink / raw)
  To: linux-perf-users

I am running a command:

perf top -a -g -e skb:kfree_skb

on a laptop running a 3.18.0-rc2+ kernel from davem's net-next tree.
While that is running, I make the system the target of a netperf TCP_CC
test (i.e., netserver runs on the system where perf is running, and
netperf is run on another system, pointed at the first).  I then expand
the kfree_skb() line and the sk_stream_kill_queues and
tcp_rcv_state_process lines "within" that expansion.

If I watch with plain "top" in another window, I can see the RES value
for the perf process steadily increasing, along with its CPU
utilization.  The latter eventually peaks at 100% (this is a Core 2 Duo
laptop).

After about 1800 seconds of being the target of the netperf TCP_CC test,
the RES value for the perf utility is over 1 GB.

If I wait long enough, perf will finally segfault.

Is this a known issue?  If I should file a more formal bug report 
somewhere let me know.

happy benchmarking,

rick jones

raj@raj-8510w:~$ net-next/tools/perf/perf --version
perf version 3.17.rc6.g09bba1

netperf -t tcp_cc -H <perfsystem> -l 3600

You might need to repeat it a few times to get the segfault.


* Re: Memory leak in 3.17.rc6.g09bba1?
  2014-11-14  1:28 Memory leak in 3.17.rc6.g09bba1? Rick Jones
@ 2014-11-14 12:30 ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 2+ messages in thread
From: Arnaldo Carvalho de Melo @ 2014-11-14 12:30 UTC (permalink / raw)
  To: Rick Jones; +Cc: linux-perf-users

On Thu, Nov 13, 2014 at 05:28:15PM -0800, Rick Jones wrote:
> I am running a command:
> 
> perf top -a -g -e skb:kfree_skb
> 
> on a laptop running a 3.18.0-rc2+ kernel from davem's net-next tree.
> While that is running, I make the system the target of a netperf TCP_CC
> test (i.e., netserver runs on the system where perf is running, and
> netperf is run on another system, pointed at the first).  I then expand
> the kfree_skb() line and the sk_stream_kill_queues and
> tcp_rcv_state_process lines "within" that expansion.
> 
> If I watch with plain "top" in another window, I can see the RES value
> for the perf process steadily increasing, along with its CPU
> utilization.  The latter eventually peaks at 100% (this is a Core 2 Duo
> laptop).
> 
> After about 1800 seconds of being the target of the netperf TCP_CC test,
> the RES value for the perf utility is over 1 GB.
> 
> If I wait long enough, perf will finally segfault.
> 
> Is this a known issue?  If I should file a more formal bug report somewhere
> let me know.

Thanks for reporting the leak; I'll try to reproduce it here and see if
I can fix it.

It should not leak memory; we know it needs to constrain its memory use
by doing some garbage collecting; and it _definitely_ should not
segfault.

It is definitely a use case we want to support 8-)

- Arnaldo
 
> happy benchmarking,
> 
> rick jones
> 
> raj@raj-8510w:~$ net-next/tools/perf/perf --version
> perf version 3.17.rc6.g09bba1
> 
> netperf -t tcp_cc -H <perfsystem> -l 3600
> 
> You might need to repeat it a few times to get the segfault.

