linux-perf-users.vger.kernel.org archive mirror
From: Renjith Ponnappan <renjithponnapps@gmail.com>
To: Jiri Olsa <jolsa@redhat.com>
Cc: Arnaldo Carvalho de Melo <arnaldo.melo@gmail.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	jolsa@kernel.org, linux-perf-users@vger.kernel.org
Subject: Re: Perf: Question about continuous background data collection
Date: Wed, 6 Oct 2021 14:29:54 -0700	[thread overview]
Message-ID: <CAPNhM=Ym=RL7QCcaA7GLPreMuO-f7as_77yy_N8eSgD2YLophA@mail.gmail.com> (raw)
In-Reply-To: <YVv4B4syx36Co/0+@krava>

Hello Arnaldo & Jiri,

Thank you for your response.

The perf daemon is the closest implementation to what I was looking
for. Here we run perf in the background, keep overwriting the samples
being collected, and use SIGUSR2 to signal the perf daemon to dump
the perf data to a file. This fulfills the following requirements:

1. Run perf in the background to collect data.
2. A method to signal perf to collect the samples for the current cycle.
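Concretely, the two requirements above map onto a perf daemon config plus a
signal command. A minimal sketch (the session name, base path, and event list
are illustrative, and it assumes a perf build with daemon support, v5.12 or
later):

```shell
# Write a minimal perf daemon config. "--overwrite" keeps only the most
# recent samples in the ring buffer; "--switch-output" makes SIGUSR2
# trigger a dump of the current data to a new perf.data.<timestamp> file.
cat > perf-daemon.conf <<'EOF'
[daemon]
base=/var/perf

[session-cycles]
run = -e cycles -a --overwrite --switch-output
EOF

# Start the daemon in the background (needs appropriate privileges):
#   perf daemon start --config perf-daemon.conf
#
# When a starvation event is detected, ask the daemon to send SIGUSR2
# to the session, which flushes the current buffer to a file:
#   perf daemon signal --config perf-daemon.conf --session cycles
```

The perf invocations are commented out since they require a running daemon
and root-level perf access; the config file is the part that encodes the
"collect continuously, dump on signal" behavior.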

The remaining part of the requirement was:

1. In the data collection of step 2 above, rather than writing the data
to a file, store it in an in-memory data structure via pointer
manipulation. We can keep a list of such samples in memory until the
next step. This frees up, for applications, the CPU cycles perf spends
writing to a file.
2. A method to signal perf to dump all the collected samples into
separate files. This way the user can retrieve the stored samples when
the CPU is relatively free.

Let me know whether perf has support for storing collected samples in
memory like this.
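For context, the closest approximation I could put together from stock perf
options: --overwrite keeps samples in the kernel ring buffer until a dump is
requested, --switch-output=signal makes SIGUSR2 trigger the dump, and writing
the output under tmpfs (e.g. /dev/shm) at least keeps the snapshots off
persistent storage. Paths here are illustrative:

```shell
# Keep samples in the kernel ring buffer (--overwrite) and flush only on
# SIGUSR2 (--switch-output=signal). A tmpfs output directory keeps the
# dumped snapshots in memory rather than on disk.
SNAPDIR=/dev/shm/perfsnap
mkdir -p "$SNAPDIR"

# Needs root (or perf_event_paranoid tuned down); commented out since it
# runs indefinitely:
#   perf record -a -e cycles --overwrite --switch-output=signal \
#       -o "$SNAPDIR/perf.data" &
#   echo $! > "$SNAPDIR/perf.pid"
#
# When an application reports CPU starvation, dump the current buffer:
#   kill -USR2 "$(cat "$SNAPDIR/perf.pid")"
# Each signal produces a new perf.data.<timestamp> snapshot in $SNAPDIR.
```

This still goes through the file-writing path on each dump, so it is not the
in-memory sample list described above, only a workaround for the I/O cost.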

Thanks in advance for your help!

cheers,
Renjith

On Tue, Oct 5, 2021 at 12:00 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Thu, Sep 30, 2021 at 10:39:21PM -0300, Arnaldo Carvalho de Melo wrote:
> >
> >
> > On September 30, 2021 10:28:28 PM GMT-03:00, Renjith Ponnappan <renjithponnapps@gmail.com> wrote:
> > >Hello Peter/Ingo/Arnaldo,
> > >
> > >First of all, apologies for bombarding you with a question on a busy
> > >day; please feel free to ignore this if it is irrelevant.
> > >
> > >I had a question about continuous background data-collection with perf and
> > >hope you are the right person to answer this. If not, it would be great if
> > >you can redirect me to the right person.
> > >
> > >I am trying to build a perf-based CPU profiling system (on an embedded
> > >ARM platform with CPU/memory constraints) that already has CPU samples
> > >collected by the time an application CPU starvation scenario occurs.
> > >The implementation I am trying to use is:
> > >
> > >   1. Run perf in the background collecting samples for the entire system
> >
> >
> > This is already in perf:
> >
> > https://www.spinics.net/lists/linux-perf-users/msg11455.html
> >
> >
> > Reply adding linux-perf-users@vger.kernel.org.
> >
> > - Arnaldo
> >
> >
> > >   with a sleep period of 60 seconds
> > >   2. When an application CPU starvation scenario occurs (detected and
> > >   raised by applications) notify the collection process to store the last
> > >   perf collection as data to be analyzed offline.
> > >
> > >Have you come across such a scenario and any recommendations on this?
> > >
> > >The following are the two implementations I have on the above:
> > >
> > >   1. An external process which instructs perf to record the 60 seconds by
> > >   providing unique filenames each time. This approach was taking around
> > >   40% of a CPU core every time the perf record was written (once per
> > >   60-second cycle). This isn't okay, as it could aggravate the
> > >   CPU starvation situation.
> > >   2. I tinkered with the perf code to add logic that loops and writes
> > >   the file only in case of an event. This reduced the CPU cost to only
> > >   the case when an event was detected.
>
> hi,
> and there's also perf daemon to run perf sessions on background:
> https://lore.kernel.org/lkml/20210130234856.271282-19-jolsa@kernel.org/
>
> jirka
>
> > >
> > >Would like to hear your opinion on whether approach 2 is the right way here
> > >and any suggestion/guidance you may have.
> > >
> > >Thanks in advance for this help!
> > >
> > >*cheers,*
> > >*Renjith*
> >
>


Thread overview: 3+ messages
     [not found] <CAPNhM=aGeiKAVFDHWuP=GcAbgUz=JgWwtKfoRgmuxoG0gNfciQ@mail.gmail.com>
     [not found] ` <DA2A1D0F-1877-41D1-BC87-C0AE70D3E18C@gmail.com>
     [not found]   ` <YVv4B4syx36Co/0+@krava>
2021-10-06 21:29     ` Renjith Ponnappan [this message]
     [not found]     ` <CAPNhM=YRUMw2VuHg3e5RH33UTCDG0guiqvCrfwb_kW_PkLvipw@mail.gmail.com>
2021-10-07  9:17       ` Perf: Question about continuous background data collection Arnaldo Carvalho de Melo
2021-10-07 16:52         ` Jiri Olsa
