From: Namhyung Kim <namhyung@kernel.org>
To: Steven Rostedt <rostedt@goodmis.org>
Cc: <kvm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Minchan Kim <minchan@kernel.org>
Subject: Re: [QUESTION] Is there a better way to get ftrace dump on guest?
Date: Fri, 1 Jul 2016 13:27:58 +0900 [thread overview]
Message-ID: <20160701042758.GB32617@sejong> (raw)
In-Reply-To: <20160629005231.GA14525@sejong>
On Wed, Jun 29, 2016 at 09:52:31AM +0900, Namhyung Kim wrote:
> Hi Steve,
>
> On Tue, Jun 28, 2016 at 09:57:27AM -0400, Steven Rostedt wrote:
> > On Tue, 28 Jun 2016 15:33:18 +0900
> > Namhyung Kim <namhyung@kernel.org> wrote:
> >
> > > Send again to correct addresses, sorry!
> > >
> > > On Tue, Jun 28, 2016 at 3:25 PM, Namhyung Kim <namhyung@kernel.org> wrote:
> > > > Hello,
> > > >
> > > > I'm running some guest machines for kernel development. For
> > > > debugging purposes, I use lots of trace_printk() since it's faster
> > > > than normal printk(). When a kernel crash happens, the trace buffer
> > > > is printed on the console (I set ftrace_dump_on_oops) but it takes
> > > > too much time. I don't want to reduce the size of the ring buffer
> > > > as I want to collect as much debug info as possible. And I also
> > > > want to see the trace from all CPUs, so 'ftrace_dump_on_oops=2' is
> > > > not an option.
> > > >
> > > > I know that kexec/kdump (and the crash tool) can dump and analyze
> > > > the trace buffer later. But it's cumbersome to do that every time
> > > > and, more importantly, I don't want to spend the memory on the
> > > > crashkernel reservation.
> > > >
> > > > So what is the best way to handle this? I'd like to know how
> > > > others set up their debugging environments.
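[For reference, the ftrace_dump_on_oops behavior discussed above can be
set either at runtime via sysctl or at boot; the paths and parameter
names below are the standard kernel ones:]

```shell
# Runtime: dump the ring buffers of all CPUs to the console on oops.
echo 1 > /proc/sys/kernel/ftrace_dump_on_oops

# Runtime: dump only the CPU that triggered the oops (smaller, faster).
echo 2 > /proc/sys/kernel/ftrace_dump_on_oops

# Boot-time equivalents on the kernel command line:
#   ftrace_dump_on_oops            (all CPUs)
#   ftrace_dump_on_oops=orig_cpu   (only the oops CPU)
```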
> >
> > Heh, I'd say something helpful but you basically already shot down all
> > of my advice, because what I do is...
> >
> > 1) Reduce the size of the ring buffer
> >
> > 2) Dump out just one CPU
> >
> > 3) use kexec/kdump and make a crash kernel to extract trace.dat from
> >
> >
> > That's my debugging environment, but it looks like you want something
> > else.
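[A sketch of the kexec/kdump workflow Steve describes, for anyone
following along. The kernel/initrd paths are examples, and the `trace`
commands come from the crash utility's trace extension module, whose
exact option syntax may differ by version:]

```shell
# Boot with e.g. crashkernel=256M reserved, then load the capture kernel:
kexec -p /boot/vmlinuz-$(uname -r) \
      --initrd=/boot/initrd.img-$(uname -r) --reuse-cmdline

# After a crash, the capture kernel exposes the old memory as
# /proc/vmcore (typically saved as a vmcore file). Analyze it with
# crash plus its trace extension to pull out the ring buffer:
crash /usr/lib/debug/boot/vmlinux-$(uname -r) vmcore
crash> extend trace.so
crash> trace dump -t trace.dat    # extract into trace-cmd's format
```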
>
> Thanks for sharing. Yeah, I'd still like to know other ways to
> overcome this if possible. Since I don't have enough knowledge in
> this area, I hope others might have better ideas. :)
Now I'm thinking about extending the pstore subsystem. AFAICS it's
the best fit for my use case. While it currently supports only the
function tracer via a dedicated ftrace_ops, IMHO it could also be
used for the ftrace dump. Does it make sense to add a virtio pstore
driver and save the dump to files on the host?
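As a rough illustration of the idea (not an actual implementation):
such a driver would register a pstore backend whose write callback
hands each record to the host over a virtqueue. The vps_* names and
virtio_pstore_send() below are made up, and the pstore_info callback
signature is a simplified form of the 4.x-era API:

```c
#include <linux/pstore.h>
#include <linux/virtio.h>

static struct virtqueue *vps_vq;	/* set up in the driver's probe() */

/* Hypothetical helper: ship one record to the host over the virtqueue. */
static int virtio_pstore_send(struct virtqueue *vq,
			      const void *buf, size_t len);

static int vps_write(enum pstore_type_id type,
		     enum kmsg_dump_reason reason,
		     u64 *id, unsigned int part, int count,
		     bool compressed, size_t size,
		     struct pstore_info *psi)
{
	/* psi->buf was filled by the pstore core (e.g. the oops dump);
	 * forward it to the host, which appends it to a per-type file. */
	return virtio_pstore_send(vps_vq, psi->buf, size);
}

static struct pstore_info vps_pstore_info = {
	.owner	= THIS_MODULE,
	.name	= "virtio-pstore",
	.write	= vps_write,
	/* .read/.erase would let the guest browse host-side records */
};

/* In the virtio driver's probe():
 *	ret = pstore_register(&vps_pstore_info);
 */
```

The host side would then need a matching virtio device (e.g. in QEMU)
that writes the incoming records out to files, which is where the
"files on host" part would live.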
Thanks,
Namhyung
Thread overview: 10+ messages
2016-06-28 6:25 [QUESTION] Is there a better way to get ftrace dump on guest? Namhyung Kim
2016-06-28 6:33 ` Namhyung Kim
2016-06-28 13:57 ` Steven Rostedt
2016-06-29 0:52 ` Namhyung Kim
2016-07-01 4:27 ` Namhyung Kim [this message]
2016-06-28 16:46 ` Rabin Vincent
2016-06-29 0:57 ` Minchan Kim
2016-06-29 1:26 ` Steven Rostedt
2016-07-01 4:05 ` Namhyung Kim
2016-06-29 1:49 ` Namhyung Kim