public inbox for kexec@lists.infradead.org
From: Vivek Goyal <vgoyal@redhat.com>
To: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Cc: kexec@lists.infradead.org, Cliff Wickman <cpw@sgi.com>
Subject: Re: 32TB kdump
Date: Mon, 1 Jul 2013 12:06:36 -0400	[thread overview]
Message-ID: <20130701160635.GB9840@redhat.com> (raw)
In-Reply-To: <51D0D399.1070501@jp.fujitsu.com>

On Mon, Jul 01, 2013 at 09:55:53AM +0900, HATAYAMA Daisuke wrote:
> (2013/06/28 6:17), Vivek Goyal wrote:
> >On Fri, Jun 21, 2013 at 09:17:14AM -0500, Cliff Wickman wrote:
> 
> >
> >Try using snappy or lzo for faster compression.
> >
> >>   So a good workaround for a very large system might be to dump uncompressed
> >>   to an SSD.
> >
> >Interesting.
> >
> >>   The multi-threading of the crash kernel would produce a big gain.
> >
> >Hatayama once was working on patches to bring up multiple cpus in second
> >kernel. Not sure what happened to those patches.
> >
> >>- Use of mmap on /proc/vmcore increased page scanning speed from 4.4 minutes
> >>   to 3 minutes.  It also increased data copying speed (unexpectedly) from
> >>   38min. to 35min.
> >
> >Hmm..., so on large memory systems, mmap() will not help a lot? On those
> >systems dump times are dominated by disk speed and compression time.
> >
> >So far I was thinking that per-page ioremap() was a big issue, and you
> >also once did an analysis showing that passing a page list to the kernel
> >made things significantly faster.
> >
> >So on 32TB machines, if it takes 2hrs to save the dump and mmap() shortens
> >it by only a few minutes, it really is not a significant win.
> >
> 
> Sorry, I've explained this earlier on this ML.
> 
> Some patches have been applied to makedumpfile to improve the filtering speed.
> The two changes that mattered most were one implementing an 8-slot cache of
> physical pages to reduce the number of /proc/vmcore accesses for paging
> (much like a TLB), and one that cleans up makedumpfile's filtering path.

So the biggest performance improvement came from implementing some kind of
TLB-like cache in makedumpfile?

> 
> The performance degradation from ioremap() is now hidden on a single cpu, but
> it would show up again on multiple cpus. Sorry, I have yet to run a benchmark
> that demonstrates this cleanly with numerical values.

IIUC, you are saying that the per-page ioremap() overhead is no longer very
significant on a single-cpu system (after the above makedumpfile changes),
and that is why using mmap() does not show a very significant improvement
in the overall scheme of things; these overheads will become more important
when multiple cpus are brought up in the kdump environment.

Please correct me if I am wrong; I just want to understand it better. So
most of our performance problems w.r.t. scanning were solved by the
makedumpfile changes, and the mmap() changes bring only a little bit of
improvement in the overall scheme of things on large machines?

Vivek

_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

  reply	other threads:[~2013-07-01 16:07 UTC|newest]

Thread overview: 12+ messages
2013-06-21 14:17 32TB kdump Cliff Wickman
2013-06-27 21:17 ` Vivek Goyal
2013-06-28 21:56   ` Cliff Wickman
2013-07-01  0:42     ` HATAYAMA Daisuke
2013-07-01  2:57     ` Atsushi Kumagai
2013-07-01 16:12     ` Vivek Goyal
2013-07-01  0:55   ` HATAYAMA Daisuke
2013-07-01 16:06     ` Vivek Goyal [this message]
     [not found]       ` <51D3D15D.5090600@jp.fujitsu.com>
2013-07-03 13:03         ` Vivek Goyal
2013-07-04  2:03           ` HATAYAMA Daisuke
2013-07-05 15:21             ` Not booting BSP in kdump kernel (Was: Re: 32TB kdump) Vivek Goyal
2013-07-08  9:23               ` HATAYAMA Daisuke
