From: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: kexec@lists.infradead.org, Cliff Wickman <cpw@sgi.com>
Subject: Re: 32TB kdump
Date: Mon, 01 Jul 2013 09:55:53 +0900	[thread overview]
Message-ID: <51D0D399.1070501@jp.fujitsu.com> (raw)
In-Reply-To: <20130627211725.GM4899@redhat.com>

(2013/06/28 6:17), Vivek Goyal wrote:
> On Fri, Jun 21, 2013 at 09:17:14AM -0500, Cliff Wickman wrote:

>
> Try using snappy or lzo for faster compression.
>
>>    So a good workaround for a very large system might be to dump uncompressed
>>    to an SSD.
>
> Interesting.
>
>>    The multi-threading of the crash kernel would produce a big gain.
>
> Hatayama was once working on patches to bring up multiple cpus in the second
> kernel. Not sure what happened to those patches.
>
>> - Use of mmap on /proc/vmcore increased page scanning speed from 4.4 minutes
>>    to 3 minutes.  It also increased data copying speed (unexpectedly) from
>>    38min. to 35min.
>
> Hmm.., so on large memory systems, mmap() will not help a lot? In those
> systems dump times are dominated by disk speed and compression time.
>
> So far I was thinking that ioremap() per page was a big issue, and you
> also once did an analysis showing that passing a page list to the kernel
> made things significantly faster.
>
> So on 32TB machines, if it takes 2hrs to save the dump and mmap() shortens
> it by only a few minutes, it really is not a significant win.
>

Sorry, I have explained this earlier on this mailing list.

Some patches have been applied to makedumpfile to improve filtering speed.
Two changes were particularly useful: one implements an 8-slot cache of
physical pages to reduce the number of /proc/vmcore accesses needed for
page-table translation (much like a TLB), and the other cleans up
makedumpfile's filtering path.
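
To illustrate the cache idea, here is a rough sketch of the technique. The
names, the round-robin eviction, and treating the physical address as the
/proc/vmcore file offset are my simplifications, not makedumpfile's actual
code:

/*
 * Rough sketch of an 8-slot cache of pages read from /proc/vmcore.
 * Page-table translation touches a small set of pages repeatedly, so
 * keeping the last few pages around (like a software TLB) avoids
 * re-reading them.  Illustrative only, not makedumpfile's real code.
 */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define CACHE_SLOTS 8
#define PAGE_SIZE   4096

struct cache_slot {
	uint64_t paddr;          /* page-aligned physical address */
	int      valid;
	char     data[PAGE_SIZE];
};

static struct cache_slot cache[CACHE_SLOTS];
static unsigned int next_evict;  /* simple round-robin eviction */
static int vmcore_fd = -1;

/* Simplification: assumes the physical address equals the file offset.
 * Real code translates through the ELF program headers of /proc/vmcore. */
static int read_vmcore_page(uint64_t paddr, void *buf)
{
	if (vmcore_fd < 0)
		vmcore_fd = open("/proc/vmcore", O_RDONLY);
	if (vmcore_fd < 0)
		return -1;
	return pread(vmcore_fd, buf, PAGE_SIZE, (off_t)paddr) == PAGE_SIZE ? 0 : -1;
}

/* Return a pointer to the cached page containing paddr, reading it once
 * from /proc/vmcore on a miss. */
void *cache_lookup(uint64_t paddr)
{
	unsigned int i;

	paddr &= ~((uint64_t)PAGE_SIZE - 1);

	for (i = 0; i < CACHE_SLOTS; i++)
		if (cache[i].valid && cache[i].paddr == paddr)
			return cache[i].data;    /* hit: no vmcore access */

	i = next_evict++ % CACHE_SLOTS;          /* miss: evict and refill */
	if (read_vmcore_page(paddr, cache[i].data) < 0)
		return NULL;
	cache[i].paddr = paddr;
	cache[i].valid = 1;
	return cache[i].data;
}

During filtering, page-table walks revisit the same table pages over and
over, so even a small cache like this cuts the number of /proc/vmcore
accesses substantially.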

The performance degradation caused by per-page ioremap() is now hidden on a
single CPU, but it would appear again on multiple CPUs. Sorry, I have yet to
run a benchmark that shows this clearly in numbers.
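
To make the ioremap() point concrete, below is a simplified sketch of the
per-page remapping pattern used when old memory is read through /proc/vmcore
without mmap(). It only mirrors the shape of the kernel's copy_oldmem_page();
read_old_page() is a made-up name and the details differ from the real code:

/*
 * Simplified sketch: each page of old memory is mapped, copied and
 * unmapped individually.  Setting up and tearing down a mapping per
 * page is cheap enough to hide on one CPU, but the cost would show up
 * again once several CPUs read /proc/vmcore in parallel.
 * Illustrative only; the real x86 copy_oldmem_page() differs in detail.
 */
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/pfn.h>
#include <linux/string.h>
#include <linux/types.h>

static ssize_t read_old_page(unsigned long pfn, char *buf,
			     size_t csize, unsigned long offset)
{
	void __iomem *vaddr;

	if (!csize)
		return 0;

	vaddr = ioremap_cache(PFN_PHYS(pfn), PAGE_SIZE);
	if (!vaddr)
		return -ENOMEM;

	memcpy_fromio(buf, vaddr + offset, csize);
	iounmap(vaddr);	/* tearing down the mapping is part of the per-page cost */

	return csize;
}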

-- 
Thanks.
HATAYAMA, Daisuke


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

Thread overview (12+ messages):
2013-06-21 14:17 32TB kdump Cliff Wickman
2013-06-27 21:17 ` Vivek Goyal
2013-06-28 21:56   ` Cliff Wickman
2013-07-01  0:42     ` HATAYAMA Daisuke
2013-07-01  2:57     ` Atsushi Kumagai
2013-07-01 16:12     ` Vivek Goyal
2013-07-01  0:55   ` HATAYAMA Daisuke [this message]
2013-07-01 16:06     ` Vivek Goyal
     [not found]       ` <51D3D15D.5090600@jp.fujitsu.com>
2013-07-03 13:03         ` Vivek Goyal
2013-07-04  2:03           ` HATAYAMA Daisuke
2013-07-05 15:21             ` Not booting BSP in kdump kernel (Was: Re: 32TB kdump) Vivek Goyal
2013-07-08  9:23               ` HATAYAMA Daisuke
