public inbox for kexec@lists.infradead.org
From: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: kexec@lists.infradead.org,
	HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>,
	kumagai-atsushi@mxc.nes.nec.co.jp, Cliff Wickman <cpw@sgi.com>
Subject: Re: makedumpfile 1.5.4, 734G kdump tests
Date: Tue, 16 Jul 2013 18:22:17 +0900	[thread overview]
Message-ID: <51E510C9.1060200@jp.fujitsu.com> (raw)
In-Reply-To: <20130712164202.GG2272@redhat.com>

[-- Attachment #1: Type: text/plain, Size: 4221 bytes --]

(2013/07/13 1:42), Vivek Goyal wrote:
> On Fri, Jul 12, 2013 at 11:14:27AM -0500, Cliff Wickman wrote:
>> On Thu, Jul 11, 2013 at 09:06:47AM -0400, Vivek Goyal wrote:
>>> On Tue, Jul 09, 2013 at 11:24:03AM -0500, Cliff Wickman wrote:
>>>
>>> [..]
>>>> UV2000   memory: 734G
>>>> makedumpfile: makedumpfile-1.5.4
>>>> kexec:   git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git
>>>> booted with   crashkernel=1G,high crashkernel=192M,low
>>>> non-cyclic mode
>>>>
>>>> write to       option            init&scan sec.   copy sec.  dump size
>>>> -------------  -----------------           ----   ---------  ---------
>>>> megaraid disk  no compression                19          91      11.7G
>>>> megaraid disk  zlib compression              20         209       1.4G
>>>> megaraid disk  snappy compression            20          46       2.4G
>>>> megaraid disk  snappy compression no mmap    30          72       2.4G
>>>> /dev/null      no compression                19          28          -
>>>> /dev/null      zlib compression              19         206          -
>>>> /dev/null      snappy compression            19          41          -
>>>>
>>>> Notes and observations
>>>> - Snappy compression is a big win over zlib compression; over 4 times faster
>>>>    at the cost of relatively little extra disk space.
>>>
>>> Thanks for the results Cliff. If it is not too much of trouble, can you
>>> please also test with lzo compression on same configuration. I am
>>> curious to know how much better snappy performs as compared to lzo.
>>>
>>> Thanks
>>> Vivek
>>
>> Ok.  I repeated the tests and included LZO compression.
>>
>> UV2000   memory: 734G
>> makedumpfile: makedumpfile-1.5.4     non-cyclic mode
>> kexec: git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git
>> 3.10 kernel with vmcore mmap patches
>> booted with   crashkernel=1G,high crashkernel=192M,low
>>
>> write to       compression       init&scan sec.   copy sec.  dump size
>> -------------  -----------------           ----   ---------  ---------
>> megaraid disk  no compression                20          86      11.6G
>> megaraid disk  zlib compression              19         209       1.4G
>> megaraid disk  snappy compression            20          47       2.4G
>> megaraid disk  lzo compression               19          54       2.8G
>>
>> /dev/null      no compression                19          28          -
>> /dev/null      zlib compression              20         206          -
>> /dev/null      snappy compression            19          42          -
>> /dev/null      lzo compression               20          47          -
>>
>> Notes:
>> - Snappy compression is still the fastest (and compresses better than LZO),
>>    but LZO is close.
>> - Compression and I/O seem pretty well overlapped, so I am not sure that
>>    multithreading the crash kernel (to speed compression) will speed the
>>    dump as much as I was hoping, unless perhaps the I/O device is an SSD.
>
> Thanks Cliff. So LZO is pretty close to snappy in this case.
>

This benchmark lacks consideration of the ratio of randomized data.
In my benchmark, LZO was slower than snappy when 50% to 100% of the data
was randomized.

The attached is a graph of a benchmark result that compares LZO and snappy
over a range of randomized-data ratios. The benchmark details are:

- block size is 4 KiB
- sample data is 4 MiB
   - so 1,024 blocks in total
- the x value is the percentage of randomized data
- the y value is compression performance, i.e. 4 MiB / (the time consumed
   for compressing the 4 MiB sample data)
- the processor is a Xeon E7540
- randomization is done per single byte: each randomized byte is taken
   from /dev/urandom, and the other bytes are filled with '\000'
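To make the setup concrete, the data generation and measurement can be
sketched as below. This is a minimal illustrative sketch, not my actual
benchmark code; it uses zlib from the Python standard library as a stand-in
compressor, since LZO and snappy bindings are third-party packages.

```python
import os
import time
import zlib  # stand-in compressor; the real benchmark used LZO and snappy

BLOCK_SIZE = 4 * 1024           # 4 KiB per block
SAMPLE_SIZE = 4 * 1024 * 1024   # 4 MiB of sample data in total

def make_block(random_ratio, size=BLOCK_SIZE):
    """Fill the first random_ratio portion of the block from /dev/urandom
    (via os.urandom) and the rest with '\\000' bytes."""
    n_random = int(size * random_ratio)
    return os.urandom(n_random) + b"\x00" * (size - n_random)

def throughput_mib_per_sec(random_ratio):
    """Compress every block of one sample and report MiB/sec."""
    blocks = [make_block(random_ratio)
              for _ in range(SAMPLE_SIZE // BLOCK_SIZE)]
    start = time.perf_counter()
    for block in blocks:
        zlib.compress(block)
    elapsed = time.perf_counter() - start
    return (SAMPLE_SIZE / (1024 * 1024)) / elapsed
```

Sweeping random_ratio from 0.0 to 1.0 corresponds to the x axis of the
attached graph; throughput drops as the ratio rises because random bytes
are incompressible.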

In this result, LZO stays around 100 [MiB/sec] once more than 50 percent of
the data is randomized, while snappy keeps better performance at higher
randomized ratios.

At this worst case of 100 [MiB/sec], 1 TiB of system memory needs about 3
hours to take a crash dump.
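The 3-hour estimate is just the worst-case throughput applied to 1 TiB:

```python
TIB_IN_MIB = 1024 * 1024        # 1 TiB = 1,048,576 MiB
WORST_RATE_MIB_S = 100          # LZO worst-case throughput observed above

seconds = TIB_IN_MIB / WORST_RATE_MIB_S  # 10485.76 seconds
hours = seconds / 3600                   # about 2.9 hours
print(round(hours, 1))  # -> 2.9
```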

While I don't think this is a typical case, it's problematic that a crash
dump can require several extra hours depending on the contents of memory at
crash time. It should always complete in as stable a time as possible.

-- 
Thanks.
HATAYAMA, Daisuke

[-- Attachment #2: xen_e7540-performance.png --]
[-- Type: image/png, Size: 12137 bytes --]



Thread overview: 8+ messages
2013-07-09 16:24 makedumpfile 1.5.4, 734G kdump tests Cliff Wickman
     [not found] ` <CAJGZr0JPrBB3cVyVdwJdd6cEUfnXNMuRijb9EOoSy+XmRupv7A@mail.gmail.com>
2013-07-10  9:07   ` HATAYAMA Daisuke
     [not found]     ` <CAJGZr0L7GBqJHaPzgpFxhpF_jmAZfDJYU_=MFxATxnLk13ni4g@mail.gmail.com>
2013-07-10 18:27       ` Cliff Wickman
2013-07-11 13:06 ` Vivek Goyal
2013-07-12 16:14   ` Cliff Wickman
2013-07-12 16:42     ` Vivek Goyal
2013-07-16  9:22       ` HATAYAMA Daisuke [this message]
2013-07-16 14:15         ` Vivek Goyal
