public inbox for kexec@lists.infradead.org
From: Tao Liu <ltao@redhat.com>
To: "HAGIO KAZUHITO(萩尾 一仁)" <k-hagio-ab@nec.com>
Cc: "kexec@lists.infradead.org" <kexec@lists.infradead.org>
Subject: Re: [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile
Date: Tue, 21 Sep 2021 17:26:02 +0800	[thread overview]
Message-ID: <YUmlKqK/CY9SfiAN@localhost.localdomain>
In-Reply-To: <TYYPR01MB67772E684C0CAE16782D5788DDDD9@TYYPR01MB6777.jpnprd01.prod.outlook.com>

Hello Kazu,

Sorry for the late reply.

On Fri, Sep 17, 2021 at 07:03:50AM +0000, HAGIO KAZUHITO(萩尾 一仁) wrote:
> -----Original Message-----
> > -----Original Message-----
> > > > > > This patch set adds ZSTD compression support to makedumpfile. With ZSTD compression
> > > > > > support, the vmcore dump size and time consumption can have a better balance than
> > > > > > zlib/lzo/snappy.
> > > > > >
> > > > > > How to build:
> > > > > >
> > > > > >   Build using make:
> > > > > >     $ make USEZSTD=on
> > > > > >
> > > > > > Performance Comparison:
> > > > > >
> > > > > >   How to measure
> > > > > >
> > > > > >     I used an x86_64 machine with 4 TB of memory, and tested ZSTD
> > > > > >     compression levels ranging from -3 to 4, as well as zlib/lzo/snappy
> > > > > >     compression. All testing was done in makedumpfile single-thread mode.
> > > > > >
> > > > > >     For the compression performance testing, in order to avoid the disk I/O
> > > > > >     bottleneck, I used the following makedumpfile command, taking lzo
> > > > > >     compression as an example. "--dry-run" does not write any data to disk,
> > > > > >     "--show-stat" outputs the vmcore size after compression, and the time
> > > > > >     consumption can be collected from the output logs.
> > > > > >
> > > > > >     $ makedumpfile -d 0 -l /proc/kcore vmcore --dry-run --show-stat
> > > > > >
> > > > > >
> > > > > >     For the decompression performance testing, I only tested the (-d 31) case,
> > > > > >     because the vmcore of the (-d 0) case is too big to fit on the disk; in
> > > > > >     addition, reading an oversized file from disk would hit the disk I/O
> > > > > >     bottleneck.
> > > > > >
> > > > > >     I triggered a kernel crash and collected a vmcore. Then I converted the
> > > > > >     vmcore into each compression format using the following makedumpfile
> > > > > >     command, which produces an lzo-format vmcore as an example:
> > > > > >
> > > > > >     $ makedumpfile -l vmcore vmcore.lzo
> > > > > >
> > > > > >     After all the vmcores were ready, I used the following command to perform
> > > > > >     the decompression; the time consumption can be collected from the logs.
> > > > > >
> > > > > >     $ makedumpfile -F vmcore.lzo --dry-run --show-stat
> > > > > >
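
The measurement loop described above can be sketched as a small harness. This is only a sketch: `true` stands in for `makedumpfile` so it runs anywhere, and the `-z` zstd flag is what this patch series proposes, so treat it as hypothetical until the series is merged.

```python
# Hedged sketch of the dry-run timing loop described above.
# "true" is a stand-in for makedumpfile so the sketch runs anywhere;
# "-z" (zstd) is the flag this patch series proposes and is hypothetical here.
import subprocess
import time

compressors = {
    "zlib":   "-c",
    "lzo":    "-l",
    "snappy": "-p",
    "zstd":   "-z",   # hypothetical until this series is merged
}

for name, flag in compressors.items():
    argv = ["true",  # replace with "makedumpfile" on a real test machine
            flag, "-d", "0", "/proc/kcore", "vmcore",
            "--dry-run", "--show-stat"]
    start = time.monotonic()
    subprocess.run(argv, check=True)
    print(f"{name}: {time.monotonic() - start:.3f}s")
```

On a real machine, the size after compression would be read from the `--show-stat` output rather than from a written file.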
> > > > > >
> > > > > >   Result charts
> > > > > >
> > > > > >     For compression:
> > > > > >
> > > > > >             makedumpfile -d31               |  makedumpfile -d0
> > > > > >             Compression time  vmcore size   |  Compression time  vmcore size
> > > > > >     zstd-3  325.516446         5285179595  |   8205.452248       51715430204
> > > > > >     zstd-2  332.069432         5319726604  |   8057.381371       51732062793
> > > > > >     zstd-1  309.942773         5730516274  |   8138.060786       52136191571
> > > > > >     zstd0   439.773076         4673859661  |   8873.059963       50993669657
> > > > > >     zstd1   406.68036          4700959521  |   8259.417132       51036900055
> > > > > >     zstd2   397.195643         4699263608  |   8230.308291       51030410942
> > > > > >     zstd3   436.491632         4673306398  |   8803.970103       51043393637
> > > > > >     zstd4   543.363928         4668419304  |   8991.240244       51058088514
> > > > > >     zlib    561.217381         8514803195  |  14381.755611       78199283893
> > > > > >     lzo     248.175953        16696411879  |   6057.528781       90020895741
> > > > > >     snappy  231.868312        11782236674  |   5290.919894      245661288355
> > > > > >
> > > > > >     For decompression:
> > > > > >
> > > > > >             makedumpfile -d31
> > > > > >             Decompression time  vmcore size
> > > > > >     zstd-3  477.543396           5289373448
> > > > > >     zstd-2  478.034534           5327454123
> > > > > >     zstd-1  459.066807           5748037931
> > > > > >     zstd0   561.687525           4680009013
> > > > > >     zstd1   547.248917           4706358547
> > > > > >     zstd2   544.219758           4704780719
> > > > > >     zstd3   555.726343           4680009013
> > > > > >     zstd4   558.031721           4675545933
> > > > > >     zlib    630.965426           8555376229
> > > > > >     lzo     427.292107          16849457649
> > > > > >     snappy  446.542806          11841407957
> > > > > >
> > > > > >   Discussion
> > > > > >
> > > > > >     For zstd levels ranging from -3 to 4, compression level 2 (ZSTD_dfast)
> > > > > >     gives the best balance between time consumption and vmcore dump size.
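
As a quick cross-check of this balance claim, the -d 31 compression column above can be reduced to ratios relative to lzo. This is illustrative arithmetic only; the (seconds, bytes) pairs are copied verbatim from the table.

```python
# Ratios relative to lzo, computed from the -d 31 compression table above.
results = {
    "zstd2":  (397.195643,  4699263608),
    "zlib":   (561.217381,  8514803195),
    "lzo":    (248.175953, 16696411879),
    "snappy": (231.868312, 11782236674),
}

base_time, base_size = results["lzo"]
for name, (t, s) in results.items():
    print(f"{name:7s} time x{t / base_time:.2f}  size x{s / base_size:.2f}")
```

zstd level 2 comes out roughly 1.6x slower than lzo while producing a vmcore about 0.28x the size, which is the tradeoff being discussed.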
> > > > >
> > > > > Do you have a result of -d 1 compression test?  I think -d 0 is not
> > > > > practical, I would like to see a -d 1 result of such a large vmcore.
> > > > >
> > > >
> > > > No, I haven't tested the -d 1 case. I have returned the machine that was
> > > > used for the performance testing; I will borrow it again and rerun the
> > > > test, please wait a while...
> > >
> > > Thanks, it would be helpful.
> > >
> > > >
> > > > > And just out of curiosity, what version of zstd are you using?
> > > > > When I tested zstd last time, compression level 1 was faster than 2, iirc.
> > > > >
> > > >
> > > > The machine runs Fedora 34; I used its default zstd package, which is
> > > > version 1.4.9.
> > >
> > > Thanks for the info.
> > >
> > > >
> > > > > btw, ZSTD_dfast is an enum of ZSTD_strategy, not for compression level?
> > > >
> > > > Yes, it's an enum of ZSTD_strategy [1].
> > >
> > > ok, so it'll have to be replaced with "2" to avoid confusion.
> > >
> > > >
> > > > [1]: https://zstd.docsforge.com/dev/api-documentation/#advanced-compression-api-requires-v140
> > > >
> > > > Thanks,
> > > > Tao Liu
> > > >
> > > > > (no need to update for now, I will review later)
> > >
> > > The series almost looks good to me (though I will squash it into a single
> > > patch), just two questions:
> > > - whether 2 is the best balanced compression level,
> 
> As far as I've tested on two machines this time, compression level 1 was faster
> than 2.  There is no large difference between them, but generally 1 should be
> faster than 2 according to the zstd manual:
>   "The lower the level, the faster the speed (at the cost of compression)."
> And as you know, level 0 is unstable, that was the same when I tested before.
> 
> So currently I would prefer 1 rather than 2, what do you think?

As mentioned before, I have run the -d 1 compression measurement on
the same x86_64 machine with 4 TB of memory:

        Compression time  vmcore size
zstd-3  4620.795194       31720632985
zstd-2  4545.636437       31716847503
zstd-1  4516.076298       32113300399
zstd0   4663.17618        30967496299
zstd1   4618.386313       31010305809
zstd2   4633.535771       31005073344
zstd3   4673.240663       30967855841
zstd4   4771.1416         30965914853
lzo     4801.958368       34920417584
zlib    4442.257105       43482765168
snappy  4433.957005       38594790371

As for decompression, I didn't get meaningful values, because the vmcores
were too large and most of the time was spent on disk I/O, so the
decompression time measurements didn't show an obvious difference.
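
A generic way to confirm that a run is I/O-bound (not a makedumpfile feature, just a sketch) is to compare wall-clock time against the CPU time of the child process; a large gap means the process mostly waited on the disk. `sleep 1` stands in for the decompression command under test.

```python
# Generic check: if wall-clock time greatly exceeds the child's CPU time,
# the measured run was dominated by I/O waits, not decompression work.
import os
import subprocess
import time

start = time.monotonic()
subprocess.run(["sleep", "1"], check=True)  # stand-in for the real run
wall = time.monotonic() - start

t = os.times()
cpu = t.children_user + t.children_system  # CPU time used by child processes
print(f"wall={wall:.2f}s cpu={cpu:.2f}s  (wall >> cpu suggests I/O-bound)")
```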

I agree that compression levels 1 and 2 don't differ much. I'm OK with
your preference.

> 
> Results:
> * RHEL8.4 with libzstd-1.4.4 / 64GB filled with QEMU memory/images mainly
> # free
>               total        used        free      shared  buff/cache   available
> Mem:       65599824    21768028      549364        4668    43282432    43078828
> Swap:      32964604     4827916    28136688
> 
>        makedumpfile -d 1           makedumpfile -d 31
>        copy sec.   write bytes     copy sec.  write bytes
> zstd1  220.979689  26456659213     9.014176   558845000
> zstd2  227.774602  26402437190     9.078599   560681256
> lzo     83.406291  33078995065     3.603778   810219860
> 
> * RHEL with libzstd-1.5.0 / 64GB filled with kernel source code mainly
> # free
>                total        used        free      shared  buff/cache   available
> Mem:        65329632     9925536      456020    53086068    54948076     1549088
> Swap:       32866300     1607424    31258876
> 
>        makedumpfile -d 1           makedumpfile -d 31
>        copy sec.   write bytes     copy sec.   write bytes
> zstd1  520.844189  15537080819     53.494782   1200754023
> zstd2  533.912451  15469575651     53.641510   1199561609
> lzo    233.370800  20780821165     23.281374   1740041245
> 
> (Used /proc/kcore, so not stable memory, but measured zstd 3 times and
> picked the middle elapsed time.)
> 

Thanks for sharing the results. Just out of curiosity, could you share your
testing method as well? I could improve mine for later use.

Thanks,
Tao Liu

> Thanks,
> Kazu
> 
> 


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec

Thread overview: 20+ messages
2021-09-10 10:33 [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile Tao Liu
2021-09-10 10:33 ` [PATCH 01/11] makedumpfile: Add dump header for zstd Tao Liu
2021-09-10 10:33 ` [PATCH 02/11] makedumpfile: Add command-line processing " Tao Liu
2021-09-10 10:33 ` [PATCH 03/11] makedumpfile: Add zstd build support Tao Liu
2021-09-10 10:33 ` [PATCH 04/11] makedumpfile: Notify zstd unsupporting when disabled Tao Liu
2021-09-10 10:33 ` [PATCH 05/11] makedumpfile: Add single thread zstd compression processing Tao Liu
2021-09-10 10:33 ` [PATCH 06/11] makedumpfile: Add parallel threads " Tao Liu
2021-09-10 10:33 ` [PATCH 07/11] makedumpfile: Add single thread zstd uncompression processing Tao Liu
2021-09-10 10:33 ` [PATCH 08/11] makedumpfile: Add parallel threads " Tao Liu
2021-09-10 10:33 ` [PATCH 09/11] makedumpfile: Add zstd help message Tao Liu
2021-09-10 10:33 ` [PATCH 10/11] makedumpfile: Add zstd manual description Tao Liu
2021-09-10 10:33 ` [PATCH 11/11] makedumpfile: Add zstd README description Tao Liu
2021-09-14  7:04 ` [PATCH 00/11] makedumpfile: Add zstd support for makedumpfile HAGIO KAZUHITO(萩尾 一仁)
2021-09-14  8:33   ` Tao Liu
2021-09-17  1:34     ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-17  2:31       ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-17  7:03         ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-21  9:26           ` Tao Liu [this message]
2021-09-22  2:21             ` HAGIO KAZUHITO(萩尾 一仁)
2021-09-22  8:16               ` HAGIO KAZUHITO(萩尾 一仁)
