From: Chao Fan <cfan@redhat.com>
To: Atsushi Kumagai <ats-kumagai@wm.jp.nec.com>
Cc: "HATAYAMA Daisuke (d.hatayama@jp.fujitsu.com)"
<d.hatayama@jp.fujitsu.com>,
zhouwj-fnst@cn.fujitsu.com, kexec@lists.infradead.org
Subject: Re: [PATCH RFC 00/11] makedumpfile: parallel processing
Date: Thu, 24 Dec 2015 04:04:36 -0500 (EST)
Message-ID: <1660650601.2900673.1450947876223.JavaMail.zimbra@redhat.com>
In-Reply-To: <0910DD04CBD6DE4193FCF86B9C00BE9701E10859@BPXM01GP.gisp.nec.co.jp>
----- Original Message -----
> From: "Atsushi Kumagai" <ats-kumagai@wm.jp.nec.com>
> To: "HATAYAMA Daisuke (d.hatayama@jp.fujitsu.com)" <d.hatayama@jp.fujitsu.com>, "Chao Fan" <cfan@redhat.com>
> Cc: zhouwj-fnst@cn.fujitsu.com, kexec@lists.infradead.org
> Sent: Thursday, December 24, 2015 4:20:42 PM
> Subject: RE: [PATCH RFC 00/11] makedumpfile: parallel processing
>
> >> >> >> >> Could you provide the information of your CPU?
> >> >> >> >> I will do some further investigation later.
> >> >> >> >>
> >> >> >> >
> >> >> >> > OK, of course. Here is the CPU information:
> >> >> >> >
> >> >> >> > # lscpu
> >> >> >> > Architecture: x86_64
> >> >> >> > CPU op-mode(s): 32-bit, 64-bit
> >> >> >> > Byte Order: Little Endian
> >> >> >> > CPU(s): 48
> >> >> >> > On-line CPU(s) list: 0-47
> >> >> >> > Thread(s) per core: 1
> >> >> >> > Core(s) per socket: 6
> >> >> >> > Socket(s): 8
> >> >> >> > NUMA node(s): 8
> >> >> >> > Vendor ID: AuthenticAMD
> >> >> >> > CPU family: 16
> >> >> >> > Model: 8
> >> >> >> > Model name: Six-Core AMD Opteron(tm) Processor 8439 SE
> >> >> >> > Stepping: 0
> >> >> >> > CPU MHz: 2793.040
> >> >> >> > BogoMIPS: 5586.22
> >> >> >> > Virtualization: AMD-V
> >> >> >> > L1d cache: 64K
> >> >> >> > L1i cache: 64K
> >> >> >> > L2 cache: 512K
> >> >> >> > L3 cache: 5118K
> >> >> >> > NUMA node0 CPU(s): 0,8,16,24,32,40
> >> >> >> > NUMA node1 CPU(s): 1,9,17,25,33,41
> >> >> >> > NUMA node2 CPU(s): 2,10,18,26,34,42
> >> >> >> > NUMA node3 CPU(s): 3,11,19,27,35,43
> >> >> >> > NUMA node4 CPU(s): 4,12,20,28,36,44
> >> >> >> > NUMA node5 CPU(s): 5,13,21,29,37,45
> >> >> >> > NUMA node6 CPU(s): 6,14,22,30,38,46
> >> >> >> > NUMA node7 CPU(s): 7,15,23,31,39,47
> >> >> >>
> >> >> >> This CPU assignment on NUMA nodes looks interesting. Is it possible
> >> >> >> that this affects performance of makedumpfile? This is just a guess.
> >> >> >>
> >> >> >> Could you check whether the performance gets improved if you run
> >> >> >> each thread on the same NUMA node? For example:
> >> >> >>
> >> >> >> # taskset -c 0,8,16,24 makedumpfile --num-threads 4 -c -d 0 vmcore vmcore-cd0
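> >> >> >>
> >> >> >> (As an aside, one illustrative way to derive a node's CPU list from
> >> >> >> lscpu instead of typing it by hand; node 0 and util-linux's -p output
> >> >> >> format are assumed here:)
> >> >> >>
> >> >> >> # cpus=$(lscpu -p=CPU,NODE | grep -v '^#' | awk -F, '$2 == 0 {print $1}' | paste -sd, -)
> >> >> >> # taskset -c "$cpus" makedumpfile --num-threads 4 -c -d 0 vmcore vmcore-cd0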
> >> >> >>
> >> >> > Hi HATAYAMA,
> >> >> >
> >> >> > I think your guess is right, but there may be a small problem with
> >> >> > your command.
> >> >> >
> >> >> > From my test, NUMA placement did affect the performance, but not by much.
> >> >> > The average time with CPUs in the same NUMA node:
> >> >> > # taskset -c 0,8,16,24,32 makedumpfile --num-threads 4 -c -d 0 vmcore vmcore-cd0
> >> >> > is 314s.
> >> >> > The average time with CPUs in different NUMA nodes:
> >> >> > # taskset -c 2,3,5,6,7 makedumpfile --num-threads 4 -c -d 0 vmcore vmcore-cd0
> >> >> > is 354s.
> >> >> >
> >> >>
> >> >> Hmm, according to some previous discussion, what we should check here
> >> >> is whether it affects the performance of makedumpfile with
> >> >> --num-threads 1 and -d 31. So you need to compare:
> >> >>
> >> >> # taskset 0,8 makedumpfile --num-threads 1 -c -d 31 vmcore vmcore-d31
> >> >>
> >> >> with:
> >> >>
> >> >> # taskset 0 makedumpfile -c -d 0 vmcore vmcore-d31
> >>
> >> I removed the -c option wrongly. What I wanted to write is:
> >>
> >> # taskset -c 0,8 makedumpfile --num-threads 1 -d 31 vmcore vmcore-d31
> >>
> >> and:
> >>
> >> # taskset -c 0 makedumpfile -d 31 vmcore vmcore-d31
> >>
> >> just in case...
>
> Why did you remove the -c option from makedumpfile?
> We are discussing the performance with compression.
> I think the following is correct:
>
> # taskset -c 0,8 makedumpfile --num-threads 1 [-c|-l|-p] -d 31 vmcore vmcore-d31
>
> and:
>
> # taskset -c 0 makedumpfile [-c|-l|-p] -d 31 vmcore vmcore-d31
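>
> (For reference, [-c|-l|-p] selects makedumpfile's compression format:
> zlib, LZO, or snappy, respectively.)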
>
Hi Atsushi Kumagai,
"taskset -c 0,8 makedumpfile --num-threads 1" "taskset -c 0 makedumpfile"
-c 52s 61s
-l 33s 17s
-p 33s 18s
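
The numbers above are average times over repeated runs, as before. For
reference, a minimal sketch of one way to collect such averages (the run
count and output names are illustrative, not the exact script I used):

#!/bin/bash
# Average the wall-clock time of one makedumpfile invocation over N runs.
runs=5
total=0
for i in $(seq "$runs"); do
    start=$(date +%s)
    taskset -c 0,8 makedumpfile --num-threads 1 -c -d 31 vmcore "vmcore-d31.$i"
    end=$(date +%s)
    total=$((total + end - start))
done
echo "average: $((total / runs))s"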
Thanks,
Chao Fan
>
> Thanks,
> Atsushi Kumagai
>
> >Hi HATAYAMA,
> >
> >The average time of
> ># taskset -c 0,8 makedumpfile --num-threads 1 -d 31 vmcore vmcore-d31
> >is 33s.
> >The average time of
> ># taskset -c 0 makedumpfile -d 31 vmcore vmcore-d31
> >is 18s.
> >
> >My test steps:
> >1. Set "core_collector makedumpfile -l --message-level 1 -d 31"
> >in /etc/kdump.conf.
> >2. Make a crash (one common method is sketched after these steps).
> >3. cd into the directory of the vmcore made by kdump.
> >4. In that directory, run
> ># taskset -c 0,8 makedumpfile --num-threads 1 -d 31 vmcore vmcore-d31
> >or
> ># taskset -c 0 makedumpfile -d 31 vmcore vmcore-d31
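> >
> >(For step 2, one common way to trigger a test crash, assuming the magic
> >SysRq key is enabled, is:)
> >
> ># echo 1 > /proc/sys/kernel/sysrq
> ># echo c > /proc/sysrq-trigger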
> >
> >If there are any problems, please tell me.
> >
> >Thanks,
> >Chao Fan
> >
> >> >>
> >> >> Also, I'm assuming that you've done these benchmarks on the kdump 1st
> >> >> kernel, not the kdump 2nd kernel. Is this correct?
> >> >>
> >> > Hi HATAYAMA,
> >> >
> >> > I tested in the first kernel, not in the kdump second kernel.
> >> >
> >>
> >> I see.
> >>
> >> --
> >> Thanks.
> >> HATAYAMA, Daisuke
> >
>
_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec