Kexec Archive on lore.kernel.org
From: "Zhou, Wenjian/周文剑" <zhouwj-fnst@cn.fujitsu.com>
To: Minfei Huang <mhuang@redhat.com>
Cc: kexec@lists.infradead.org
Subject: Re: [PATCH v4] Improve the performance of --num-threads -d 31
Date: Fri, 1 Apr 2016 19:21:45 +0800
Message-ID: <56FE59C9.6060307@cn.fujitsu.com>
In-Reply-To: <20160401062714.GA23403@dhcp-128-25.nay.redhat.com>

On 04/01/2016 02:27 PM, Minfei Huang wrote:
> On 03/31/16 at 05:09pm, "Zhou, Wenjian/周文剑" wrote:
>> Hello Minfei,
>>
>> Thanks for your results.
>> And I have some questions.
>>
>> On 03/31/2016 04:38 PM, Minfei Huang wrote:
>>> Hi, Zhou.
>>>
>>> I have tested the increasing patch on 4T memory machine.
>>>
>>> makedumpfile fails to dump the vmcore if only about 384 MB of memory is
>>> reserved for the 2nd kernel by crashkernel=auto. But once the reserved
>>> memory is enlarged to 10 GB, makedumpfile can dump the vmcore successfully.
>>>
>>
>> Will it fail with patch v3? or just v4?
>
> Both v3 and v4 can work well, once reserved memory is enlarged manually.
>
>> I don't think it is a problem.
>> If 128 CPUs are enabled in the second kernel, there won't be much memory left out of 384 MB total.
>
> Enable 128 CPUs with 1GB reserved memory.
> kdump:/# /sysroot/bin/free -m
>                total        used        free      shared  buff/cache   available
> Mem:            976          97         732           6         146         774
>
> Enable 1 CPU with 1GB reserved memory.
> kdump:/# /sysroot/bin/free -m
>                total        used        free      shared  buff/cache   available
> Mem:            991          32         873           6          85         909
>
> The extra 127 enabled CPUs consume about 65 MB, so I think it is
> acceptable in the kdump kernel.
>
> From the test result, most of the memory is consumed by makedumpfile
> itself. crashkernel=auto no longer works if option --num-threads is
> set. Worse, there is no warning telling the user to enlarge the
> reserved memory.
>

Yes, we should warn users if they want to use too many threads.
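Such a reminder could start as a simple pre-flight check. The sketch below is a hypothetical shell wrapper, not makedumpfile code; the ~512 KB-per-thread figure is only an assumption read off the free(1) numbers quoted earlier, and the real per-thread cost depends on the cache design.

```shell
# Hypothetical pre-flight check (not part of makedumpfile).
# Assumption: each extra thread needs roughly 512 KB; the true cost may differ.
num_threads=128
per_thread_kb=512
avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
need_kb=$((num_threads * per_thread_kb))
if [ "$need_kb" -gt "$avail_kb" ]; then
    echo "warning: --num-threads $num_threads may need ~$((need_kb / 1024)) MB extra," \
         "but only $((avail_kb / 1024)) MB is available" >&2
fi
```

With 128 threads this estimates 64 MB of extra memory, which already matters when crashkernel=auto reserves only 384 MB.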

>>
>> And I think it will also work if the reserved memory is set to 1G.
>
> Yes, makedumpfile can work well under 1GB reserved memory.
>
>>
>>> The cache should be dropped before testing; otherwise makedumpfile will
>>> fail to dump the vmcore:
>>> echo 3 > /proc/sys/vm/drop_caches
>>> Maybe there is some cleanup we can do to avoid this.
>>>
>>> Following is the result with different parameter for option
>>> --num-threads.
>>>
>>> makedumpfile -l --num-threads 128 --message-level 1 -d 31 /proc/vmcore a.128
>>> real    5m34.116s
>>> user    103m42.531s
>>> sys 86m12.586s
> [ snip ]
>>> makedumpfile -l --num-threads 0 --message-level 1 -d 31 /proc/vmcore a.0
>>> real    3m46.531s
>>> user    3m29.371s
>>> sys 0m16.909s
>>>
>>> makedumpfile.back -l --message-level 1 -d 31 /proc/vmcore a
>>> real    3m55.712s
>>> user    3m39.254s
>>> sys 0m16.287s
>>>
>>> Once the reserved memory is enlarged, makedumpfile works well with or
>>> without this increasing patch.
>>>
>>> But there is another issue I found during testing: makedumpfile may
>>> hang at about 24%. This issue also occurs with option --num-threads 64.
>>>
>>
>> Will it occur with patch v3?
>> If it does not occur, does that mean neither of the previous two increasing patches works?
>>
>> And did you test it with or without the increasing patch?
>
> without this increasing patch, v4 works well.
>

Do you mean makedumpfile won't hang without the increasing patch?

-- 
Thanks
Zhou
>>
>>> makedumpfile -l --num-threads 128 --message-level 1 -d 31 /proc/vmcore a.128
>>> Excluding unnecessary pages        : [100.0 %] |
>>> Excluding unnecessary pages        : [100.0 %] /
>>> Excluding unnecessary pages        : [100.0 %] -
>>> Copying data                       : [ 11.2 %] |
>>> Copying data                       : [ 12.4 %] -
>>> Excluding unnecessary pages        : [100.0 %] \
>>> Excluding unnecessary pages        : [100.0 %] |
>>> Copying data                       : [ 23.6 %] -
>>> Copying data                       : [ 24.4 %] /
>>>
>>
>> Could you help me find which line of the code is running when it hangs?
>> makedumpfile may be stuck in a loop it cannot exit because of some bug.
>
> This issue happens very occasionally. I will update you once I hit it again.
>
> Thanks
> Minfei
>
>
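When the hang does reproduce, one generic way to find where makedumpfile sits (a debugging sketch, not something tested in this thread) is to read the task's kernel wait channel and take a user-space backtrace of the stuck process:

```shell
# Generic hang diagnosis; assumes a makedumpfile process is currently stuck.
pid=$(pidof makedumpfile)
# Kernel side: the function the task is blocked in (readable without root)
cat /proc/"$pid"/wchan; echo
# User side: backtrace of every thread (needs gdb available in the kdump initrd)
gdb -p "$pid" -batch -ex 'thread apply all bt'
```

The wchan reading distinguishes a task blocked in the kernel from one spinning in user space, which is exactly the loop-vs-wait question raised above.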

_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec


Thread overview: 33+ messages
2016-03-09  0:27 [PATCH v4] Improve the performance of --num-threads -d 31 Zhou Wenjian
2016-03-09  0:35 ` "Zhou, Wenjian/周文剑"
2016-03-11  1:00 ` "Zhou, Wenjian/周文剑"
2016-03-11  3:03   ` Minoru Usui
2016-03-11  3:10     ` "Zhou, Wenjian/周文剑"
2016-03-11  4:55       ` Atsushi Kumagai
2016-03-11  5:33   ` Minfei Huang
2016-03-15  6:34 ` Minfei Huang
2016-03-15  7:12   ` "Zhou, Wenjian/周文剑"
2016-03-15  7:38     ` Minfei Huang
2016-03-15  9:33     ` Minfei Huang
2016-03-16  1:55       ` "Zhou, Wenjian/周文剑"
2016-03-16  8:04         ` Minfei Huang
2016-03-16  8:24           ` Minfei Huang
2016-03-16  8:26           ` "Zhou, Wenjian/周文剑"
     [not found]             ` <B049E864-7426-4817-96FA-8E3CCA59CA24@redhat.com>
2016-03-16  8:59               ` "Zhou, Wenjian/周文剑"
2016-03-16  9:30                 ` Minfei Huang
2016-03-15  8:35   ` "Zhou, Wenjian/周文剑"
2016-03-18  2:46   ` "Zhou, Wenjian/周文剑"
2016-03-18  4:16     ` Minfei Huang
2016-03-18  5:48       ` "Zhou, Wenjian/周文剑"
2016-03-24  5:28         ` "Zhou, Wenjian/周文剑"
2016-03-24  5:39           ` Minfei Huang
2016-03-25  2:57             ` Atsushi Kumagai
2016-03-28  1:23               ` "Zhou, Wenjian/周文剑"
2016-03-28  5:43                 ` Atsushi Kumagai
2016-03-31  8:38         ` Minfei Huang
2016-03-31  9:09           ` "Zhou, Wenjian/周文剑"
2016-04-01  6:27             ` Minfei Huang
2016-04-01 11:21               ` "Zhou, Wenjian/周文剑" [this message]
2016-04-01 13:15                 ` Minfei Huang
2016-04-04  5:46                   ` Atsushi Kumagai
2016-04-05  9:18                     ` Minfei Huang
