Kexec Archive on lore.kernel.org
From: "Zhou, Wenjian/周文剑" <zhouwj-fnst@cn.fujitsu.com>
To: Atsushi Kumagai <ats-kumagai@wm.jp.nec.com>,
	Minfei Huang <mhuang@redhat.com>
Cc: "kexec@lists.infradead.org" <kexec@lists.infradead.org>
Subject: Re: [PATCH v4] Improve the performance of --num-threads -d 31
Date: Mon, 28 Mar 2016 09:23:23 +0800	[thread overview]
Message-ID: <56F8878B.8080909@cn.fujitsu.com> (raw)
In-Reply-To: <0910DD04CBD6DE4193FCF86B9C00BE9701E341B4@BPXM01GP.gisp.nec.co.jp>

On 03/25/2016 10:57 AM, Atsushi Kumagai wrote:
> Hello,
>
> This is just a quick note to inform you.
> I measured the memory consumption with -d31 by VmHWM in
> /proc/PID/status and compared them between v3 and v4 since
> Minfei said the problem only occurs in v4.
>
>              |          VmHWM[kB]
> num-thread  |      v3            v4
> ------------+--------------------------
>       1      |    20,516        20,516
>       2      |    20,624        20,628
>       4      |    20,832        20,832
>       8      |    21,292        21,288
>      16      |    22,240        22,236
>      32      |    24,096        24,100
>      64      |    27,900        27,888
>
> According to this result, the problem we face doesn't seem to be
> simply a lack of memory.
>

Yes, I had realized that, since there isn't much difference between v3 and v4.
And it is hard to investigate further until we get Minfei's result.

BTW, can you reproduce the bug?

> BTW, the memory consumption increases depending on num-thread,
> I think it should be considered in the calculate_cyclic_buffer_size().
>

I will think about it.

-- 
Thanks
Zhou

>
> Thanks,
> Atsushi Kumagai
>
> diff --git a/makedumpfile.c b/makedumpfile.c
> index 4075f3e..d5626f9 100644
> --- a/makedumpfile.c
> +++ b/makedumpfile.c
> @@ -44,6 +44,14 @@ extern int find_vmemmap();
>
>   char filename_stdout[] = FILENAME_STDOUT;
>
> +void
> +print_VmHWM(void)
> +{
> +       char command[64];
> +       sprintf(command, "grep VmHWM /proc/%d/status", getpid());
> +       system(command);
> +}
> +
>   /* Cache statistics */
>   static unsigned long long      cache_hit;
>   static unsigned long long      cache_miss;
> @@ -11185,5 +11193,7 @@ out:
>          }
>          free_elf_info();
>
> +       print_VmHWM();
> +
>          return retcd;
>   }
>
>
>> Hi, Zhou.
>>
>> I'm on holiday now, you can ask other people to help test, if necessary.
>>
>> Thanks
>> Minfei
>>
>>> On Mar 24, 2016, at 12:29, Zhou, Wenjian/周文剑 <zhouwj-fnst@cn.fujitsu.com> wrote:
>>>
>>> Hello Minfei,
>>>
>>> How do these two patches work?
>>>
>>> --
>>> Thanks
>>> Zhou
>>>
>>>> On 03/18/2016 01:48 PM, "Zhou, Wenjian/周文剑" wrote:
>>>>> On 03/18/2016 12:16 PM, Minfei Huang wrote:
>>>>>> On 03/18/16 at 10:46am, "Zhou, Wenjian/周文剑" wrote:
>>>>>> Hello Minfei,
>>>>>>
>>>>>> Since I can't reproduce the bug, I reviewed the patch and wrote an incremental patch.
>>>>>> Though there are some bugs in the incremental patch,
>>>>>> I wonder if the previous bug still exists with it applied.
>>>>>> Could you help me confirm that?
>>>>>
>>>>> OK. I will help verify this incremental patch.
>>>>
>>>> Thank you very much.
>>>>
>>>>>>
>>>>>> And I have another question.
>>>>>> Did it only occur in patch v4?
>>>>>
>>>>> This issue doesn't exist in v3. I have pasted the test result with
>>>>> --num-thread 32 in that thread.
>>>>>
>>>>> applied makedumpfile with option -d 31 --num-threads 32
>>>>> real    3m3.533s
>>>>
>>>> Oh, then the patch in the previous mail may not work.
>>>>
>>>> I'd appreciate it if you could also test the patch in this letter.
>>>>
>>>> I introduced a semaphore to fix the bug in v3,
>>>> so I want to know whether that is what affects the result.
>>>> The attached patch is based on v4 and removes the semaphore.
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> kexec mailing list
>>>> kexec@lists.infradead.org
>>>> http://lists.infradead.org/mailman/listinfo/kexec
>>>
>>>
>
>



Thread overview: 33+ messages
2016-03-09  0:27 [PATCH v4] Improve the performance of --num-threads -d 31 Zhou Wenjian
2016-03-09  0:35 ` "Zhou, Wenjian/周文剑"
2016-03-11  1:00 ` "Zhou, Wenjian/周文剑"
2016-03-11  3:03   ` Minoru Usui
2016-03-11  3:10     ` "Zhou, Wenjian/周文剑"
2016-03-11  4:55       ` Atsushi Kumagai
2016-03-11  5:33   ` Minfei Huang
2016-03-15  6:34 ` Minfei Huang
2016-03-15  7:12   ` "Zhou, Wenjian/周文剑"
2016-03-15  7:38     ` Minfei Huang
2016-03-15  9:33     ` Minfei Huang
2016-03-16  1:55       ` "Zhou, Wenjian/周文剑"
2016-03-16  8:04         ` Minfei Huang
2016-03-16  8:24           ` Minfei Huang
2016-03-16  8:26           ` "Zhou, Wenjian/周文剑"
     [not found]             ` <B049E864-7426-4817-96FA-8E3CCA59CA24@redhat.com>
2016-03-16  8:59               ` "Zhou, Wenjian/周文剑"
2016-03-16  9:30                 ` Minfei Huang
2016-03-15  8:35   ` "Zhou, Wenjian/周文剑"
2016-03-18  2:46   ` "Zhou, Wenjian/周文剑"
2016-03-18  4:16     ` Minfei Huang
2016-03-18  5:48       ` "Zhou, Wenjian/周文剑"
2016-03-24  5:28         ` "Zhou, Wenjian/周文剑"
2016-03-24  5:39           ` Minfei Huang
2016-03-25  2:57             ` Atsushi Kumagai
2016-03-28  1:23               ` "Zhou, Wenjian/周文剑" [this message]
2016-03-28  5:43                 ` Atsushi Kumagai
2016-03-31  8:38         ` Minfei Huang
2016-03-31  9:09           ` "Zhou, Wenjian/周文剑"
2016-04-01  6:27             ` Minfei Huang
2016-04-01 11:21               ` "Zhou, Wenjian/周文剑"
2016-04-01 13:15                 ` Minfei Huang
2016-04-04  5:46                   ` Atsushi Kumagai
2016-04-05  9:18                     ` Minfei Huang
