From: Xunlei Pang <xpang@redhat.com>
To: Pratyush Anand <panand@redhat.com>,
xlpang@redhat.com, Dave Young <dyoung@redhat.com>,
Pingfan Liu <piliu@redhat.com>
Cc: kexec@lists.infradead.org, Baoquan He <bhe@redhat.com>,
kernelfans@gmail.com
Subject: Re: [PATCHv2 2/2] [fs] proc/vmcore: check the dummy place holder for offline cpu to avoid warning
Date: Wed, 21 Dec 2016 12:52:05 +0800 [thread overview]
Message-ID: <585A0A75.7000703@redhat.com> (raw)
In-Reply-To: <569964b8-aa13-11cf-5825-bb830a940277@redhat.com>
On 12/21/2016 at 11:57 AM, Pratyush Anand wrote:
>
>
> On Wednesday 21 December 2016 08:56 AM, Xunlei Pang wrote:
>> On 12/20/2016 at 11:38 PM, Pratyush Anand wrote:
>>>
>>>
>>> On Monday 19 December 2016 08:10 AM, Dave Young wrote:
>>>> Hi, Pingfan
>>>>
>>>> On 12/19/16 at 10:08am, Pingfan Liu wrote:
>>>>>> kexec-tools always allocates program headers for present cpus. But
>>>>>> when crashing, offline cpus have dummy headers. We do not copy these
>>>>>> dummy notes into the ELF file, so there is also no need to warn about them.
>>>> I still think this is not worth such a fix. If you see a lot of warnings
>>>> in the case of large cpu counts, you can change the pr_warn to
>>>> pr_warn_once; we do not care about the null cpu notes as long as they do
>>>> nothing bad to the vmcore.
>>>>
>>>
>>> I agree. The warning is more like information here. Maybe we can count the number of times real_sz was 0, and then print an info message at the end instead of a warning, like... "N CPUs would have been offline; PT_NOTE entries were absent for them."
>>
>> Well, OTOH the warning may also be due to some user-space misuse; we can't distinguish that without adding extra information.
>
> Yes, yes.. I agree; I meant that the above info is just indicative. Maybe "might have been" would be better wording than "would have been" in the above info print message.
>
>
>>
>> Another possible user-space fix would be: first fix kexec-tools to add notes only for online cpus,
>> then use udev rules (cpu online/offline events) to automatically trigger a kdump kernel reload.
>
> Hmm.. this is certainly possible. But can we do much even when we get the info that the PT_NOTE was compromised by user space?
>
> Therefore, I am of the view that if we are at all concerned about the number of warning messages in the case of multiple offline cpus, then we can just print the total number of NULL PT_NOTE entries at the end of the loop.
Yes, agree.
Regards,
Xunlei
_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec
Thread overview: 8+ messages
2016-12-19 2:08 [PATCHv2 1/2] kexec: add a dummy note for each offline cpu Pingfan Liu
2016-12-19 2:08 ` [PATCHv2 2/2] [fs] proc/vmcore: check the dummy place holder for offline cpu to avoid warning Pingfan Liu
2016-12-19 2:40 ` Dave Young
2016-12-20 15:38 ` Pratyush Anand
2016-12-21 3:26 ` Xunlei Pang
2016-12-21 3:57 ` Pratyush Anand
2016-12-21 4:52 ` Xunlei Pang [this message]
2016-12-21 7:15 ` Liu ping fan