From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Yi Sun <yi.y.sun@linux.intel.com>
Cc: kevin.tian@intel.com, wei.liu2@citrix.com,
he.chen@linux.intel.com, andrew.cooper3@citrix.com,
dario.faggioli@citrix.com, ian.jackson@eu.citrix.com,
mengxu@cis.upenn.edu, jbeulich@suse.com,
chao.p.peng@linux.intel.com, xen-devel@lists.xenproject.org
Subject: Re: [PATCH v6 04/24] x86: refactor psr: implement CPU init and free flow.
Date: Wed, 8 Feb 2017 14:03:15 -0500
Message-ID: <20170208190314.GB32168@x230.dumpdata.com>
In-Reply-To: <20170208163415.GB30224@localhost.localdomain>
> > +static void cpu_fini_work(unsigned int cpu)
> > +{
> > + unsigned int socket = cpu_to_socket(cpu);
> > +
> > + if ( !socket_cpumask[socket] || cpumask_empty(socket_cpumask[socket]) )
>
> I fear I don't understand this.
>
> Looking at 65a399ac it says:
>
> + .notifier_call = cpu_callback,
> + /*
> + * Ensure socket_cpumask is still valid in CPU_DEAD notification
> + * (E.g. our CPU_DEAD notification should be called ahead of
> + * cpu_smpboot_free).
>
> Which means that socket_cpumask[socket] should have a value.
>
> In fact cpumask_any(socket_cpumask[socket]) will return true at this
> point.
>
> Which means that code above gets called if this psr_cpu_fini is called
> _after_ cpu_smpboot_free (b/c socket_cpumask[socket] will be NULL), see
> cpu_smpboot_free.
>
> But with .priority being -1 that won't happen.
>
> But let's ignore that; let's say it does get called _after_
> cpu_smpboot_free. In which case the first part of the 'if' condition
> is true and we end up doing: cpumask_empty(NULL), which will result
> in a page fault.
>
> I think this is a bug.
I missed the fact that we call this function on CPU_UP_CANCELED
_and_ CPU_DEAD, and that remove_siblinginfo is called before CPU_DEAD.
So say the socket is being onlined and we are the first CPU on
it (CPU=2).
1) CPU_UP_PREPARE notifier calls psr_cpu_prepare which allocates
'feat_l3_cat'.
2) Some other notifier fails and we end up calling CPU_UP_CANCELED:
psr_cpu_fini -> cpu_fini_work.
Since __cpu_up has not been called, that means
socket_cpumask[socket] == NULL
and we enter free_feature.
Effectively it does 'if ( !(socket_info + socket) )', and that is
false, as init_psr was the one initializing that array, which means
the pointer (info, aka socket_info + socket) certainly is not NULL.
Anyhow, this means free_feature tries to walk the list, but 'info'
is full of zeros, so list_for_each_entry_safe is going to blow up
when trying to set 'n' (as 'pos' is zero, aka NULL).
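For reference, this is roughly what that macro expands to, so with a
zeroed list head the very first iteration already chases a NULL ->next:

    #define list_for_each_entry_safe(pos, n, head, member)              \
        for (pos = list_entry((head)->next, typeof(*pos), member),      \
             n = list_entry(pos->member.next, typeof(*pos), member);    \
             &pos->member != (head);                                    \
             pos = n, n = list_entry(n->member.next, typeof(*n), member))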
Perhaps you want an INIT_LIST_HEAD in 'init_psr' so that free_feature
can be idempotent?
Or perhaps you need '&&' in the if conditional (and naturally
remove the !)?
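For the INIT_LIST_HEAD route, something along these lines perhaps
(untested sketch; I am guessing at the structure and field names this
series uses, e.g. 'feat_list' in 'struct psr_socket_info' and
'struct feat_node', so treat it as illustration only):

    static void __init init_psr(void)
    {
        unsigned int i;

        socket_info = xzalloc_array(struct psr_socket_info, nr_sockets);
        if ( !socket_info )
            return;

        /*
         * Make every per-socket list head valid from the start so that
         * free_feature() can safely run on a socket that was never
         * populated (e.g. the CPU_UP_CANCELED path).
         */
        for ( i = 0; i < nr_sockets; i++ )
            INIT_LIST_HEAD(&socket_info[i].feat_list);
    }

    static void free_feature(struct psr_socket_info *info)
    {
        struct feat_node *feat, *tmp;

        if ( !info )
            return;

        /* Safe even if nothing was ever added to the list. */
        list_for_each_entry_safe(feat, tmp, &info->feat_list, list)
        {
            list_del(&feat->list);
            xfree(feat);
        }
    }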
I wonder what other CPU notifiers are guilty of this..
Let's try another example:
1) CPU_UP_PREPARE notifier calls psr_cpu_prepare which allocates
'feat_l3_cat'.
2) All notifiers were OK and __cpu_up has been called, so
socket_cpumask[socket] = <some data>.
3) All notifiers are called with CPU_ONLINE.
All good.
..some time later ..
4) The CPU is brought down. Notifiers for CPU_DOWN_PREPARE are called
[psr is not part of it].
5) The CPU_DEAD notifiers are called. They all have to return
NOTIFY_DONE, otherwise we hit the 'BUG_ON'.
We call psr_cpu_fini -> cpu_fini_work.
socket_cpumask[socket] has some data, so the first part of the
condition:
( !socket_cpumask[socket] ||
is false, and the second:
cpumask_empty(socket_cpumask[socket]) )
is true, as take_cpu_down -> __cpu_disable -> remove_siblinginfo
was called before us (and in this example it is the only CPU in the
socket).
And we call free_feature, which is correct, and we free our data.
And the 'cpumask_empty' check guards us so that we only call this
on the very last CPU.
And we would still call it if the conditional were:
if ( socket_cpumask[socket] && cpumask_empty(socket_cpumask[socket]) )
I think?
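I.e. something like this (untested, just to illustrate the guard):

    static void cpu_fini_work(unsigned int cpu)
    {
        unsigned int socket = cpu_to_socket(cpu);

        /*
         * Only tear the per-socket data down once the mask exists and
         * the last CPU of the socket has already been removed from it.
         */
        if ( socket_cpumask[socket] && cpumask_empty(socket_cpumask[socket]) )
            free_feature(socket_info + socket);
    }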
Feel free to poke holes at my analysis - I am sure I missed some edge
case.