From: jszhang@marvell.com (Jisheng Zhang)
To: linux-arm-kernel@lists.infradead.org
Subject: Armada XP (MV78460): BUG in netdevice.h with maxcpus=2
Date: Fri, 8 Jan 2016 21:48:32 +0800
Message-ID: <20160108214832.0c3f3a1e@xhacker>
In-Reply-To: <20160108133104.GJ19062@n2100.arm.linux.org.uk>
Dear Russell,
On Fri, 8 Jan 2016 13:31:04 +0000 Russell King - ARM Linux wrote:
> On Fri, Jan 08, 2016 at 08:45:23PM +0800, Jisheng Zhang wrote:
> > let's assume a quad-core system booted with maxcpus=2. After booting:
> >
> > on arm64, the present cpus are cpu0 and cpu1;
> >
> > on arm, the present cpus are cpu0, cpu1, cpu2 and cpu3.
> >
> > the arm core code requires every platform to update the present map
> > in its smp_prepare_cpus() implementation, but only two or three
> > platforms actually do so.
>
> The behaviour of maxcpus= is architecture specific. Some architectures
> take notice of this to limit the number of present and possible CPUs,
> others ignore it. In any case, it limits the number of CPUs that come
> online at boot.
>
> However, that is a complete red herring, because what we're talking
> about is bringing up the network interface at some point later, when
> CPU hotplug may have changed the online state already: the kernel may
> have booted with CPU0-3 online, but CPUs 2 and 3 may have been taken
> offline before the network interface is brought up.
Oh yeah! On arm64, the BUG_ON can be reproduced in this case. So the
driver needs a fix. Thanks a lot!
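To spell out for the archives why the present mask is the wrong one
to walk at ndo_open() time, here is a made-up fragment (foo_port and
foo_percpu_setup() are hypothetical names, not the actual mvneta
code):

/*
 * for_each_present_cpu() would be broken here: after a CPU is
 * hotplugged out it stays in the present mask, so we would poke at
 * the per-CPU state of an offline CPU. Walk the online mask instead,
 * with hotplug held off for the duration of the walk.
 */
static void foo_setup_all_cpus(struct foo_port *pp)
{
	int cpu;

	get_online_cpus();
	for_each_online_cpu(cpu)
		foo_percpu_setup(pp, cpu);
	put_online_cpus();
}

...and even that only covers the walk itself; an interface whose
lifetime spans later hotplug events still needs the notifier you
describe below.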
>
> > What would be a better solution? Could you please guide me?
>
> I would suggest that you track the state of each CPU within your
> percpu-specific code, with appropriate locking to guard against
> concurrent access, and ensure that you register the CPU hotplug
> notifier just before walking the currently on-line CPUs.
>
> Also, walk the list of currently on-line CPUs (using, eg,
> smp_call_function) just before unregistering the hotplug notifier.
>
> Remember that your per-CPU bringup/takedown may be executed twice
> for the same CPU - once via the hotplug notifier and once via
> smp_call_function() - hence why you need locking and state tracking.
Got it. Thanks for the guidance, I'll try.
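Something like the sketch below is what I have in mind. Completely
untested, and all the foo_* names are made up rather than taken from
mvneta; it just tries to capture the pattern you describe, using the
current register_cpu_notifier() API:

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/notifier.h>
#include <linux/smp.h>
#include <linux/spinlock.h>

/* pp->lock is initialised with spin_lock_init() at probe time */
struct foo_port {
	spinlock_t		lock;	    /* guards is_stopped and started */
	bool			is_stopped; /* interface is (going) down */
	cpumask_t		started;    /* CPUs whose per-CPU setup ran */
	struct notifier_block	cpu_notifier;
};

/*
 * May run twice for the same CPU, once via the notifier and once via
 * the smp_call_function() walk: the started mask makes it idempotent,
 * and is_stopped keeps a late notifier from starting a CPU while
 * foo_stop() is tearing everything down.
 */
static void foo_percpu_start(void *arg)
{
	struct foo_port *pp = arg;
	unsigned long flags;

	spin_lock_irqsave(&pp->lock, flags);
	if (pp->is_stopped ||
	    cpumask_test_and_set_cpu(smp_processor_id(), &pp->started)) {
		spin_unlock_irqrestore(&pp->lock, flags);
		return;
	}
	/* ... enable this CPU's queues/interrupts here ... */
	spin_unlock_irqrestore(&pp->lock, flags);
}

static void foo_percpu_stop(void *arg)
{
	struct foo_port *pp = arg;
	unsigned long flags;

	spin_lock_irqsave(&pp->lock, flags);
	if (cpumask_test_and_clear_cpu(smp_processor_id(), &pp->started)) {
		/* ... disable this CPU's queues/interrupts here ... */
	}
	spin_unlock_irqrestore(&pp->lock, flags);
}

static int foo_cpu_notify(struct notifier_block *nb, unsigned long action,
			  void *hcpu)
{
	struct foo_port *pp = container_of(nb, struct foo_port, cpu_notifier);
	long cpu = (long)hcpu;
	unsigned long flags;

	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_ONLINE:
		smp_call_function_single(cpu, foo_percpu_start, pp, 1);
		break;
	case CPU_DEAD:
		/* forget the CPU so it is set up again if it returns;
		 * real code probably also wants CPU_DOWN_PREPARE to
		 * quiesce its queues before it actually goes away */
		spin_lock_irqsave(&pp->lock, flags);
		cpumask_clear_cpu(cpu, &pp->started);
		spin_unlock_irqrestore(&pp->lock, flags);
		break;
	}
	return NOTIFY_OK;
}

static int foo_open(struct net_device *dev)
{
	struct foo_port *pp = netdev_priv(dev);

	pp->is_stopped = false;
	pp->cpu_notifier.notifier_call = foo_cpu_notify;
	/* register *before* walking the online CPUs, so a CPU coming
	 * up in between is caught by one path or the other, possibly
	 * by both - hence the idempotent start above */
	register_cpu_notifier(&pp->cpu_notifier);
	on_each_cpu(foo_percpu_start, pp, 1);
	return 0;
}

static int foo_stop(struct net_device *dev)
{
	struct foo_port *pp = netdev_priv(dev);
	unsigned long flags;

	/* flag the stop first, so that the notifier cannot start a
	 * CPU that comes online while we are tearing down */
	spin_lock_irqsave(&pp->lock, flags);
	pp->is_stopped = true;
	spin_unlock_irqrestore(&pp->lock, flags);

	/* then walk the online CPUs, and only then unregister */
	on_each_cpu(foo_percpu_stop, pp, 1);
	unregister_cpu_notifier(&pp->cpu_notifier);
	return 0;
}

If I got that right, everything also funnels through just the two
per-CPU sites (foo_percpu_start/stop), which I guess touches on the
restructuring you mention below.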
>
> That fixes two of the sites. For the other site, I wonder whether
> it's possible to restructure the code so that there's no need for
> three sites at all: as is fairly obvious when thinking about the
> CPU hotplug case, you only get notification of a CPU coming online
> and of a CPU going away. It's been a while since I last looked at
> the mvneta code, though, so I can't really comment on that in
> detail.
>
Thank you very much,
Jisheng
Thread overview: 10+ messages
2016-01-08 7:50 Armada XP (MV78460): BUG in netdevice.h with maxcpus=2 Stefan Roese
2016-01-08 10:25 ` Jisheng Zhang
2016-01-08 10:51 ` Gregory CLEMENT
2016-01-08 10:53 ` Stefan Roese
2016-01-08 10:57 ` Russell King - ARM Linux
2016-01-08 12:45 ` Jisheng Zhang
2016-01-08 13:03 ` Jisheng Zhang
2016-01-08 13:21 ` Jisheng Zhang
2016-01-08 13:31 ` Russell King - ARM Linux
2016-01-08 13:48 ` Jisheng Zhang [this message]