xen-devel.lists.xenproject.org archive mirror
From: Keir Fraser <keir@xen.org>
To: Juergen Gross <juergen.gross@ts.fujitsu.com>
Cc: "Liu, Jinsong" <jinsong.liu@intel.com>,
	xen-devel@lists.xensource.com, mark.langsdorf@amd.com
Subject: Re: [PATCH] use per-cpu variables in cpufreq
Date: Sat, 28 May 2011 08:52:49 +0100	[thread overview]
Message-ID: <CA066861.2E032%keir@xen.org> (raw)
In-Reply-To: <4DDFA72D.2060806@ts.fujitsu.com>

On 27/05/2011 14:29, "Juergen Gross" <juergen.gross@ts.fujitsu.com> wrote:

> On 05/27/11 15:11, Keir Fraser wrote:
>> On 27/05/2011 12:11, "Juergen Gross"<juergen.gross@ts.fujitsu.com>  wrote:
>> 
>>> The cpufreq driver used some local arrays indexed by cpu number. This patch
>>> replaces those arrays by per-cpu variables. The AMD and INTEL specific parts
>>> used different per-cpu data structures with nearly identical semantics.
>>> Fold the two structures into one by adding a generic architecture data item.
>> Xen's per-cpu data gets freed across cpu offline/online, whereas cpu-indexed
>> arrays of course do not. Will the cpufreq state be correctly handled across
>> offline/online if we switch to per-cpu vars?
> 
> As far as I could see, yes. The data should only be used for cpus with
> a valid acpid->cpuid translation, which is created when a cpu is going
> online and destroyed when it is going offline again.

That simply isn't true. acpiid_to_apicid[] is populated during boot and
entries are never destroyed.

Specifically, my fear is that this data gets pushed into the hypervisor
once-only during dom0 boot (via XENPF_set_processor_pminfo). If it is freed
during processor offline, we lose it forever and have no power management
when/if a CPU is brought back online. Worse, I suspect your patch as it
stands will crash if some CPUs are offline during boot, as you'll dereference
their per_cpu area, which doesn't actually exist unless a CPU is online. You
can test this for yourself by adding a maxcpus=1 boot parameter for Xen.

The folding of the Intel/AMD structures might still be interesting, and
probably belongs as a separate patch anyway.

Cc'ing Intel and AMD guys to confirm this.

 -- Keir

> It would be nice, however, if the INTEL and/or AMD code owners could
> give an ack on this...
> 
> 
> Juergen

Thread overview: 13+ messages
2011-05-27 11:11 [PATCH] use per-cpu variables in cpufreq Juergen Gross
2011-05-27 13:11 ` Keir Fraser
2011-05-27 13:29   ` Juergen Gross
2011-05-28  7:52     ` Keir Fraser [this message]
2011-05-30  5:47       ` Juergen Gross
2011-05-30  9:45         ` Keir Fraser
2011-05-31  1:51           ` Tian, Kevin
2011-05-31  7:31             ` Keir Fraser
2011-05-31  7:37             ` Liu, Jinsong
2011-05-30  8:06       ` Tian, Kevin
2011-05-30 15:33         ` Liu, Jinsong
2011-06-10 19:00 ` Langsdorf, Mark
2011-06-14  9:04   ` Juergen Gross
