From: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
To: Nathan Fontenot <nfont@austin.ibm.com>,
Benjamin Herrenschmidt <benh@au1.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>,
Gautham R Shenoy <ego@in.ibm.com>,
linux-kernel@vger.kernel.org,
Arun R Bharadwaj <arun@linux.vnet.ibm.com>,
Andrew Morton <akpm@linux-foundation.org>,
linuxppc-dev@lists.ozlabs.org, Ingo Molnar <mingo@elte.hu>
Subject: [PATCH v6 1/2] pseries: Add code to online/offline CPUs of a DLPAR node
Date: Fri, 27 Nov 2009 01:28:55 +0530
Message-ID: <20091126195855.29159.29613.stgit@drishya>
In-Reply-To: <20091126195121.29159.66098.stgit@drishya>
From: Gautham R Shenoy <ego@in.ibm.com>
Currently, CPU allocation/deallocation on pSeries is a two-step process
from userspace:

- Set the indicators and update the device tree by writing to the sysfs
  tunable "probe" during allocation and "release" during deallocation.

- Online/offline the CPUs of the allocated/to-be-deallocated node by
  writing to the sysfs tunable "online".

This patch adds kernel code to online the CPUs soon after they have been
allocated, and to offline them just before they are deallocated. This way,
the userspace tool that performs DLPAR operations only has to deal with a
single set of sysfs tunables, namely "probe" and "release"; a sketch of
the resulting flow follows below.
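
For illustration, a minimal userspace allocation flow with this change
could look like the sketch below. The sysfs path, the drc-index value
0x10000001, and its formatting are assumptions for the example, not
taken from this patch:

/*
 * Hypothetical sketch: a single write to "probe" now both allocates
 * the node and onlines its CPUs; the separate write to each
 * cpuN/online file goes away.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/devices/system/cpu/probe", "w");

	if (!f)
		return 1;
	fprintf(f, "0x10000001");	/* made-up drc-index */
	return fclose(f) ? 1 : 0;	/* 0 from fclose() means success */
}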
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Acked-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
---
arch/powerpc/platforms/pseries/dlpar.c | 101 ++++++++++++++++++++++++++++++++
1 files changed, 101 insertions(+), 0 deletions(-)
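
Note: the functions below rely on the per-cpu offline-state tracking
from offline_states.h, which is introduced elsewhere in this series.
The assumed interface is roughly the following sketch (names follow the
companion cede-support patch; illustrative, not part of this patch):

/* Assumed shape of the offline_states.h interface (sketch). */
enum cpu_state_vals {
	CPU_STATE_OFFLINE,	/* truly stopped via rtas stop-self */
	CPU_STATE_INACTIVE,	/* ceded to the hypervisor; H_PROD wakes it */
	CPU_STATE_ONLINE,
	CPU_MAX_OFFLINE_STATES
};

enum cpu_state_vals get_cpu_current_state(int cpu);
void set_cpu_current_state(int cpu, enum cpu_state_vals state);
enum cpu_state_vals get_preferred_offline_state(int cpu);
void set_preferred_offline_state(int cpu, enum cpu_state_vals state);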
diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c
index fe8d4b3..642e1b2 100644
--- a/arch/powerpc/platforms/pseries/dlpar.c
+++ b/arch/powerpc/platforms/pseries/dlpar.c
@@ -16,6 +16,7 @@
 #include <linux/proc_fs.h>
 #include <linux/spinlock.h>
 #include <linux/cpu.h>
+#include "offline_states.h"
 
 #include <asm/prom.h>
 #include <asm/machdep.h>
@@ -287,6 +288,98 @@ int dlpar_detach_node(struct device_node *dn)
 	return 0;
 }
 
+int online_node_cpus(struct device_node *dn)
+{
+	int rc = 0;
+	unsigned int cpu;
+	int len, nthreads, i;
+	const u32 *intserv;
+
+	intserv = of_get_property(dn, "ibm,ppc-interrupt-server#s", &len);
+	if (!intserv)
+		return -EINVAL;
+
+	nthreads = len / sizeof(u32);
+
+	cpu_maps_update_begin();
+	for (i = 0; i < nthreads; i++) {
+		for_each_present_cpu(cpu) {
+			if (get_hard_smp_processor_id(cpu) != intserv[i])
+				continue;
+			BUG_ON(get_cpu_current_state(cpu)
+					!= CPU_STATE_OFFLINE);
+			cpu_maps_update_done();
+			rc = cpu_up(cpu);
+			if (rc)
+				goto out;
+			cpu_maps_update_begin();
+
+			break;
+		}
+		if (cpu == num_possible_cpus())
+			printk(KERN_WARNING "Could not find cpu to online "
+			       "with physical id 0x%x\n", intserv[i]);
+	}
+	cpu_maps_update_done();
+
+out:
+	return rc;
+
+}
+
+int offline_node_cpus(struct device_node *dn)
+{
+	int rc = 0;
+	unsigned int cpu;
+	int len, nthreads, i;
+	const u32 *intserv;
+
+	intserv = of_get_property(dn, "ibm,ppc-interrupt-server#s", &len);
+	if (!intserv)
+		return -EINVAL;
+
+	nthreads = len / sizeof(u32);
+
+	cpu_maps_update_begin();
+	for (i = 0; i < nthreads; i++) {
+		for_each_present_cpu(cpu) {
+			if (get_hard_smp_processor_id(cpu) != intserv[i])
+				continue;
+
+			if (get_cpu_current_state(cpu) == CPU_STATE_OFFLINE)
+				break;
+
+			if (get_cpu_current_state(cpu) == CPU_STATE_ONLINE) {
+				cpu_maps_update_done();
+				rc = cpu_down(cpu);
+				if (rc)
+					goto out;
+				cpu_maps_update_begin();
+				break;
+
+			}
+
+			/*
+			 * The cpu is in CPU_STATE_INACTIVE.
+			 * Upgrade its state to CPU_STATE_OFFLINE.
+			 */
+			set_preferred_offline_state(cpu, CPU_STATE_OFFLINE);
+			BUG_ON(plpar_hcall_norets(H_PROD, intserv[i])
+					!= H_SUCCESS);
+			__cpu_die(cpu);
+			break;
+		}
+		if (cpu == num_possible_cpus())
+			printk(KERN_WARNING "Could not find cpu to offline "
+			       "with physical id 0x%x\n", intserv[i]);
+	}
+	cpu_maps_update_done();
+
+out:
+	return rc;
+
+}
+
 #define DR_ENTITY_SENSE		9003
 #define DR_ENTITY_PRESENT	1
 #define DR_ENTITY_UNUSABLE	2
@@ -385,6 +478,8 @@ static ssize_t dlpar_cpu_probe(const char *buf, size_t count)
 		dlpar_free_cc_nodes(dn);
 	}
 
+	rc = online_node_cpus(dn);
+
 	return rc ? rc : count;
 }
@@ -404,6 +499,12 @@ static ssize_t dlpar_cpu_release(const char *buf, size_t count)
 		return -EINVAL;
 	}
 
+	rc = offline_node_cpus(dn);
+	if (rc) {
+		of_node_put(dn);
+		return -EINVAL;
+	}
+
 	rc = dlpar_release_drc(*drc_index);
 	if (rc) {
 		of_node_put(dn);
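
For context on the CPU_STATE_INACTIVE branch in offline_node_cpus(): an
inactive CPU sits ceded inside the hypervisor. The H_PROD hcall wakes it,
it re-reads its preferred offline state, and since that is now
CPU_STATE_OFFLINE it stops itself, letting __cpu_die() reap it. The
dying-CPU side (from the companion cede-support patch) behaves roughly
like this simplified, paraphrased sketch:

static void pseries_mach_cpu_die(void)
{
	unsigned int cpu = smp_processor_id();

	local_irq_disable();
	idle_task_exit();

	/* Stay ceded for as long as the preferred state is inactive;
	 * extended_cede_processor() returns after the CPU is prodded. */
	while (get_preferred_offline_state(cpu) == CPU_STATE_INACTIVE)
		extended_cede_processor(2 /* cede latency hint */);

	/* The preferred state is now CPU_STATE_OFFLINE: really stop. */
	set_cpu_current_state(cpu, CPU_STATE_OFFLINE);
	rtas_stop_self();
}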