From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:58063)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1bNjuc-0002yY-MY
	for qemu-devel@nongnu.org; Thu, 14 Jul 2016 12:54:59 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1bNjub-00008y-Is
	for qemu-devel@nongnu.org; Thu, 14 Jul 2016 12:54:58 -0400
Received: from mx1.redhat.com ([209.132.183.28]:51037)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1bNjub-00008u-D6
	for qemu-devel@nongnu.org; Thu, 14 Jul 2016 12:54:57 -0400
From: Igor Mammedov
Date: Thu, 14 Jul 2016 18:54:34 +0200
Message-Id: <1468515285-173356-6-git-send-email-imammedo@redhat.com>
In-Reply-To: <1468515285-173356-1-git-send-email-imammedo@redhat.com>
References: <1468515285-173356-1-git-send-email-imammedo@redhat.com>
Subject: [Qemu-devel] [PATCH v4 05/16] pc: enforce adding CPUs contiguously
 and removing them in opposite order
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: qemu-devel@nongnu.org
Cc: pkrempa@redhat.com, ehabkost@redhat.com, mst@redhat.com,
	eduardo.otubo@profitbricks.com, Bandan Das

It will still allow us to use cpu_index as the migration instance_id,
since when CPUs are added contiguously (from the first to the last) and
removed in the opposite order, cpu_index stays stable and is
reproducible on the destination side.

Signed-off-by: Igor Mammedov
---
While there is work in progress to support migration when there are
holes in the cpu_index range resulting from out-of-order plug or
unplug, this patch is intended as a last resort if no easy, risk-free
and elegant solution emerges before the 2.7 dev cycle ends.
---
 hw/i386/pc.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 33c5f97..75a92d0 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1762,6 +1762,23 @@ static void pc_cpu_unplug_request_cb(HotplugHandler *hotplug_dev,
         goto out;
     }
 
+    if (idx < pcms->possible_cpus->len - 1 &&
+        pcms->possible_cpus->cpus[idx + 1].cpu != NULL) {
+        X86CPU *cpu;
+
+        for (idx = pcms->possible_cpus->len - 1;
+             pcms->possible_cpus->cpus[idx].cpu == NULL; idx--) {
+            ;;
+        }
+
+        cpu = X86_CPU(pcms->possible_cpus->cpus[idx].cpu);
+        error_setg(&local_err, "CPU [socket-id: %u, core-id: %u,"
+                   " thread-id: %u] should be removed first",
+                   cpu->socket_id, cpu->core_id, cpu->thread_id);
+        goto out;
+
+    }
+
     hhc = HOTPLUG_HANDLER_GET_CLASS(pcms->acpi_dev);
     hhc->unplug_request(HOTPLUG_HANDLER(pcms->acpi_dev), dev, &local_err);
 
@@ -1860,6 +1877,23 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
         return;
     }
 
+    if (idx != 0 && pcms->possible_cpus->cpus[idx - 1].cpu == NULL) {
+        PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+
+        for (idx = 1; pcms->possible_cpus->cpus[idx].cpu != NULL; idx++) {
+            ;;
+        }
+
+        x86_topo_ids_from_apicid(pcms->possible_cpus->cpus[idx].arch_id,
+                                 smp_cores, smp_threads, &topo);
+
+        if (!pcmc->legacy_cpu_hotplug) {
+            error_setg(errp, "CPU [socket: %u, core: %u, thread: %u] should be"
+                       " added first", topo.pkg_id, topo.core_id, topo.smt_id);
+            return;
+        }
+    }
+
     /* if 'address' properties socket-id/core-id/thread-id are not set, set them
      * so that query_hotpluggable_cpus would show correct values
      */
-- 
2.7.4