From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anthony PERARD
Date: Tue, 10 Sep 2013 18:26:28 +0100
Message-ID: <1378833989-6313-2-git-send-email-anthony.perard@citrix.com>
In-Reply-To: <1378833989-6313-1-git-send-email-anthony.perard@citrix.com>
References: <1378833989-6313-1-git-send-email-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [Qemu-devel] [PATCH v4 1/2] xen: Fix vcpu initialization.
To: qemu-devel@nongnu.org
Cc: Anthony PERARD, Stefano Stabellini

Each vcpu needs an evtchn bound in QEMU, even those that are offline at
QEMU initialisation.
Signed-off-by: Anthony PERARD
---
 xen-all.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index eb13111..f4c3ffa 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -612,13 +612,13 @@ static ioreq_t *cpu_get_ioreq(XenIOState *state)
     }

     if (port != -1) {
-        for (i = 0; i < smp_cpus; i++) {
+        for (i = 0; i < max_cpus; i++) {
             if (state->ioreq_local_port[i] == port) {
                 break;
             }
         }

-        if (i == smp_cpus) {
+        if (i == max_cpus) {
             hw_error("Fatal error while trying to get io event!\n");
         }

@@ -1105,10 +1105,10 @@ int xen_hvm_init(void)
         hw_error("map buffered IO page returned error %d", errno);
     }

-    state->ioreq_local_port = g_malloc0(smp_cpus * sizeof (evtchn_port_t));
+    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));

     /* FIXME: how about if we overflow the page here? */
-    for (i = 0; i < smp_cpus; i++) {
+    for (i = 0; i < max_cpus; i++) {
         rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid,
                                         xen_vcpu_eport(state->shared_page, i));
         if (rc == -1) {
--
Anthony PERARD