From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4D8080CA.6010804@redhat.com>
Date: Wed, 16 Mar 2011 10:20:10 +0100
From: Hans de Goede
Subject: Re: [Qemu-devel] [PATCH 3/4] qxl/spice: remove qemu_mutex_{un, }lock_iothread around dispatcher
References: <1300220228-27423-1-git-send-email-alevy@redhat.com> <1300220228-27423-4-git-send-email-alevy@redhat.com>
In-Reply-To: <1300220228-27423-4-git-send-email-alevy@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
List-Id: qemu-devel.nongnu.org
To: Alon Levy
Cc: qemu-devel@nongnu.org

Ack (assuming the issues with the previous 2 patches are fixed):

Acked-by: Hans de Goede

On 03/15/2011 09:17 PM, Alon Levy wrote:
> With the previous patch making sure get_command no longer needs to lock,
> there is no reason to drop the qemu iothread mutex in qxl.c and in
> ui/spice-display.c.
>
> The only locations where the lock remains are the cursor-related callbacks;
> that path is currently broken. It is only triggered when running spice and sdl,
> which is broken already before that.
> ---
>  hw/qxl.c           |    8 --------
>  ui/spice-display.c |   11 ++---------
>  2 files changed, 2 insertions(+), 17 deletions(-)
>
> diff --git a/hw/qxl.c b/hw/qxl.c
> index 2419236..72f204b 100644
> --- a/hw/qxl.c
> +++ b/hw/qxl.c
> @@ -707,10 +707,8 @@ static void qxl_hard_reset(PCIQXLDevice *d, int loadvm)
>      dprint(d, 1, "%s: start%s\n", __FUNCTION__,
>             loadvm ? " (loadvm)" : "");
>
> -    qemu_mutex_unlock_iothread();
>      d->ssd.worker->reset_cursor(d->ssd.worker);
>      d->ssd.worker->reset_image_cache(d->ssd.worker);
> -    qemu_mutex_lock_iothread();
>      qxl_reset_surfaces(d);
>      qxl_reset_memslots(d);
>
> @@ -840,9 +838,7 @@ static void qxl_reset_surfaces(PCIQXLDevice *d)
>  {
>      dprint(d, 1, "%s:\n", __FUNCTION__);
>      d->mode = QXL_MODE_UNDEFINED;
> -    qemu_mutex_unlock_iothread();
>      d->ssd.worker->destroy_surfaces(d->ssd.worker);
> -    qemu_mutex_lock_iothread();
>      memset(&d->guest_surfaces.cmds, 0, sizeof(d->guest_surfaces.cmds));
>  }
>
> @@ -911,9 +907,7 @@ static void qxl_destroy_primary(PCIQXLDevice *d)
>      dprint(d, 1, "%s\n", __FUNCTION__);
>
>      d->mode = QXL_MODE_UNDEFINED;
> -    qemu_mutex_unlock_iothread();
>      d->ssd.worker->destroy_primary_surface(d->ssd.worker, 0);
> -    qemu_mutex_lock_iothread();
>  }
>
>  static void qxl_set_mode(PCIQXLDevice *d, int modenr, int loadvm)
> @@ -983,10 +977,8 @@ static void ioport_write(void *opaque, uint32_t addr, uint32_t val)
>      case QXL_IO_UPDATE_AREA:
>      {
>          QXLRect update = d->ram->update_area;
> -        qemu_mutex_unlock_iothread();
>          d->ssd.worker->update_area(d->ssd.worker, d->ram->update_surface,
>                                     &update, NULL, 0, 0);
> -        qemu_mutex_lock_iothread();
>          break;
>      }
>      case QXL_IO_NOTIFY_CMD:
> diff --git a/ui/spice-display.c b/ui/spice-display.c
> index a9ecee0..f3dfba8 100644
> --- a/ui/spice-display.c
> +++ b/ui/spice-display.c
> @@ -78,9 +78,7 @@ SimpleSpiceUpdate *qemu_spice_create_update(SimpleSpiceDisplay *ssd)
>      uint8_t *src, *dst;
>      int by, bw, bh;
>
> -    qemu_mutex_lock_iothread();
>      if (qemu_spice_rect_is_empty(&ssd->dirty)) {
> -        qemu_mutex_unlock_iothread();
>          return NULL;
>      };
>
> @@ -141,7 +139,6 @@ SimpleSpiceUpdate *qemu_spice_create_update(SimpleSpiceDisplay *ssd)
>      cmd->data = (intptr_t)drawable;
>
>      memset(&ssd->dirty, 0, sizeof(ssd->dirty));
> -    qemu_mutex_unlock_iothread();
>      return update;
>  }
>
> @@ -169,6 +166,7 @@ void qemu_spice_create_host_memslot(SimpleSpiceDisplay *ssd)
>      ssd->worker->add_memslot(ssd->worker, &memslot);
>  }
>
> +/* called from iothread (main) or a vcpu-thread */
>  void qemu_spice_create_host_primary(SimpleSpiceDisplay *ssd)
>  {
>      QXLDevSurfaceCreate surface;
> @@ -186,18 +184,14 @@ void qemu_spice_create_host_primary(SimpleSpiceDisplay *ssd)
>      surface.mem = (intptr_t)ssd->buf;
>      surface.group_id = MEMSLOT_GROUP_HOST;
>
> -    qemu_mutex_unlock_iothread();
>      ssd->worker->create_primary_surface(ssd->worker, 0, &surface);
> -    qemu_mutex_lock_iothread();
>  }
>
>  void qemu_spice_destroy_host_primary(SimpleSpiceDisplay *ssd)
>  {
>      dprint(1, "%s:\n", __FUNCTION__);
>
> -    qemu_mutex_unlock_iothread();
>      ssd->worker->destroy_primary_surface(ssd->worker, 0);
> -    qemu_mutex_lock_iothread();
>  }
>
>  void qemu_spice_vm_change_state_handler(void *opaque, int running, int reason)
> @@ -207,9 +201,7 @@ void qemu_spice_vm_change_state_handler(void *opaque, int running, int reason)
>      if (running) {
>          ssd->worker->start(ssd->worker);
>      } else {
> -        qemu_mutex_unlock_iothread();
>          ssd->worker->stop(ssd->worker);
> -        qemu_mutex_lock_iothread();
>      }
>      ssd->running = running;
>  }
> @@ -233,6 +225,7 @@ void qemu_spice_display_update(SimpleSpiceDisplay *ssd,
>      qemu_spice_rect_union(&ssd->dirty, &update_area);
>  }
>
> +/* called only from iothread (main) */
>  void qemu_spice_display_resize(SimpleSpiceDisplay *ssd)
>  {
>      dprint(1, "%s:\n", __FUNCTION__);