From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4E0C7755.6070708@redhat.com>
Date: Thu, 30 Jun 2011 15:17:09 +0200
From: Gerd Hoffmann
In-Reply-To: <20110630125054.GL26431@bow.redhat.com>
Subject: Re: [Qemu-devel] [PATCH 2/2] qxl: add QXL_IO_UPDATE_MEM for guest S3&S4 support
To: Yonit Halperin, qemu-devel@nongnu.org

  Hi,

>> Yes. Backward compatibility.
>
> So at least deprecate it, to be dropped later? I don't like that the
> code just gets bigger and bigger.

Deprecating them is fine.
> I was thinking of a different solution - one in which the same "READY"
> messages are written, but read from a different place. That would not
> actually have required any changes to the spice-server API. But if you
> say you prefer to add a completion callback, that's cool.
>
> Just to answer, I was thinking of this flow for the async commands:
>
>   vcpu thread -> pipe_to_red_worker : update_area_async
>   red_worker thread -> pipe_to_io_thread : update_area_async complete
>
> but that wouldn't have worked, would it? Unless we made sure to prevent
> attempts to issue sync or async commands while an async command is
> already in progress.

The pipe is a libspice-server internal thing and it should stay that
way. libspice-server should be able to use something completely
different for dispatcher <-> worker communication (say, a linked job
list with mutex locking and condition variable wakeup) and everything
should continue to work.

cheers,
  Gerd