Date: Tue, 28 May 2013 18:44:55 +0300
From: "Michael S. Tsirkin"
To: Julian Stecklina
Cc: "snabb-devel@googlegroups.com", qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] snabbswitch integration with QEMU for userspace ethernet I/O
In-Reply-To: <51A4CEDB.7000203@os.inf.tu-dresden.de>

On Tue, May 28, 2013 at 05:35:55PM +0200, Julian Stecklina wrote:
> On 05/28/2013 03:56 PM, Michael S. Tsirkin wrote:
> > and in fact that was how vhost worked originally.
> > There were some issues to be fixed before it worked
> > reliably, but we do plan to go back to that, I think.
> 
> Do you know why they abandoned this execution model?
> 
> Julian

Yes, I do. There were two main issues:

1. Originally vhost queued all of its work on a shared workqueue. An access to
   userspace memory can block, e.g. when it faults on a page that has to be
   brought in from swap. When that happened, the whole workqueue was blocked
   and no guest could make progress.

2. The workqueue stuck to one CPU too aggressively. There could be 10 free
   CPUs on the box, yet the work kept running on the CPU that queued it, even
   when that CPU was very busy.

Both problems have since been sorted out in the core workqueue code, so we
should go back and try using the regular workqueue again.

-- 
MST
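
[Editor's note: to make the two execution models above concrete, here is a
minimal kernel-style C sketch, not taken from this thread or from the actual
vhost sources. It contrasts queuing a virtqueue handler on the shared system
workqueue with running it from a dedicated per-device kernel thread, which is
the model vhost moved to. handle_tx_kick(), vhost_worker_fn() and the
"vhost-worker" thread name are simplified placeholders, not necessarily the
real vhost symbols.]

#include <linux/workqueue.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/err.h>

static struct work_struct tx_work;

/* Virtqueue handler: copying to/from guest (userspace) memory can
 * sleep, e.g. while a page is brought in from swap. */
static void handle_tx_kick(struct work_struct *work)
{
        /* process the virtqueue here */
}

/* Model 1: shared workqueue (the original vhost approach).
 * If handle_tx_kick() blocks, other work queued behind it could
 * stall (issue 1), and the work tended to keep running on the CPU
 * that queued it (issue 2). */
static bool use_shared_workqueue(void)
{
        INIT_WORK(&tx_work, handle_tx_kick);
        return schedule_work(&tx_work);
}

/* Model 2: one kernel thread per vhost device. A blocked thread
 * only delays that device's own work, and the scheduler is free to
 * place the thread on any idle CPU. */
static struct task_struct *worker;

static int vhost_worker_fn(void *data)
{
        while (!kthread_should_stop()) {
                set_current_state(TASK_INTERRUPTIBLE);
                /* run this device's pending work here, then sleep
                 * until the guest kicks the virtqueue again */
                schedule();
        }
        return 0;
}

static int use_dedicated_thread(void)
{
        worker = kthread_run(vhost_worker_fn, NULL, "vhost-worker");
        return PTR_ERR_OR_ZERO(worker);
}

[End of editor's note: the dedicated-thread variant addresses both points in
the message above, at the cost of one kernel thread per device; the message
suggests the regular workqueue may be worth revisiting now that the core
workqueue code no longer has those limitations.]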