From: Amit Shah
Date: Mon, 31 Aug 2009 19:21:47 +0530
To: Anthony Liguori
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org,
 virtualization@lists.linux-foundation.org
Subject: [Qemu-devel] Re: Extending virtio_console to support multiple ports
Message-ID: <20090831135147.GA16371@amit-x200.redhat.com>
In-Reply-To: <4A9BCD61.2040903@codemonkey.ws>
References: <1251181044-3696-1-git-send-email-amit.shah@redhat.com>
 <20090826112718.GA11117@amit-x200.redhat.com>
 <4A980D18.30106@codemonkey.ws>
 <20090830101057.GB32563@amit-x200.redhat.com>
 <4A9A7525.6010707@codemonkey.ws>
 <20090830131738.GC3401@amit-x200.redhat.com>
 <4A9BCD61.2040903@codemonkey.ws>

On (Mon) Aug 31 2009 [08:17:21], Anthony Liguori wrote:
>>>> - A lock has to be introduced to fetch one unused buffer from the list
>>>>   and pass it on to the host. And this lock has to be a spinlock, just
>>>>   because writes can be called from irq context.
>>>
>>> I don't see a problem here.
>>
>> You mean you don't see a problem in using a spinlock vs not using one?
>
> Right.  This isn't a fast path.
>
>> Userspace will typically send the entire buffer to be transmitted in
>> one system call. If it's large, the system call will have to be broken
>> into several. This results in multiple guest system calls, each one to
>> be handled with a spinlock held.
>>
>> Compare this with the entire write handled in one system call in the
>> current method.
>
> Does it matter?  This isn't a fast path.

The question isn't just about how much work happens inside the spinlock;
it's also about introducing spinlocks where they aren't needed. I don't
see why such changes have to creep into the kernel.

Can you please explain your rationale for being so rigid about merging
the two drivers?

		Amit
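
For concreteness, a rough sketch of the locking pattern being discussed,
with made-up names (struct port, free_bufs, get_free_buf are illustrative
only, not taken from the actual patches): because the write path can be
entered from irq context, every fetch of an unused buffer from the shared
list has to use the irqsave variant of the spinlock.

    #include <linux/spinlock.h>
    #include <linux/list.h>
    #include <linux/types.h>

    /* Hypothetical per-port state; not the real virtio_console structures. */
    struct port {
            spinlock_t lock;                /* protects free_bufs */
            struct list_head free_bufs;     /* unused buffers for the host */
    };

    struct port_buffer {
            struct list_head list;
            void *buf;
            size_t len;
    };

    /* Grab one unused buffer off the free list; NULL if none is available. */
    static struct port_buffer *get_free_buf(struct port *port)
    {
            struct port_buffer *pbuf = NULL;
            unsigned long flags;

            /* irqsave variant: callers may be running in irq context */
            spin_lock_irqsave(&port->lock, flags);
            if (!list_empty(&port->free_bufs)) {
                    pbuf = list_first_entry(&port->free_bufs,
                                            struct port_buffer, list);
                    list_del(&pbuf->list);
            }
            spin_unlock_irqrestore(&port->lock, flags);

            return pbuf;
    }

When a large userspace write gets split into several guest system calls,
this lock/unlock sequence is repeated once per chunk, which is the cost
being weighed against handling the entire write in one system call in the
current driver.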