Date: Wed, 4 May 2016 17:40:33 +0200
From: Greg Kurz
Message-ID: <20160504174033.39faaada@bahia.huguette.org>
In-Reply-To:
References: <20160427083840.GA27160@igalia.com> <20160427191215.037c4c5c@bahia.huguette.org> <20160502145731.66bdcf27@bahia.huguette.org>
Subject: Re: [Qemu-devel] [Qemu-discuss] iolimits for virtio-9p
To: Pradeep Kiruvale
Cc: Alberto Garcia, qemu-devel@nongnu.org, "qemu-discuss@nongnu.org"

On Mon, 2 May 2016 17:49:26 +0200
Pradeep Kiruvale wrote:

> On 2 May 2016 at 14:57, Greg Kurz wrote:
>
> > On Thu, 28 Apr 2016 11:45:41 +0200
> > Pradeep Kiruvale wrote:
> >
> > > On 27 April 2016 at 19:12, Greg Kurz wrote:
> > >
> > > > On Wed, 27 Apr 2016 16:39:58 +0200
> > > > Pradeep Kiruvale wrote:
> > > >
> > > > > On 27 April 2016 at 10:38, Alberto Garcia wrote:
> > > > >
> > > > > > On Wed, Apr 27, 2016 at 09:29:02AM +0200, Pradeep Kiruvale wrote:
> > > > > >
> > > > > > > Thanks for the reply. I am still in the early phase; I will let
> > > > > > > you know if any changes are needed for the APIs.
> > > > > > >
> > > > > > > We might also have to implement throttle-group.c for 9p devices,
> > > > > > > if we want to apply throttling to a group of devices.
> > > > > >
> > > > > > Fair enough, but again please note that:
> > > > > >
> > > > > > - throttle-group.c is not meant to be generic; it is tied to
> > > > > >   BlockDriverState / BlockBackend.
> > > > > > - it is currently being rewritten:
> > > > > >   https://lists.gnu.org/archive/html/qemu-block/2016-04/msg00645.html
> > > > > >
> > > > > > If you can explain your use case in a bit more detail we can try
> > > > > > to see what can be done about it.
> > > > > >
> > > > >
> > > > > We want to use virtio-9p for block I/O instead of virtio-blk-pci.
> > > > > But in case of
> > > >
> > > > 9p is mostly aimed at sharing files... why would you want to use it
> > > > for block I/O instead of a true block device? And how would you do
> > > > that?
> > > >
> > >
> > > Yes, we want to share the files themselves, so we are using virtio-9p.
> >
> > You want to pass a disk image to the guest as a plain file on a 9p
> > mount? And then, what do you do in the guest? Attach it to a loop
> > device?
> >
>
> Yes, we would like to mount it as a 9p share, create a file inside it,
> and read/write that file. This is the experiment we are doing; there is
> no concrete production use case yet. My task is a feasibility test to
> see whether it works or not.
>
> >
> > > We want to have QoS on access to these files for every VM.
> > >
> >
> > You won't be able to have QoS on selected files, but it may be possible
> > to introduce limits at the fsdev level: control all write accesses and
> > all read accesses to all files for a 9p device.
> >
>
> That is right. I do not want QoS for individual files but for the whole
> fsdev device.
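[For context, the loop-device setup discussed above could look roughly like
this on the guest side. This is only an illustrative sketch: the mount tag
`hostshare` and all paths are made-up assumptions that depend on the host's
-fsdev/-device configuration, and the commands require root.]

```shell
# Guest side: mount the 9p share exported by QEMU over virtio.
# 'hostshare' is a hypothetical mount_tag; match it to your -device
# virtio-9p-pci,mount_tag=... option on the host.
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/9p

# Create a file-backed image inside the share and expose it as a
# block device via the loop driver.
dd if=/dev/zero of=/mnt/9p/disk.img bs=1M count=256
losetup /dev/loop0 /mnt/9p/disk.img
mkfs.ext4 /dev/loop0
mount /dev/loop0 /mnt/disk
```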
> >
> > > > >
> > > > > virtio-9p we can just use fsdev devices, so we want to apply
> > > > > throttling (QoS) on these devices, and as of now I/O throttling is
> > > > > only possible with the -drive option.
> > > > >
> > > >
> > > > Indeed.
> > > >
> > > > > As a workaround we are doing the throttling using cgroups. It has
> > > > > its own costs.
> > > >
> > > > Can you elaborate?
> > > >
> > >
> > > We saw that we need to create and configure cgroups, and we also
> > > observed a lot of iowaits compared to implementing the throttling
> > > inside QEMU. We observed this by using virtio-blk-pci devices
> > > (cgroups vs QEMU throttling).
> > >
> >
> > Just to be sure I get it right.
> >
> > You tried both:
> > 1) run QEMU with -device virtio-blk-pci and -drive throttling.*
> > 2) run QEMU with -device virtio-blk-pci in its own cgroup
> >
> > And 1) has better performance and is easier to use than 2)?
> >
> > And what do you expect with 9p compared to 1)?
> >
>
> That was just to understand the CPU cost of I/O throttling inside QEMU
> vs using cgroups.
>
> The benchmarking we did was to reproduce the numbers and understand the
> costs mentioned in
>
> http://www.linux-kvm.org/images/7/72/2011-forum-keep-a-limit-on-it-io-throttling-in-qemu.pdf
>
> Thanks,
> Pradeep
>

Ok. So you did compare current QEMU block I/O throttling with cgroups?
And you observed numbers similar to the link above?

And now you would like to run the same test on a file in a 9p mount with
experimental 9p QoS?

It may then be possible to reuse the throttle.h API and hack v9fs_write()
and v9fs_read() in 9p.c.

Cheers.

--
Greg

> > >
> > > Thanks,
> > > Pradeep
> > >
> >