Date: Mon, 7 Nov 2016 10:22:03 +0000
From: Stefan Hajnoczi
To: Ketan Nilangekar
Cc: Ashish Mittal, "qemu-devel@nongnu.org", "pbonzini@redhat.com",
 "kwolf@redhat.com", "armbru@redhat.com", "berrange@redhat.com",
 "jcody@redhat.com", "famz@redhat.com", Ashish Mittal, Abhijit Dey
Message-ID: <20161107102202.GA5036@stefanha-x1.localdomain>
In-Reply-To: <87C0A0E2-9D12-4C54-81F9-07C5200A5306@veritas.com>
Subject: Re: [Qemu-devel] [PATCH v7 RFC] block/vxhs: Initial commit to add
 Veritas HyperScale VxHS block device support

On Fri, Nov 04, 2016 at 06:30:47PM +0000, Ketan Nilangekar wrote:
> 
> > On Nov 4, 2016, at 2:52 AM, Stefan Hajnoczi wrote:
> > 
> >> On Thu, Oct 20, 2016 at 01:31:15AM +0000, Ketan Nilangekar wrote:
> >> 2. The idea of having a multi-threaded, epoll-based network client was
> >> to drive more throughput by using a multiplexed epoll implementation
> >> and (fairly) distributing IOs from several vdisks (a typical VM is
> >> assumed to have at least 2) across 8 connections.
> >> Each connection is serviced by a single epoll and does not share its
> >> context with other connections/epolls. All memory pools/queues are in
> >> the context of a connection/epoll.
> >> The qemu thread enqueues IO requests in one of the 8 epoll queues using
> >> round-robin. Responses are also handled in the context of an epoll loop
> >> and do not share context with other epolls. Any synchronization code
> >> that you see today in the driver callback is code that handles the
> >> split IOs, which we plan to address by a) implementing readv in libqnio
> >> and b) removing the 4MB limit on write IO size.
> >> The number of client epoll threads (8) is a #define in qnio and can
> >> easily be changed. However, our tests indicate that we are able to
> >> drive a good number of IOs using 8 threads/epolls.
> >> I am sure there are ways to simplify the library implementation, but
> >> for now the performance of the epoll threads is more than satisfactory.
> > 
> > By the way, when you benchmark with 8 epoll threads, are there any other
> > guests with vxhs running on the machine?
> 
> Yes.
> In fact, the total throughput with around 4-5 VMs scales well to saturate
> around 90% of the available storage throughput of a typical PCIe SSD
> device.
> 
> > In a real-life situation where multiple VMs are running on a single host
> > it may turn out that giving each VM 8 epoll threads doesn't help at all
> > because the host CPUs are busy with other tasks.
> 
> The exact number of epolls required to achieve optimal throughput may be
> something that can be adjusted dynamically by the qnio library in
> subsequent revisions.
> 
> But as I mentioned, today we can change this by simply rebuilding qnio
> with a different value for the #define.

In QEMU there is currently work to add multiqueue support to the block
layer. This enables true multiqueue from the guest down to the storage
backend. virtio-blk already supports multiple queues, but they are all
processed from the same thread in QEMU today. Once multiple threads are
able to process the queues, it would make sense to continue that down into
the vxhs block driver.

So I don't think implementing multiple epoll threads in libqnio is useful
in the long term. Rather, a straightforward approach of integrating with
the libqnio user's event loop (as described in my previous emails) would
simplify the code and allow you to take advantage of full multiqueue
support in the future.

Stefan
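For illustration, here is a minimal C sketch of the event-loop integration
Stefan describes: instead of running its own epoll threads, the library
exposes a file descriptor plus a completion-processing entry point, and the
caller's event loop (QEMU's AioContext via aio_set_fd_handler, or the plain
epoll loop below) decides when completions run. The names qnio_channel,
qnio_channel_fd, and qnio_channel_process are hypothetical and do not
reflect the real libqnio API.

/*
 * Sketch of the "integrate with the caller's event loop" idea: the library
 * exposes an fd and a "process completions" entry point; the caller
 * registers the fd with its own event loop and calls the entry point when
 * the fd is readable.  All identifiers are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>

struct qnio_channel {
    int efd;                      /* readable when completions are pending */
    void (*on_complete)(void *);  /* per-request completion callback */
};

/* fd the caller registers with *its* event loop (in QEMU, aio_set_fd_handler). */
static int qnio_channel_fd(struct qnio_channel *ch) { return ch->efd; }

/* Drain pending completions; runs in the caller's event-loop thread, so no
 * locking is needed between library threads and the block driver callback. */
static void qnio_channel_process(struct qnio_channel *ch)
{
    uint64_t n;
    if (read(ch->efd, &n, sizeof(n)) == (ssize_t)sizeof(n)) {
        while (n--) {
            ch->on_complete(NULL);   /* would carry the per-request cookie */
        }
    }
}

static void req_done(void *opaque) { (void)opaque; puts("request completed"); }

int main(void)
{
    struct qnio_channel ch = {
        .efd = eventfd(0, EFD_NONBLOCK),
        .on_complete = req_done,
    };

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.ptr = &ch };
    epoll_ctl(ep, EPOLL_CTL_ADD, qnio_channel_fd(&ch), &ev);

    /* Pretend the network side just finished two requests. */
    uint64_t two = 2;
    write(ch.efd, &two, sizeof(two));

    /* Caller-owned loop (one iteration shown); QEMU's AioContext would
     * play this role for a block driver. */
    struct epoll_event out;
    if (epoll_wait(ep, &out, 1, 1000) == 1) {
        qnio_channel_process(out.data.ptr);
    }

    close(ep);
    close(ch.efd);
    return 0;
}

Because completions are dispatched by whichever thread owns the event loop,
a future multiqueue QEMU could give each iothread its own channel without
extra synchronization inside the library, which is the long-term benefit
Stefan points to.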