From: Buddhi Madhav
Date: Fri, 28 Oct 2016 21:58:05 +0000
References: <1477640667-4775-1-git-send-email-ashish.mittal@veritas.com> <20161028190350.GC6304@localhost.localdomain>
In-Reply-To: <20161028190350.GC6304@localhost.localdomain>
Subject: Re: [Qemu-devel] [PATCH v3] block/vxhs: Add Veritas HyperScale VxHS block device support
To: Jeff Cody, Ashish Mittal
Cc: qemu-devel@nongnu.org, pbonzini@redhat.com, kwolf@redhat.com, armbru@redhat.com, berrange@redhat.com, famz@redhat.com, Ashish Mittal, stefanha@gmail.com, Rakesh Ranjan, Ketan Nilangekar, Abhijit Dey, "Venkatesha M.G."

On 10/28/16, 12:03 PM, "Jeff Cody" wrote:

>On Fri, Oct 28, 2016 at 12:44:27AM -0700, Ashish Mittal wrote:
>> This patch adds support for a new block device type called "vxhs".
>> Source code for the qnio library that this code loads can be downloaded
>> from:
>> https://github.com/MittalAshish/libqnio.git
>>
>> Sample command line using the JSON syntax:
>> ./qemu-system-x86_64 -name instance-00000008 -S -vnc 0.0.0.0:0 -k en-us
>> -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>> -msg timestamp=on
>> 'json:{"driver":"vxhs","vdisk_id":"{c3e9095a-a5ee-4dce-afeb-2a59fb387410}",
>> "server":{"host":"172.172.17.4","port":"9999"}}'
>>
>> Sample command line using the URI syntax:
>> qemu-img convert -f raw -O raw -n
>> /var/lib/nova/instances/_base/0c5eacd5ebea5ed914b6a3e7b18f1ce734c386ad
>> vxhs://192.168.0.1:9999/%7Bc6718f6b-0401-441d-a8c3-1f0064d75ee0%7D
>>
>> Signed-off-by: Ashish Mittal
>> ---
>> v3 changelog:
>> (1) Added QAPI schema for the VxHS driver.
>>
>> v2 changelog:
>> (1) Changes done in response to v1 comments.
>>
>> TODO:
>> (1) Add qemu-iotest
>> (2) We need to be able to free all resources once we close the last
>> vxhs drive.
>>
>>  block/Makefile.objs  |   2 +
>>  block/trace-events   |  21 ++
>>  block/vxhs.c         | 669 +++++++++++++++++++++++++++++++++++++++++++++++++++
>>  configure            |  41 ++++
>>  qapi/block-core.json |  20 +-
>>  5 files changed, 751 insertions(+), 2 deletions(-)
>>  create mode 100644 block/vxhs.c
>>
>> diff --git a/block/Makefile.objs b/block/Makefile.objs
>> index 67a036a..58313a2 100644
>> --- a/block/Makefile.objs
>> +++ b/block/Makefile.objs
>> @@ -18,6 +18,7 @@ block-obj-$(CONFIG_LIBNFS) += nfs.o
>>  block-obj-$(CONFIG_CURL) += curl.o
>>  block-obj-$(CONFIG_RBD) += rbd.o
>>  block-obj-$(CONFIG_GLUSTERFS) += gluster.o
>> +block-obj-$(CONFIG_VXHS) += vxhs.o
>>  block-obj-$(CONFIG_ARCHIPELAGO) += archipelago.o
>>  block-obj-$(CONFIG_LIBSSH2) += ssh.o
>>  block-obj-y += accounting.o dirty-bitmap.o
>> @@ -38,6 +39,7 @@ rbd.o-cflags := $(RBD_CFLAGS)
>>  rbd.o-libs := $(RBD_LIBS)
>>  gluster.o-cflags := $(GLUSTERFS_CFLAGS)
>>  gluster.o-libs := $(GLUSTERFS_LIBS)
>> +vxhs.o-libs := $(VXHS_LIBS)
>>  ssh.o-cflags := $(LIBSSH2_CFLAGS)
>>  ssh.o-libs := $(LIBSSH2_LIBS)
>>  archipelago.o-libs := $(ARCHIPELAGO_LIBS)
>> diff --git a/block/trace-events b/block/trace-events
>> index 05fa13c..94249ee 100644
>> --- a/block/trace-events
>> +++ b/block/trace-events
>> @@ -114,3 +114,24 @@ qed_aio_write_data(void *s, void *acb, int ret, uint64_t offset, size_t len) "s
>>  qed_aio_write_prefill(void *s, void *acb, uint64_t start, size_t len, uint64_t offset) "s %p acb %p start %"PRIu64" len %zu offset %"PRIu64
>>  qed_aio_write_postfill(void *s, void *acb, uint64_t start, size_t len, uint64_t offset) "s %p acb %p start %"PRIu64" len %zu offset %"PRIu64
>>  qed_aio_write_main(void *s, void *acb, int ret, uint64_t offset, size_t len) "s %p acb %p ret %d offset %"PRIu64" len %zu"
>> +
>> +# block/vxhs.c
>> +vxhs_iio_callback(int error, int reason) "ctx is NULL: error %d, reason %d"
>> +vxhs_setup_qnio(void *s) "Context to HyperScale IO manager = %p"
>> +vxhs_iio_callback_chnfail(int err, int error) "QNIO channel failed, no i/o %d, %d"
>> +vxhs_iio_callback_unknwn(int opcode, int err) "unexpected opcode %d, errno %d"
>> +vxhs_open_fail(int ret) "Could not open the device. Error = %d"
>> +vxhs_open_epipe(int ret) "Could not create a pipe for device. Bailing out. Error=%d"
>> +vxhs_aio_rw_invalid(int req) "Invalid I/O request iodir %d"
>> +vxhs_aio_rw_ioerr(char *guid, int iodir, uint64_t size, uint64_t off, void *acb, int ret, int err) "IO ERROR (vDisk %s) FOR : Read/Write = %d size = %lu offset = %lu ACB = %p. Error = %d, errno = %d"
>> +vxhs_get_vdisk_stat_err(char *guid, int ret, int err) "vDisk (%s) stat ioctl failed, ret = %d, errno = %d"
>> +vxhs_get_vdisk_stat(char *vdisk_guid, uint64_t vdisk_size) "vDisk %s stat ioctl returned size %lu"
>> +vxhs_qnio_iio_open(const char *ip) "Failed to connect to storage agent on host-ip %s"
>> +vxhs_qnio_iio_devopen(const char *fname) "Failed to open vdisk device: %s"
>> +vxhs_complete_aio(void *acb, uint64_t ret) "aio failed acb %p ret %ld"
>> +vxhs_parse_uri_filename(const char *filename) "URI passed via bdrv_parse_filename %s"
>> +vxhs_qemu_init_vdisk(const char *vdisk_id) "vdisk_id from json %s"
>> +vxhs_parse_uri_hostinfo(int num, char *host, int port) "Host %d: IP %s, Port %d"
>> +vxhs_qemu_init(char *of_vsa_addr, int port) "Adding host %s:%d to BDRVVXHSState"
>> +vxhs_qemu_init_filename(const char *filename) "Filename passed as %s"
>> +vxhs_close(char *vdisk_guid) "Closing vdisk %s"
>> diff --git a/block/vxhs.c b/block/vxhs.c
>> new file mode 100644
>> index 0000000..08ad681
>> --- /dev/null
>> +++ b/block/vxhs.c
>> @@ -0,0 +1,669 @@
>> +/*
>> + * QEMU Block driver for Veritas HyperScale (VxHS)
>> + *
>> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
>> + * See the COPYING file in the top-level directory.
>> + *
>> + */
>> +
>> +#include "qemu/osdep.h"
>> +#include "block/block_int.h"
>> +#include
>> +#include "qapi/qmp/qerror.h"
>> +#include "qapi/qmp/qdict.h"
>> +#include "qapi/qmp/qstring.h"
>> +#include "trace.h"
>> +#include "qemu/uri.h"
>> +#include "qapi/error.h"
>> +#include "qemu/error-report.h"
>> +
>> +#define VDISK_FD_READ 0
>> +#define VDISK_FD_WRITE 1
>> +
>> +#define VXHS_OPT_FILENAME "filename"
>> +#define VXHS_OPT_VDISK_ID "vdisk_id"
>> +#define VXHS_OPT_SERVER "server"
>> +#define VXHS_OPT_HOST "host"
>> +#define VXHS_OPT_PORT "port"
>> +
>> +/* qnio client ioapi_ctx */
>> +static void *global_qnio_ctx;
>
>This is still never freed anywhere, is it? Once all drives are done with
>the global context, we need to free the resources allocated in it.
>
>You'll need something like a ref counter to track usage of the global
>instance, and then free the resources when the last drive is closed.
>
>Coincidentally, there was very recently a patch on list to do a similar
>thing for gluster. If you look at "block/gluster: memory usage: use one
>glfs instance per volume", you can see the basic idea I am talking about
>with regards to wrapping the global ctx in a struct, and tracking refs and
>unrefs to it.
>

The next patch (v4) will contain the fix for this. It adds reference
counting on device open/close; on the last device close, the library
resources are freed via a new library API, iio_fini().