* [Qemu-devel] [PATCH v20 0/5] block/gluster: add support for multiple gluster servers
@ 2016-07-19 16:57 Prasanna Kumar Kalever
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 1/5] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
` (5 more replies)
0 siblings, 6 replies; 20+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-19 16:57 UTC (permalink / raw)
To: qemu-devel
Cc: armbru, kwolf, eblake, pkrempa, jcody, rtalur, vbellur,
Prasanna Kumar Kalever
This version of patches are rebased repo at
git://repo.or.cz/qemu/armbru.git qapi-not-next
Prasanna Kumar Kalever (5):
block/gluster: rename [server, volname, image] -> [host, volume, path]
block/gluster: code cleanup
block/gluster: deprecate rdma support
block/gluster: using new qapi schema
block/gluster: add support for multiple gluster servers
v1:
multiple host addresses but common port number and transport type
pattern: URI syntax with query (?) delimitor
syntax:
file=gluster[+transport-type]://host1:24007/testvol/a.img\
?server=host2&server=host3
v2:
multiple host addresses each have their own port number, but all use
common transport type
pattern: URI syntax with query (?) delimiter
syntax:
file=gluster[+transport-type]://[host[:port]]/testvol/a.img\
[?server=host1[:port]\
&server=host2[:port]]
v3:
multiple host addresses each have their own port number and transport type
pattern: changed to json
syntax:
'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
"path":"/path/a.qcow2","server":
[{"host":"1.2.3.4","port":"24007","transport":"tcp"},
{"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'
v4, v5:
address comments from "Eric Blake" <eblake@redhat.com>
renamed:
'backup-volfile-servers' -> 'volfile-servers'
v6:
address comments from Peter Krempa <pkrempa@redhat.com>
renamed:
'volname' -> 'volume'
'image-path' -> 'path'
'server' -> 'host'
v7:
fix for v6 (initialize num_servers to 1 and other typos)
v8:
split patch set v7 into series of 3 as per Peter Krempa <pkrempa@redhat.com>
review comments
v9:
reorder the series of patches addressing "Eric Blake" <eblake@redhat.com>
review comments
v10:
fix mem-leak as per Peter Krempa <pkrempa@redhat.com> review comments
v11:
using qapi-types* defined structures as per "Eric Blake" <eblake@redhat.com>
review comments.
v12:
fix crash caused in qapi_free_BlockdevOptionsGluster
v13:
address comments from "Jeff Cody" <jcody@redhat.com>
v14:
address comments from "Eric Blake" <eblake@redhat.com>
split patch 3/3 into two
rename input option and variable from 'servers' to 'server'
v15:
patch 1/4 changed the commit message as per Eric's comment
patch 2/4 are unchanged
patch 3/4 addressed Jeff's comments
patch 4/4 concentrates on unix transport related help info,
rename 'parse_transport_option()' to 'qapi_enum_parse()',
address memory leaks and other comments given by Jeff and Eric
v16:
In patch 4/4 fixed segfault on glfs_init() error case, as per Jeff's comments
other patches in this series remain unchanged
v17:
rebase of v16 on latest master
v18:
rebase of v17 on latest master
rebase has demanded type conversion of 'qemu_gluster_init()'[s] first argument
from 'BlockdevOptionsGluster**' to 'BlockdevOptionsGluster*' and all its callees
both in 3/4 and 4/4 patches
v19:
patches 1/5, 2/5 remains unchanged
patch 3/5 is something new, in which the rdma deadcode is removed
patch 4/5 (i.e. 3/4 in v18) now uses union discriminator, I have made a choice
to use gluster with custom schema since @UnixSocketAddress uses 'path' as key,
which may be confusing with gluster, and in @InetSocketAddress port was str
again I have made a choice to keep it uint16 which really make sense.
Hmmm.. As Markus suggested in v18 qemu_gluster_parseuri() is *parse_uri() same
with *parse_json() (in 5/5)
patch 5/5 (i.e 4/4 in v18) adds a list of servers and json parser functionality
as usual
Thanks to Markus and Eric for help in understanding the new schema changes.
v20:
address comments from Markus and Eric on v19
patch 4/5 and 5/5 Use InetSocketAddress instead of GlusterInetSocketAddress
Port is not optional anymore
block/gluster.c | 637 +++++++++++++++++++++++++++++++++++++++------------
qapi/block-core.json | 68 +++++-
2 files changed, 553 insertions(+), 152 deletions(-)
--
2.7.4
^ permalink raw reply [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH v20 1/5] block/gluster: rename [server, volname, image] -> [host, volume, path]
2016-07-19 16:57 [Qemu-devel] [PATCH v20 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2016-07-19 16:57 ` Prasanna Kumar Kalever
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 2/5] block/gluster: code cleanup Prasanna Kumar Kalever
` (4 subsequent siblings)
5 siblings, 0 replies; 20+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-19 16:57 UTC (permalink / raw)
To: qemu-devel
Cc: armbru, kwolf, eblake, pkrempa, jcody, rtalur, vbellur,
Prasanna Kumar Kalever
A future patch will add support for multiple gluster servers. Existing
terminology is a bit unusual in relation to what names are used by
other networked devices, and doesn't map very well to the terminology
we expect to use for multiple servers. Therefore, rename the following
options:
'server' -> 'host'
'image' -> 'path'
'volname' -> 'volume'
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
---
block/gluster.c | 54 +++++++++++++++++++++++++++---------------------------
1 file changed, 27 insertions(+), 27 deletions(-)
diff --git a/block/gluster.c b/block/gluster.c
index 16f7778..f1ac9a2 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -29,10 +29,10 @@ typedef struct BDRVGlusterState {
} BDRVGlusterState;
typedef struct GlusterConf {
- char *server;
+ char *host;
int port;
- char *volname;
- char *image;
+ char *volume;
+ char *path;
char *transport;
int debug_level;
} GlusterConf;
@@ -40,9 +40,9 @@ typedef struct GlusterConf {
static void qemu_gluster_gconf_free(GlusterConf *gconf)
{
if (gconf) {
- g_free(gconf->server);
- g_free(gconf->volname);
- g_free(gconf->image);
+ g_free(gconf->host);
+ g_free(gconf->volume);
+ g_free(gconf->path);
g_free(gconf->transport);
g_free(gconf);
}
@@ -62,19 +62,19 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
if (*p == '\0') {
return -EINVAL;
}
- gconf->volname = g_strndup(q, p - q);
+ gconf->volume = g_strndup(q, p - q);
- /* image */
+ /* path */
p += strspn(p, "/");
if (*p == '\0') {
return -EINVAL;
}
- gconf->image = g_strdup(p);
+ gconf->path = g_strdup(p);
return 0;
}
/*
- * file=gluster[+transport]://[server[:port]]/volname/image[?socket=...]
+ * file=gluster[+transport]://[host[:port]]/volume/path[?socket=...]
*
* 'gluster' is the protocol.
*
@@ -83,10 +83,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
* tcp, unix and rdma. If a transport type isn't specified, then tcp
* type is assumed.
*
- * 'server' specifies the server where the volume file specification for
+ * 'host' specifies the host where the volume file specification for
* the given volume resides. This can be either hostname, ipv4 address
* or ipv6 address. ipv6 address needs to be within square brackets [ ].
- * If transport type is 'unix', then 'server' field should not be specified.
+ * If transport type is 'unix', then 'host' field should not be specified.
* The 'socket' field needs to be populated with the path to unix domain
* socket.
*
@@ -95,9 +95,9 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
* default port. If the transport type is unix, then 'port' should not be
* specified.
*
- * 'volname' is the name of the gluster volume which contains the VM image.
+ * 'volume' is the name of the gluster volume which contains the VM image.
*
- * 'image' is the path to the actual VM image that resides on gluster volume.
+ * 'path' is the path to the actual VM image that resides on gluster volume.
*
* Examples:
*
@@ -106,7 +106,7 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
* file=gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
* file=gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
* file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
- * file=gluster+tcp://server.domain.com:24007/testvol/dir/a.img
+ * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
* file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
* file=gluster+rdma://1.2.3.4:24007/testvol/a.img
*/
@@ -157,9 +157,9 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
ret = -EINVAL;
goto out;
}
- gconf->server = g_strdup(qp->p[0].value);
+ gconf->host = g_strdup(qp->p[0].value);
} else {
- gconf->server = g_strdup(uri->server ? uri->server : "localhost");
+ gconf->host = g_strdup(uri->server ? uri->server : "localhost");
gconf->port = uri->port;
}
@@ -180,18 +180,18 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
ret = qemu_gluster_parseuri(gconf, filename);
if (ret < 0) {
- error_setg(errp, "Usage: file=gluster[+transport]://[server[:port]]/"
- "volname/image[?socket=...]");
+ error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
+ "volume/path[?socket=...]");
errno = -ret;
goto out;
}
- glfs = glfs_new(gconf->volname);
+ glfs = glfs_new(gconf->volume);
if (!glfs) {
goto out;
}
- ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->server,
+ ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
gconf->port);
if (ret < 0) {
goto out;
@@ -205,9 +205,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
ret = glfs_init(glfs);
if (ret) {
error_setg_errno(errp, errno,
- "Gluster connection failed for server=%s port=%d "
- "volume=%s image=%s transport=%s", gconf->server,
- gconf->port, gconf->volname, gconf->image,
+ "Gluster connection failed for host=%s port=%d "
+ "volume=%s path=%s transport=%s", gconf->host,
+ gconf->port, gconf->volume, gconf->path,
gconf->transport);
/* glfs_init sometimes doesn't set errno although docs suggest that */
@@ -373,7 +373,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
qemu_gluster_parse_flags(bdrv_flags, &open_flags);
- s->fd = glfs_open(s->glfs, gconf->image, open_flags);
+ s->fd = glfs_open(s->glfs, gconf->path, open_flags);
if (!s->fd) {
ret = -errno;
}
@@ -439,7 +439,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
}
#endif
- reop_s->fd = glfs_open(reop_s->glfs, gconf->image, open_flags);
+ reop_s->fd = glfs_open(reop_s->glfs, gconf->path, open_flags);
if (reop_s->fd == NULL) {
/* reops->glfs will be cleaned up in _abort */
ret = -errno;
@@ -587,7 +587,7 @@ static int qemu_gluster_create(const char *filename,
goto out;
}
- fd = glfs_creat(glfs, gconf->image,
+ fd = glfs_creat(glfs, gconf->path,
O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
if (!fd) {
ret = -errno;
--
2.7.4
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH v20 2/5] block/gluster: code cleanup
2016-07-19 16:57 [Qemu-devel] [PATCH v20 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 1/5] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
@ 2016-07-19 16:57 ` Prasanna Kumar Kalever
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 3/5] block/gluster: deprecate rdma support Prasanna Kumar Kalever
` (3 subsequent siblings)
5 siblings, 0 replies; 20+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-19 16:57 UTC (permalink / raw)
To: qemu-devel
Cc: armbru, kwolf, eblake, pkrempa, jcody, rtalur, vbellur,
Prasanna Kumar Kalever
unified coding styles of multiline function arguments and other error functions
moved random declarations of structures and other list variables
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
---
block/gluster.c | 143 +++++++++++++++++++++++++++++---------------------------
1 file changed, 75 insertions(+), 68 deletions(-)
diff --git a/block/gluster.c b/block/gluster.c
index f1ac9a2..40ee852 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -13,6 +13,12 @@
#include "qapi/error.h"
#include "qemu/uri.h"
+#define GLUSTER_OPT_FILENAME "filename"
+#define GLUSTER_OPT_DEBUG "debug"
+#define GLUSTER_DEBUG_DEFAULT 4
+#define GLUSTER_DEBUG_MAX 9
+
+
typedef struct GlusterAIOCB {
int64_t size;
int ret;
@@ -28,6 +34,11 @@ typedef struct BDRVGlusterState {
int debug_level;
} BDRVGlusterState;
+typedef struct BDRVGlusterReopenState {
+ struct glfs *glfs;
+ struct glfs_fd *fd;
+} BDRVGlusterReopenState;
+
typedef struct GlusterConf {
char *host;
int port;
@@ -37,6 +48,49 @@ typedef struct GlusterConf {
int debug_level;
} GlusterConf;
+
+static QemuOptsList qemu_gluster_create_opts = {
+ .name = "qemu-gluster-create-opts",
+ .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
+ .desc = {
+ {
+ .name = BLOCK_OPT_SIZE,
+ .type = QEMU_OPT_SIZE,
+ .help = "Virtual disk size"
+ },
+ {
+ .name = BLOCK_OPT_PREALLOC,
+ .type = QEMU_OPT_STRING,
+ .help = "Preallocation mode (allowed values: off, full)"
+ },
+ {
+ .name = GLUSTER_OPT_DEBUG,
+ .type = QEMU_OPT_NUMBER,
+ .help = "Gluster log level, valid range is 0-9",
+ },
+ { /* end of list */ }
+ }
+};
+
+static QemuOptsList runtime_opts = {
+ .name = "gluster",
+ .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
+ .desc = {
+ {
+ .name = GLUSTER_OPT_FILENAME,
+ .type = QEMU_OPT_STRING,
+ .help = "URL to the gluster image",
+ },
+ {
+ .name = GLUSTER_OPT_DEBUG,
+ .type = QEMU_OPT_NUMBER,
+ .help = "Gluster log level, valid range is 0-9",
+ },
+ { /* end of list */ }
+ },
+};
+
+
static void qemu_gluster_gconf_free(GlusterConf *gconf)
{
if (gconf) {
@@ -181,7 +235,7 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
ret = qemu_gluster_parseuri(gconf, filename);
if (ret < 0) {
error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
- "volume/path[?socket=...]");
+ "volume/path[?socket=...]");
errno = -ret;
goto out;
}
@@ -255,30 +309,6 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
qemu_bh_schedule(acb->bh);
}
-#define GLUSTER_OPT_FILENAME "filename"
-#define GLUSTER_OPT_DEBUG "debug"
-#define GLUSTER_DEBUG_DEFAULT 4
-#define GLUSTER_DEBUG_MAX 9
-
-/* TODO Convert to fine grained options */
-static QemuOptsList runtime_opts = {
- .name = "gluster",
- .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
- .desc = {
- {
- .name = GLUSTER_OPT_FILENAME,
- .type = QEMU_OPT_STRING,
- .help = "URL to the gluster image",
- },
- {
- .name = GLUSTER_OPT_DEBUG,
- .type = QEMU_OPT_NUMBER,
- .help = "Gluster log level, valid range is 0-9",
- },
- { /* end of list */ }
- },
-};
-
static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
{
assert(open_flags != NULL);
@@ -395,12 +425,6 @@ out:
return ret;
}
-typedef struct BDRVGlusterReopenState {
- struct glfs *glfs;
- struct glfs_fd *fd;
-} BDRVGlusterReopenState;
-
-
static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
BlockReopenQueue *queue, Error **errp)
{
@@ -501,7 +525,9 @@ static void qemu_gluster_reopen_abort(BDRVReopenState *state)
#ifdef CONFIG_GLUSTERFS_ZEROFILL
static coroutine_fn int qemu_gluster_co_pwrite_zeroes(BlockDriverState *bs,
- int64_t offset, int size, BdrvRequestFlags flags)
+ int64_t offset,
+ int size,
+ BdrvRequestFlags flags)
{
int ret;
GlusterAIOCB acb;
@@ -527,7 +553,7 @@ static inline bool gluster_supports_zerofill(void)
}
static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
- int64_t size)
+ int64_t size)
{
return glfs_zerofill(fd, offset, size);
}
@@ -539,7 +565,7 @@ static inline bool gluster_supports_zerofill(void)
}
static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
- int64_t size)
+ int64_t size)
{
return 0;
}
@@ -576,19 +602,17 @@ static int qemu_gluster_create(const char *filename,
tmp = qemu_opt_get_del(opts, BLOCK_OPT_PREALLOC);
if (!tmp || !strcmp(tmp, "off")) {
prealloc = 0;
- } else if (!strcmp(tmp, "full") &&
- gluster_supports_zerofill()) {
+ } else if (!strcmp(tmp, "full") && gluster_supports_zerofill()) {
prealloc = 1;
} else {
error_setg(errp, "Invalid preallocation mode: '%s'"
- " or GlusterFS doesn't support zerofill API",
- tmp);
+ " or GlusterFS doesn't support zerofill API", tmp);
ret = -EINVAL;
goto out;
}
fd = glfs_creat(glfs, gconf->path,
- O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
+ O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, S_IRUSR | S_IWUSR);
if (!fd) {
ret = -errno;
} else {
@@ -614,7 +638,8 @@ out:
}
static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
- int64_t sector_num, int nb_sectors, QEMUIOVector *qiov, int write)
+ int64_t sector_num, int nb_sectors,
+ QEMUIOVector *qiov, int write)
{
int ret;
GlusterAIOCB acb;
@@ -629,10 +654,10 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
if (write) {
ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
- gluster_finish_aiocb, &acb);
+ gluster_finish_aiocb, &acb);
} else {
ret = glfs_preadv_async(s->fd, qiov->iov, qiov->niov, offset, 0,
- gluster_finish_aiocb, &acb);
+ gluster_finish_aiocb, &acb);
}
if (ret < 0) {
@@ -657,13 +682,17 @@ static int qemu_gluster_truncate(BlockDriverState *bs, int64_t offset)
}
static coroutine_fn int qemu_gluster_co_readv(BlockDriverState *bs,
- int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
+ int64_t sector_num,
+ int nb_sectors,
+ QEMUIOVector *qiov)
{
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 0);
}
static coroutine_fn int qemu_gluster_co_writev(BlockDriverState *bs,
- int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
+ int64_t sector_num,
+ int nb_sectors,
+ QEMUIOVector *qiov)
{
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 1);
}
@@ -725,7 +754,8 @@ error:
#ifdef CONFIG_GLUSTERFS_DISCARD
static coroutine_fn int qemu_gluster_co_discard(BlockDriverState *bs,
- int64_t sector_num, int nb_sectors)
+ int64_t sector_num,
+ int nb_sectors)
{
int ret;
GlusterAIOCB acb;
@@ -934,29 +964,6 @@ static int64_t coroutine_fn qemu_gluster_co_get_block_status(
}
-static QemuOptsList qemu_gluster_create_opts = {
- .name = "qemu-gluster-create-opts",
- .head = QTAILQ_HEAD_INITIALIZER(qemu_gluster_create_opts.head),
- .desc = {
- {
- .name = BLOCK_OPT_SIZE,
- .type = QEMU_OPT_SIZE,
- .help = "Virtual disk size"
- },
- {
- .name = BLOCK_OPT_PREALLOC,
- .type = QEMU_OPT_STRING,
- .help = "Preallocation mode (allowed values: off, full)"
- },
- {
- .name = GLUSTER_OPT_DEBUG,
- .type = QEMU_OPT_NUMBER,
- .help = "Gluster log level, valid range is 0-9",
- },
- { /* end of list */ }
- }
-};
-
static BlockDriver bdrv_gluster = {
.format_name = "gluster",
.protocol_name = "gluster",
--
2.7.4
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH v20 3/5] block/gluster: deprecate rdma support
2016-07-19 16:57 [Qemu-devel] [PATCH v20 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 1/5] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 2/5] block/gluster: code cleanup Prasanna Kumar Kalever
@ 2016-07-19 16:57 ` Prasanna Kumar Kalever
2016-07-19 17:17 ` Markus Armbruster
` (2 more replies)
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema Prasanna Kumar Kalever
` (2 subsequent siblings)
5 siblings, 3 replies; 20+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-19 16:57 UTC (permalink / raw)
To: qemu-devel
Cc: armbru, kwolf, eblake, pkrempa, jcody, rtalur, vbellur,
Prasanna Kumar Kalever
gluster volfile server fetch happens through unix and/or tcp, it doesn't
support volfile fetch over rdma, the rdma code may actually mislead,
to make sure things do not break, for now we fallback to tcp when requested
for rdma with a warning.
If you are wondering how this worked all these days, its the gluster libgfapi
code which handles anything other than unix transport as socket/tcp, sad but
true.
Also gluster doesn't support ipv6 addresses, removing the ipv6 related
comments/docs section
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
block/gluster.c | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/block/gluster.c b/block/gluster.c
index 40ee852..8a54ad4 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -12,6 +12,7 @@
#include "block/block_int.h"
#include "qapi/error.h"
#include "qemu/uri.h"
+#include "qemu/error-report.h"
#define GLUSTER_OPT_FILENAME "filename"
#define GLUSTER_OPT_DEBUG "debug"
@@ -134,12 +135,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
*
* 'transport' specifies the transport type used to connect to gluster
* management daemon (glusterd). Valid transport types are
- * tcp, unix and rdma. If a transport type isn't specified, then tcp
- * type is assumed.
+ * tcp, unix. If a transport type isn't specified, then tcp type is assumed.
*
* 'host' specifies the host where the volume file specification for
- * the given volume resides. This can be either hostname, ipv4 address
- * or ipv6 address. ipv6 address needs to be within square brackets [ ].
+ * the given volume resides. This can be either hostname, ipv4 address.
* If transport type is 'unix', then 'host' field should not be specified.
* The 'socket' field needs to be populated with the path to unix domain
* socket.
@@ -158,11 +157,8 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
* file=gluster://1.2.3.4/testvol/a.img
* file=gluster+tcp://1.2.3.4/testvol/a.img
* file=gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
- * file=gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
- * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
* file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
* file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
- * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
*/
static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
{
@@ -185,7 +181,9 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
gconf->transport = g_strdup("unix");
is_unix = true;
} else if (!strcmp(uri->scheme, "gluster+rdma")) {
- gconf->transport = g_strdup("rdma");
+ gconf->transport = g_strdup("tcp");
+ error_report("Warning: rdma feature is not supported falling "
+ "back to tcp");
} else {
ret = -EINVAL;
goto out;
@@ -1048,6 +1046,12 @@ static BlockDriver bdrv_gluster_unix = {
.create_opts = &qemu_gluster_create_opts,
};
+/* rdma is deprecated (actually never supported for volfile fetch)
+ * lets maintain for the protocol compatibility, to make sure things
+ * won't break immediately for now gluster+rdma will fall back to gluster+tcp
+ * protocol with Warning
+ * TODO: remove gluster+rdma interface support
+ */
static BlockDriver bdrv_gluster_rdma = {
.format_name = "gluster",
.protocol_name = "gluster+rdma",
--
2.7.4
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema
2016-07-19 16:57 [Qemu-devel] [PATCH v20 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
` (2 preceding siblings ...)
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 3/5] block/gluster: deprecate rdma support Prasanna Kumar Kalever
@ 2016-07-19 16:57 ` Prasanna Kumar Kalever
2016-07-19 17:30 ` Markus Armbruster
` (3 more replies)
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2016-07-19 21:40 ` [Qemu-devel] [PATCH v20 0/5] " Jeff Cody
5 siblings, 4 replies; 20+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-19 16:57 UTC (permalink / raw)
To: qemu-devel
Cc: armbru, kwolf, eblake, pkrempa, jcody, rtalur, vbellur,
Prasanna Kumar Kalever
this patch adds 'GlusterServer' related schema in qapi/block-core.json
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
block/gluster.c | 115 +++++++++++++++++++++++++++++----------------------
qapi/block-core.json | 68 +++++++++++++++++++++++++++---
2 files changed, 128 insertions(+), 55 deletions(-)
diff --git a/block/gluster.c b/block/gluster.c
index 8a54ad4..c4ca59e 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -16,6 +16,7 @@
#define GLUSTER_OPT_FILENAME "filename"
#define GLUSTER_OPT_DEBUG "debug"
+#define GLUSTER_DEFAULT_PORT 24007
#define GLUSTER_DEBUG_DEFAULT 4
#define GLUSTER_DEBUG_MAX 9
@@ -40,15 +41,6 @@ typedef struct BDRVGlusterReopenState {
struct glfs_fd *fd;
} BDRVGlusterReopenState;
-typedef struct GlusterConf {
- char *host;
- int port;
- char *volume;
- char *path;
- char *transport;
- int debug_level;
-} GlusterConf;
-
static QemuOptsList qemu_gluster_create_opts = {
.name = "qemu-gluster-create-opts",
@@ -92,18 +84,7 @@ static QemuOptsList runtime_opts = {
};
-static void qemu_gluster_gconf_free(GlusterConf *gconf)
-{
- if (gconf) {
- g_free(gconf->host);
- g_free(gconf->volume);
- g_free(gconf->path);
- g_free(gconf->transport);
- g_free(gconf);
- }
-}
-
-static int parse_volume_options(GlusterConf *gconf, char *path)
+static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
{
char *p, *q;
@@ -160,8 +141,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
* file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
* file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
*/
-static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
+static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
+ const char *filename)
{
+ GlusterServer *gsconf;
URI *uri;
QueryParams *qp = NULL;
bool is_unix = false;
@@ -172,16 +155,18 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
return -EINVAL;
}
+ gconf->server = gsconf = g_new0(GlusterServer, 1);
+
/* transport */
if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
- gconf->transport = g_strdup("tcp");
+ gsconf->type = GLUSTER_TRANSPORT_TCP;
} else if (!strcmp(uri->scheme, "gluster+tcp")) {
- gconf->transport = g_strdup("tcp");
+ gsconf->type = GLUSTER_TRANSPORT_TCP;
} else if (!strcmp(uri->scheme, "gluster+unix")) {
- gconf->transport = g_strdup("unix");
+ gsconf->type = GLUSTER_TRANSPORT_UNIX;
is_unix = true;
} else if (!strcmp(uri->scheme, "gluster+rdma")) {
- gconf->transport = g_strdup("tcp");
+ gsconf->type = GLUSTER_TRANSPORT_TCP;
error_report("Warning: rdma feature is not supported falling "
"back to tcp");
} else {
@@ -209,10 +194,14 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
ret = -EINVAL;
goto out;
}
- gconf->host = g_strdup(qp->p[0].value);
+ gsconf->u.q_unix.path = g_strdup(qp->p[0].value);
} else {
- gconf->host = g_strdup(uri->server ? uri->server : "localhost");
- gconf->port = uri->port;
+ gsconf->u.tcp.host = g_strdup(uri->server ? uri->server : "localhost");
+ if (uri->port) {
+ gsconf->u.tcp.port = g_strdup_printf("%d", uri->port);
+ } else {
+ gsconf->u.tcp.port = g_strdup_printf("%d", GLUSTER_DEFAULT_PORT);
+ }
}
out:
@@ -223,17 +212,18 @@ out:
return ret;
}
-static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
- Error **errp)
+static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
+ const char *filename, Error **errp)
{
struct glfs *glfs = NULL;
int ret;
int old_errno;
- ret = qemu_gluster_parseuri(gconf, filename);
+ ret = qemu_gluster_parse_uri(gconf, filename);
if (ret < 0) {
- error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
- "volume/path[?socket=...]");
+ error_setg(errp, "Invalid URI");
+ error_append_hint(errp, "Usage: file=gluster[+transport]://"
+ "[host[:port]]/volume/path[?socket=...]\n");
errno = -ret;
goto out;
}
@@ -243,8 +233,16 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
goto out;
}
- ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
- gconf->port);
+ if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
+ ret = glfs_set_volfile_server(glfs,
+ GlusterTransport_lookup[gconf->server->type],
+ gconf->server->u.q_unix.path, 0);
+ } else {
+ ret = glfs_set_volfile_server(glfs,
+ GlusterTransport_lookup[gconf->server->type],
+ gconf->server->u.tcp.host,
+ atoi(gconf->server->u.tcp.port));
+ }
if (ret < 0) {
goto out;
}
@@ -256,15 +254,22 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
ret = glfs_init(glfs);
if (ret) {
- error_setg_errno(errp, errno,
- "Gluster connection failed for host=%s port=%d "
- "volume=%s path=%s transport=%s", gconf->host,
- gconf->port, gconf->volume, gconf->path,
- gconf->transport);
+ if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
+ error_setg(errp,
+ "Gluster connection for volume %s, path %s failed on "
+ "socket %s ", gconf->volume, gconf->path,
+ gconf->server->u.q_unix.path);
+ } else {
+ error_setg(errp,
+ "Gluster connection for volume %s, path %s failed on "
+ "host %s and port %s ", gconf->volume, gconf->path,
+ gconf->server->u.tcp.host, gconf->server->u.tcp.port);
+ }
/* glfs_init sometimes doesn't set errno although docs suggest that */
- if (errno == 0)
+ if (errno == 0) {
errno = EINVAL;
+ }
goto out;
}
@@ -352,7 +357,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
BDRVGlusterState *s = bs->opaque;
int open_flags = 0;
int ret = 0;
- GlusterConf *gconf = g_new0(GlusterConf, 1);
+ BlockdevOptionsGluster *gconf = NULL;
QemuOpts *opts;
Error *local_err = NULL;
const char *filename;
@@ -375,7 +380,9 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
s->debug_level = GLUSTER_DEBUG_MAX;
}
+ gconf = g_new0(BlockdevOptionsGluster, 1);
gconf->debug_level = s->debug_level;
+ gconf->has_debug_level = true;
s->glfs = qemu_gluster_init(gconf, filename, errp);
if (!s->glfs) {
ret = -errno;
@@ -410,7 +417,9 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
out:
qemu_opts_del(opts);
- qemu_gluster_gconf_free(gconf);
+ if (gconf) {
+ qapi_free_BlockdevOptionsGluster(gconf);
+ }
if (!ret) {
return ret;
}
@@ -429,7 +438,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
int ret = 0;
BDRVGlusterState *s;
BDRVGlusterReopenState *reop_s;
- GlusterConf *gconf = NULL;
+ BlockdevOptionsGluster *gconf;
int open_flags = 0;
assert(state != NULL);
@@ -442,9 +451,9 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
qemu_gluster_parse_flags(state->flags, &open_flags);
- gconf = g_new0(GlusterConf, 1);
-
+ gconf = g_new0(BlockdevOptionsGluster, 1);
gconf->debug_level = s->debug_level;
+ gconf->has_debug_level = true;
reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
if (reop_s->glfs == NULL) {
ret = -errno;
@@ -470,7 +479,9 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
exit:
/* state->opaque will be freed in either the _abort or _commit */
- qemu_gluster_gconf_free(gconf);
+ if (gconf) {
+ qapi_free_BlockdevOptionsGluster(gconf);
+ }
return ret;
}
@@ -572,14 +583,15 @@ static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
static int qemu_gluster_create(const char *filename,
QemuOpts *opts, Error **errp)
{
+ BlockdevOptionsGluster *gconf;
struct glfs *glfs;
struct glfs_fd *fd;
int ret = 0;
int prealloc = 0;
int64_t total_size = 0;
char *tmp = NULL;
- GlusterConf *gconf = g_new0(GlusterConf, 1);
+ gconf = g_new0(BlockdevOptionsGluster, 1);
gconf->debug_level = qemu_opt_get_number_del(opts, GLUSTER_OPT_DEBUG,
GLUSTER_DEBUG_DEFAULT);
if (gconf->debug_level < 0) {
@@ -587,6 +599,7 @@ static int qemu_gluster_create(const char *filename,
} else if (gconf->debug_level > GLUSTER_DEBUG_MAX) {
gconf->debug_level = GLUSTER_DEBUG_MAX;
}
+ gconf->has_debug_level = true;
glfs = qemu_gluster_init(gconf, filename, errp);
if (!glfs) {
@@ -628,7 +641,9 @@ static int qemu_gluster_create(const char *filename,
}
out:
g_free(tmp);
- qemu_gluster_gconf_free(gconf);
+ if (gconf) {
+ qapi_free_BlockdevOptionsGluster(gconf);
+ }
if (glfs) {
glfs_fini(glfs);
}
diff --git a/qapi/block-core.json b/qapi/block-core.json
index a7fdb28..1fa0674 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1658,13 +1658,14 @@
# @host_device, @host_cdrom: Since 2.1
#
# Since: 2.0
+# @gluster: Since 2.7
##
{ 'enum': 'BlockdevDriver',
'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
- 'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
- 'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
- 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
- 'vmdk', 'vpc', 'vvfat' ] }
+ 'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
+ 'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
+ 'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
+ 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
##
# @BlockdevOptionsFile
@@ -2057,6 +2058,63 @@
'*read-pattern': 'QuorumReadPattern' } }
##
+# @GlusterTransport
+#
+# An enumeration of Gluster transport type
+#
+# @tcp: TCP - Transmission Control Protocol
+#
+# @unix: UNIX - Unix domain socket
+#
+# Since: 2.7
+##
+{ 'enum': 'GlusterTransport',
+ 'data': [ 'unix', 'tcp' ] }
+
+
+##
+# @GlusterServer
+#
+# Captures the address of a socket
+#
+# Details for connecting to a gluster server
+#
+# @type: Transport type used for gluster connection
+#
+# @unix: socket file
+#
+# @tcp: host address and port number
+#
+# Since: 2.7
+##
+{ 'union': 'GlusterServer',
+ 'base': { 'type': 'GlusterTransport' },
+ 'discriminator': 'type',
+ 'data': { 'unix': 'UnixSocketAddress',
+ 'tcp': 'InetSocketAddress' } }
+
+##
+# @BlockdevOptionsGluster
+#
+# Driver specific block device options for Gluster
+#
+# @volume: name of gluster volume where VM image resides
+#
+# @path: absolute path to image file in gluster volume
+#
+# @server: gluster server description
+#
+# @debug_level: #optional libgfapi log level (default '4' which is Error)
+#
+# Since: 2.7
+##
+{ 'struct': 'BlockdevOptionsGluster',
+ 'data': { 'volume': 'str',
+ 'path': 'str',
+ 'server': 'GlusterServer',
+ '*debug_level': 'int' } }
+
+##
# @BlockdevOptions
#
# Options for creating a block device. Many options are available for all
@@ -2119,7 +2177,7 @@
'file': 'BlockdevOptionsFile',
'ftp': 'BlockdevOptionsFile',
'ftps': 'BlockdevOptionsFile',
-# TODO gluster: Wait for structured options
+ 'gluster': 'BlockdevOptionsGluster',
'host_cdrom': 'BlockdevOptionsFile',
'host_device':'BlockdevOptionsFile',
'http': 'BlockdevOptionsFile',
--
2.7.4
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers
2016-07-19 16:57 [Qemu-devel] [PATCH v20 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
` (3 preceding siblings ...)
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema Prasanna Kumar Kalever
@ 2016-07-19 16:57 ` Prasanna Kumar Kalever
2016-07-19 17:55 ` Markus Armbruster
` (3 more replies)
2016-07-19 21:40 ` [Qemu-devel] [PATCH v20 0/5] " Jeff Cody
5 siblings, 4 replies; 20+ messages in thread
From: Prasanna Kumar Kalever @ 2016-07-19 16:57 UTC (permalink / raw)
To: qemu-devel
Cc: armbru, kwolf, eblake, pkrempa, jcody, rtalur, vbellur,
Prasanna Kumar Kalever
This patch adds a way to specify multiple volfile servers to the gluster
block backend of QEMU with tcp|rdma transport types and their port numbers.
Problem:
Currently VM Image on gluster volume is specified like this:
file=gluster[+tcp]://host[:port]/testvol/a.img
Say we have three hosts in a trusted pool with replica 3 volume in action.
When the host mentioned in the command above goes down for some reason,
the other two hosts are still available. But there's currently no way
to tell QEMU about them.
Solution:
New way of specifying VM Image on gluster volume with volfile servers:
(We still support old syntax to maintain backward compatibility)
Basic command line syntax looks like:
Pattern I:
-drive driver=gluster,
volume=testvol,path=/path/a.raw,[debug=N,]
server.0.type=tcp,
server.0.host=1.2.3.4,
server.0.port=24007,
server.1.type=unix,
server.1.socket=/path/socketfile
Pattern II:
'json:{"driver":"qcow2","file":{"driver":"gluster",
"volume":"testvol","path":"/path/a.qcow2",["debug":N,]
"server":[{hostinfo_1}, ...{hostinfo_N}]}}'
driver => 'gluster' (protocol name)
volume => name of gluster volume where our VM image resides
path => absolute path of image in gluster volume
[debug] => libgfapi loglevel [(0 - 9) default 4 -> Error]
{hostinfo} => {{type:"tcp",host:"1.2.3.4"[,port=24007]},
{type:"unix",socket:"/path/sockfile"}}
type => transport type used to connect to gluster management daemon,
it can be tcp|unix
host => host address (hostname/ipv4/ipv6 addresses/socket path)
port => port number on which glusterd is listening.
socket => path to socket file
Examples:
1.
-drive driver=qcow2,file.driver=gluster,
file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
file.server.0.type=tcp,
file.server.0.host=1.2.3.4,
file.server.0.port=24007,
file.server.1.type=tcp,
file.server.1.socket=/var/run/glusterd.socket
2.
'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
"path":"/path/a.qcow2","debug":9,"server":
[{type:"tcp",host:"1.2.3.4",port=24007},
{type:"unix",socket:"/var/run/glusterd.socket"}] } }'
This patch gives a mechanism to provide all the server addresses, which are in
replica set, so in case host1 is down VM can still boot from any of the
active hosts.
This is equivalent to the backup-volfile-servers option supported by
mount.glusterfs (FUSE way of mounting gluster volume)
credits: sincere thanks to all the supporters
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
---
block/gluster.c | 397 +++++++++++++++++++++++++++++++++++++++++++++------
qapi/block-core.json | 2 +-
2 files changed, 358 insertions(+), 41 deletions(-)
diff --git a/block/gluster.c b/block/gluster.c
index c4ca59e..0524789 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -11,15 +11,27 @@
#include <glusterfs/api/glfs.h>
#include "block/block_int.h"
#include "qapi/error.h"
+#include "qapi/qmp/qerror.h"
#include "qemu/uri.h"
#include "qemu/error-report.h"
#define GLUSTER_OPT_FILENAME "filename"
+#define GLUSTER_OPT_VOLUME "volume"
+#define GLUSTER_OPT_PATH "path"
+#define GLUSTER_OPT_TYPE "type"
+#define GLUSTER_OPT_SERVER_PATTERN "server."
+#define GLUSTER_OPT_HOST "host"
+#define GLUSTER_OPT_PORT "port"
+#define GLUSTER_OPT_TO "to"
+#define GLUSTER_OPT_IPV4 "ipv4"
+#define GLUSTER_OPT_IPV6 "ipv6"
+#define GLUSTER_OPT_SOCKET "socket"
#define GLUSTER_OPT_DEBUG "debug"
#define GLUSTER_DEFAULT_PORT 24007
#define GLUSTER_DEBUG_DEFAULT 4
#define GLUSTER_DEBUG_MAX 9
+#define GERR_INDEX_HINT "hint: check in 'server' array index '%d'\n"
typedef struct GlusterAIOCB {
int64_t size;
@@ -83,6 +95,92 @@ static QemuOptsList runtime_opts = {
},
};
+static QemuOptsList runtime_json_opts = {
+ .name = "gluster_json",
+ .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
+ .desc = {
+ {
+ .name = GLUSTER_OPT_VOLUME,
+ .type = QEMU_OPT_STRING,
+ .help = "name of gluster volume where VM image resides",
+ },
+ {
+ .name = GLUSTER_OPT_PATH,
+ .type = QEMU_OPT_STRING,
+ .help = "absolute path to image file in gluster volume",
+ },
+ {
+ .name = GLUSTER_OPT_DEBUG,
+ .type = QEMU_OPT_NUMBER,
+ .help = "Gluster log level, valid range is 0-9",
+ },
+ { /* end of list */ }
+ },
+};
+
+static QemuOptsList runtime_type_opts = {
+ .name = "gluster_type",
+ .head = QTAILQ_HEAD_INITIALIZER(runtime_type_opts.head),
+ .desc = {
+ {
+ .name = GLUSTER_OPT_TYPE,
+ .type = QEMU_OPT_STRING,
+ .help = "tcp|unix",
+ },
+ { /* end of list */ }
+ },
+};
+
+static QemuOptsList runtime_unix_opts = {
+ .name = "gluster_unix",
+ .head = QTAILQ_HEAD_INITIALIZER(runtime_unix_opts.head),
+ .desc = {
+ {
+ .name = GLUSTER_OPT_SOCKET,
+ .type = QEMU_OPT_STRING,
+ .help = "socket file path)",
+ },
+ { /* end of list */ }
+ },
+};
+
+static QemuOptsList runtime_tcp_opts = {
+ .name = "gluster_tcp",
+ .head = QTAILQ_HEAD_INITIALIZER(runtime_tcp_opts.head),
+ .desc = {
+ {
+ .name = GLUSTER_OPT_TYPE,
+ .type = QEMU_OPT_STRING,
+ .help = "tcp|unix",
+ },
+ {
+ .name = GLUSTER_OPT_HOST,
+ .type = QEMU_OPT_STRING,
+ .help = "host address (hostname/ipv4/ipv6 addresses)",
+ },
+ {
+ .name = GLUSTER_OPT_PORT,
+ .type = QEMU_OPT_NUMBER,
+ .help = "port number on which glusterd is listening (default 24007)",
+ },
+ {
+ .name = "to",
+ .type = QEMU_OPT_NUMBER,
+ .help = "max port number, not supported by gluster",
+ },
+ {
+ .name = "ipv4",
+ .type = QEMU_OPT_BOOL,
+ .help = "ipv4 bool value, not supported by gluster",
+ },
+ {
+ .name = "ipv6",
+ .type = QEMU_OPT_BOOL,
+ .help = "ipv6 bool value, not supported by gluster",
+ },
+ { /* end of list */ }
+ },
+};
static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
{
@@ -155,7 +253,8 @@ static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
return -EINVAL;
}
- gconf->server = gsconf = g_new0(GlusterServer, 1);
+ gconf->server = g_new0(GlusterServerList, 1);
+ gconf->server->value = gsconf = g_new0(GlusterServer, 1);
/* transport */
if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
@@ -212,39 +311,34 @@ out:
return ret;
}
-static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
- const char *filename, Error **errp)
+static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
+ Error **errp)
{
- struct glfs *glfs = NULL;
+ struct glfs *glfs;
int ret;
int old_errno;
-
- ret = qemu_gluster_parse_uri(gconf, filename);
- if (ret < 0) {
- error_setg(errp, "Invalid URI");
- error_append_hint(errp, "Usage: file=gluster[+transport]://"
- "[host[:port]]/volume/path[?socket=...]\n");
- errno = -ret;
- goto out;
- }
+ GlusterServerList *server;
glfs = glfs_new(gconf->volume);
if (!glfs) {
goto out;
}
- if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
- ret = glfs_set_volfile_server(glfs,
- GlusterTransport_lookup[gconf->server->type],
- gconf->server->u.q_unix.path, 0);
- } else {
- ret = glfs_set_volfile_server(glfs,
- GlusterTransport_lookup[gconf->server->type],
- gconf->server->u.tcp.host,
- atoi(gconf->server->u.tcp.port));
- }
- if (ret < 0) {
- goto out;
+ for (server = gconf->server; server; server = server->next) {
+ if (server->value->type == GLUSTER_TRANSPORT_UNIX) {
+ ret = glfs_set_volfile_server(glfs,
+ GlusterTransport_lookup[server->value->type],
+ server->value->u.q_unix.path, 0);
+ } else {
+ ret = glfs_set_volfile_server(glfs,
+ GlusterTransport_lookup[server->value->type],
+ server->value->u.tcp.host,
+ atoi(server->value->u.tcp.port));
+ }
+
+ if (ret < 0) {
+ goto out;
+ }
}
ret = glfs_set_logging(glfs, "-", gconf->debug_level);
@@ -254,18 +348,21 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
ret = glfs_init(glfs);
if (ret) {
- if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
- error_setg(errp,
- "Gluster connection for volume %s, path %s failed on "
- "socket %s ", gconf->volume, gconf->path,
- gconf->server->u.q_unix.path);
- } else {
- error_setg(errp,
- "Gluster connection for volume %s, path %s failed on "
- "host %s and port %s ", gconf->volume, gconf->path,
- gconf->server->u.tcp.host, gconf->server->u.tcp.port);
+ error_setg(errp, "Gluster connection for volume %s, path %s failed"
+ " to connect", gconf->volume, gconf->path);
+ for (server = gconf->server; server; server = server->next) {
+ if (server->value->type == GLUSTER_TRANSPORT_UNIX) {
+ error_append_hint(errp, "hint: failed on socket %s ",
+ server->value->u.q_unix.path);
+ } else {
+ error_append_hint(errp, "hint: failed on host %s and port %s ",
+ server->value->u.tcp.host,
+ server->value->u.tcp.port);
+ }
}
+ error_append_hint(errp, "Please refer to gluster logs for more info\n");
+
/* glfs_init sometimes doesn't set errno although docs suggest that */
if (errno == 0) {
errno = EINVAL;
@@ -284,6 +381,226 @@ out:
return NULL;
}
+static int qapi_enum_parse(const char *opt)
+{
+ int i;
+
+ if (!opt) {
+ return GLUSTER_TRANSPORT__MAX;
+ }
+
+ for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
+ if (!strcmp(opt, GlusterTransport_lookup[i])) {
+ return i;
+ }
+ }
+
+ return i;
+}
+
+/*
+ * Convert the json formatted command line into qapi.
+*/
+static int qemu_gluster_parse_json(BlockdevOptionsGluster *gconf,
+ QDict *options, Error **errp)
+{
+ QemuOpts *opts;
+ GlusterServer *gsconf;
+ GlusterServerList *curr = NULL;
+ QDict *backing_options = NULL;
+ Error *local_err = NULL;
+ char *str = NULL;
+ const char *ptr;
+ size_t num_servers;
+ int i;
+
+ /* create opts info from runtime_json_opts list */
+ opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
+ qemu_opts_absorb_qdict(opts, options, &local_err);
+ if (local_err) {
+ goto out;
+ }
+
+ num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
+ if (num_servers < 1) {
+ error_setg(&local_err, QERR_MISSING_PARAMETER, "server");
+ goto out;
+ }
+
+ ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
+ if (!ptr) {
+ error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_VOLUME);
+ goto out;
+ }
+ gconf->volume = g_strdup(ptr);
+
+ ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
+ if (!ptr) {
+ error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_PATH);
+ goto out;
+ }
+ gconf->path = g_strdup(ptr);
+ qemu_opts_del(opts);
+
+ for (i = 0; i < num_servers; i++) {
+ str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
+ qdict_extract_subqdict(options, &backing_options, str);
+
+ /* create opts info from runtime_type_opts list */
+ opts = qemu_opts_create(&runtime_type_opts, NULL, 0, &error_abort);
+ qemu_opts_absorb_qdict(opts, backing_options, &local_err);
+ if (local_err) {
+ goto out;
+ }
+
+ ptr = qemu_opt_get(opts, GLUSTER_OPT_TYPE);
+ gsconf = g_new0(GlusterServer, 1);
+ gsconf->type = qapi_enum_parse(ptr);
+ if (!ptr) {
+ error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_TYPE);
+ error_append_hint(&local_err, GERR_INDEX_HINT, i);
+ goto out;
+
+ }
+ if (gsconf->type == GLUSTER_TRANSPORT__MAX) {
+ error_setg(&local_err, QERR_INVALID_PARAMETER_VALUE,
+ GLUSTER_OPT_TYPE, "tcp or unix");
+ error_append_hint(&local_err, GERR_INDEX_HINT, i);
+ goto out;
+ }
+ qemu_opts_del(opts);
+
+ if (gsconf->type == GLUSTER_TRANSPORT_TCP) {
+ /* create opts info from runtime_tcp_opts list */
+ opts = qemu_opts_create(&runtime_tcp_opts, NULL, 0, &error_abort);
+ qemu_opts_absorb_qdict(opts, backing_options, &local_err);
+ if (local_err) {
+ goto out;
+ }
+
+ ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
+ if (!ptr) {
+ error_setg(&local_err, QERR_MISSING_PARAMETER,
+ GLUSTER_OPT_HOST);
+ error_append_hint(&local_err, GERR_INDEX_HINT, i);
+ goto out;
+ }
+ gsconf->u.tcp.host = g_strdup(ptr);
+ ptr = qemu_opt_get(opts, GLUSTER_OPT_PORT);
+ if (!ptr) {
+ error_setg(&local_err, QERR_MISSING_PARAMETER,
+ GLUSTER_OPT_PORT);
+ error_append_hint(&local_err, GERR_INDEX_HINT, i);
+ goto out;
+ }
+ gsconf->u.tcp.port = g_strdup(ptr);
+
+ /* defend for unsupported fields in InetSocketAddress,
+ * i.e. @ipv4, @ipv6 and @to
+ */
+ ptr = qemu_opt_get(opts, GLUSTER_OPT_TO);
+ if (ptr) {
+ gsconf->u.tcp.has_to = true;
+ }
+ ptr = qemu_opt_get(opts, GLUSTER_OPT_IPV4);
+ if (ptr) {
+ gsconf->u.tcp.has_ipv4 = true;
+ }
+ ptr = qemu_opt_get(opts, GLUSTER_OPT_IPV6);
+ if (ptr) {
+ gsconf->u.tcp.has_ipv6 = true;
+ }
+ if (gsconf->u.tcp.has_to) {
+ error_setg(&local_err, "Parameter 'to' not supported");
+ goto out;
+ }
+ if (gsconf->u.tcp.has_ipv4 || gsconf->u.tcp.has_ipv6) {
+ error_setg(&local_err, "Parameters 'ipv4/ipv6' not supported");
+ goto out;
+ }
+ qemu_opts_del(opts);
+ } else {
+ /* create opts info from runtime_unix_opts list */
+ opts = qemu_opts_create(&runtime_unix_opts, NULL, 0, &error_abort);
+ qemu_opts_absorb_qdict(opts, backing_options, &local_err);
+ if (local_err) {
+ goto out;
+ }
+
+ ptr = qemu_opt_get(opts, GLUSTER_OPT_SOCKET);
+ if (!ptr) {
+ error_setg(&local_err, QERR_MISSING_PARAMETER,
+ GLUSTER_OPT_SOCKET);
+ error_append_hint(&local_err, GERR_INDEX_HINT, i);
+ goto out;
+ }
+ gsconf->u.q_unix.path = g_strdup(ptr);
+ qemu_opts_del(opts);
+ }
+
+ if (gconf->server == NULL) {
+ gconf->server = g_new0(GlusterServerList, 1);
+ gconf->server->value = gsconf;
+ curr = gconf->server;
+ } else {
+ curr->next = g_new0(GlusterServerList, 1);
+ curr->next->value = gsconf;
+ curr = curr->next;
+ }
+
+ qdict_del(backing_options, str);
+ g_free(str);
+ str = NULL;
+ }
+
+ return 0;
+
+out:
+ error_propagate(errp, local_err);
+ qemu_opts_del(opts);
+ if (str) {
+ qdict_del(backing_options, str);
+ g_free(str);
+ }
+ errno = EINVAL;
+ return -errno;
+}
+
+static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
+ const char *filename,
+ QDict *options, Error **errp)
+{
+ int ret;
+ if (filename) {
+ ret = qemu_gluster_parse_uri(gconf, filename);
+ if (ret < 0) {
+ error_setg(errp, "invalid URI");
+ error_append_hint(errp, "Usage: file=gluster[+transport]://"
+ "[host[:port]]/volume/path[?socket=...]\n");
+ errno = -ret;
+ return NULL;
+ }
+ } else {
+ ret = qemu_gluster_parse_json(gconf, options, errp);
+ if (ret < 0) {
+ error_append_hint(errp, "Usage: "
+ "-drive driver=qcow2,file.driver=gluster,"
+ "file.volume=testvol,file.path=/path/a.qcow2"
+ "[,file.debug=9],file.server.0.type=tcp,"
+ "file.server.0.host=1.2.3.4,"
+ "file.server.0.port=24007,"
+ "file.server.1.transport=unix,"
+ "file.server.1.socket=/var/run/glusterd.socket ..."
+ "\n");
+ errno = -ret;
+ return NULL;
+ }
+
+ }
+
+ return qemu_gluster_glfs_init(gconf, errp);
+}
+
static void qemu_gluster_complete_aio(void *opaque)
{
GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
@@ -383,7 +700,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
gconf = g_new0(BlockdevOptionsGluster, 1);
gconf->debug_level = s->debug_level;
gconf->has_debug_level = true;
- s->glfs = qemu_gluster_init(gconf, filename, errp);
+ s->glfs = qemu_gluster_init(gconf, filename, options, errp);
if (!s->glfs) {
ret = -errno;
goto out;
@@ -454,7 +771,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
gconf = g_new0(BlockdevOptionsGluster, 1);
gconf->debug_level = s->debug_level;
gconf->has_debug_level = true;
- reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
+ reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, NULL, errp);
if (reop_s->glfs == NULL) {
ret = -errno;
goto exit;
@@ -601,7 +918,7 @@ static int qemu_gluster_create(const char *filename,
}
gconf->has_debug_level = true;
- glfs = qemu_gluster_init(gconf, filename, errp);
+ glfs = qemu_gluster_init(gconf, filename, NULL, errp);
if (!glfs) {
ret = -errno;
goto out;
@@ -981,7 +1298,7 @@ static BlockDriver bdrv_gluster = {
.format_name = "gluster",
.protocol_name = "gluster",
.instance_size = sizeof(BDRVGlusterState),
- .bdrv_needs_filename = true,
+ .bdrv_needs_filename = false,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
@@ -1009,7 +1326,7 @@ static BlockDriver bdrv_gluster_tcp = {
.format_name = "gluster",
.protocol_name = "gluster+tcp",
.instance_size = sizeof(BDRVGlusterState),
- .bdrv_needs_filename = true,
+ .bdrv_needs_filename = false,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 1fa0674..5f8179b 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -2111,7 +2111,7 @@
{ 'struct': 'BlockdevOptionsGluster',
'data': { 'volume': 'str',
'path': 'str',
- 'server': 'GlusterServer',
+ 'server': ['GlusterServer'],
'*debug_level': 'int' } }
##
--
2.7.4
^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 3/5] block/gluster: deprecate rdma support
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 3/5] block/gluster: deprecate rdma support Prasanna Kumar Kalever
@ 2016-07-19 17:17 ` Markus Armbruster
2016-07-19 18:09 ` Eric Blake
2016-07-19 18:46 ` Jeff Cody
2 siblings, 0 replies; 20+ messages in thread
From: Markus Armbruster @ 2016-07-19 17:17 UTC (permalink / raw)
To: Prasanna Kumar Kalever; +Cc: qemu-devel, kwolf, pkrempa, vbellur, jcody, rtalur
Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
> gluster volfile server fetch happens through unix and/or tcp, it doesn't
> support volfile fetch over rdma, the rdma code may actually mislead,
> to make sure things do not break, for now we fallback to tcp when requested
> for rdma with a warning.
>
> If you are wondering how this worked all these days, its the gluster libgfapi
> code which handles anything other than unix transport as socket/tcp, sad but
> true.
>
> Also gluster doesn't support ipv6 addresses, removing the ipv6 related
> comments/docs section
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
> block/gluster.c | 20 ++++++++++++--------
> 1 file changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/block/gluster.c b/block/gluster.c
> index 40ee852..8a54ad4 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -12,6 +12,7 @@
> #include "block/block_int.h"
> #include "qapi/error.h"
> #include "qemu/uri.h"
> +#include "qemu/error-report.h"
>
> #define GLUSTER_OPT_FILENAME "filename"
> #define GLUSTER_OPT_DEBUG "debug"
> @@ -134,12 +135,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
> *
> * 'transport' specifies the transport type used to connect to gluster
> * management daemon (glusterd). Valid transport types are
> - * tcp, unix and rdma. If a transport type isn't specified, then tcp
> - * type is assumed.
> + * tcp, unix. If a transport type isn't specified, then tcp type is assumed.
> *
> * 'host' specifies the host where the volume file specification for
> - * the given volume resides. This can be either hostname, ipv4 address
> - * or ipv6 address. ipv6 address needs to be within square brackets [ ].
> + * the given volume resides. This can be either hostname, ipv4 address.
"either hostname or ipv4 address"
Could be touched up on commit, or in a cleanup patch on top.
> * If transport type is 'unix', then 'host' field should not be specified.
> * The 'socket' field needs to be populated with the path to unix domain
> * socket.
> @@ -158,11 +157,8 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
> * file=gluster://1.2.3.4/testvol/a.img
> * file=gluster+tcp://1.2.3.4/testvol/a.img
> * file=gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
> - * file=gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
> - * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
> * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
> * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
> - * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
> */
> static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> {
> @@ -185,7 +181,9 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> gconf->transport = g_strdup("unix");
> is_unix = true;
> } else if (!strcmp(uri->scheme, "gluster+rdma")) {
> - gconf->transport = g_strdup("rdma");
> + gconf->transport = g_strdup("tcp");
> + error_report("Warning: rdma feature is not supported falling "
> + "back to tcp");
"not supported, falling back"
Could be touched up on commit, or in a cleanup patch on top.
> } else {
> ret = -EINVAL;
> goto out;
> @@ -1048,6 +1046,12 @@ static BlockDriver bdrv_gluster_unix = {
> .create_opts = &qemu_gluster_create_opts,
> };
>
> +/* rdma is deprecated (actually never supported for volfile fetch)
> + * lets maintain for the protocol compatibility, to make sure things
> + * won't break immediately for now gluster+rdma will fall back to gluster+tcp
> + * protocol with Warning
> + * TODO: remove gluster+rdma interface support
> + */
> static BlockDriver bdrv_gluster_rdma = {
> .format_name = "gluster",
> .protocol_name = "gluster+rdma",
This comment could use some polish, but we can do that on top.
Reviewed-by: Markus Armbruster <armbru@redhat.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema Prasanna Kumar Kalever
@ 2016-07-19 17:30 ` Markus Armbruster
2016-07-19 17:37 ` Markus Armbruster
` (2 subsequent siblings)
3 siblings, 0 replies; 20+ messages in thread
From: Markus Armbruster @ 2016-07-19 17:30 UTC (permalink / raw)
To: Prasanna Kumar Kalever; +Cc: qemu-devel, kwolf, pkrempa, vbellur, jcody, rtalur
Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
> this patch adds 'GlusterServer' related schema in qapi/block-core.json
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
> block/gluster.c | 115 +++++++++++++++++++++++++++++----------------------
> qapi/block-core.json | 68 +++++++++++++++++++++++++++---
> 2 files changed, 128 insertions(+), 55 deletions(-)
>
> diff --git a/block/gluster.c b/block/gluster.c
> index 8a54ad4..c4ca59e 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -16,6 +16,7 @@
>
> #define GLUSTER_OPT_FILENAME "filename"
> #define GLUSTER_OPT_DEBUG "debug"
> +#define GLUSTER_DEFAULT_PORT 24007
> #define GLUSTER_DEBUG_DEFAULT 4
> #define GLUSTER_DEBUG_MAX 9
>
> @@ -40,15 +41,6 @@ typedef struct BDRVGlusterReopenState {
> struct glfs_fd *fd;
> } BDRVGlusterReopenState;
>
> -typedef struct GlusterConf {
> - char *host;
> - int port;
> - char *volume;
> - char *path;
> - char *transport;
> - int debug_level;
> -} GlusterConf;
> -
>
> static QemuOptsList qemu_gluster_create_opts = {
> .name = "qemu-gluster-create-opts",
> @@ -92,18 +84,7 @@ static QemuOptsList runtime_opts = {
> };
>
>
> -static void qemu_gluster_gconf_free(GlusterConf *gconf)
> -{
> - if (gconf) {
> - g_free(gconf->host);
> - g_free(gconf->volume);
> - g_free(gconf->path);
> - g_free(gconf->transport);
> - g_free(gconf);
> - }
> -}
> -
> -static int parse_volume_options(GlusterConf *gconf, char *path)
> +static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
> {
> char *p, *q;
>
> @@ -160,8 +141,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
> * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
> * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
> */
> -static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> +static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
> + const char *filename)
> {
> + GlusterServer *gsconf;
> URI *uri;
> QueryParams *qp = NULL;
> bool is_unix = false;
> @@ -172,16 +155,18 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> return -EINVAL;
> }
>
> + gconf->server = gsconf = g_new0(GlusterServer, 1);
> +
> /* transport */
> if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
> - gconf->transport = g_strdup("tcp");
> + gsconf->type = GLUSTER_TRANSPORT_TCP;
> } else if (!strcmp(uri->scheme, "gluster+tcp")) {
> - gconf->transport = g_strdup("tcp");
> + gsconf->type = GLUSTER_TRANSPORT_TCP;
> } else if (!strcmp(uri->scheme, "gluster+unix")) {
> - gconf->transport = g_strdup("unix");
> + gsconf->type = GLUSTER_TRANSPORT_UNIX;
> is_unix = true;
> } else if (!strcmp(uri->scheme, "gluster+rdma")) {
> - gconf->transport = g_strdup("tcp");
> + gsconf->type = GLUSTER_TRANSPORT_TCP;
> error_report("Warning: rdma feature is not supported falling "
> "back to tcp");
> } else {
> @@ -209,10 +194,14 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> ret = -EINVAL;
> goto out;
> }
> - gconf->host = g_strdup(qp->p[0].value);
> + gsconf->u.q_unix.path = g_strdup(qp->p[0].value);
> } else {
> - gconf->host = g_strdup(uri->server ? uri->server : "localhost");
> - gconf->port = uri->port;
> + gsconf->u.tcp.host = g_strdup(uri->server ? uri->server : "localhost");
> + if (uri->port) {
> + gsconf->u.tcp.port = g_strdup_printf("%d", uri->port);
> + } else {
> + gsconf->u.tcp.port = g_strdup_printf("%d", GLUSTER_DEFAULT_PORT);
> + }
I'd prefer something like
gsconf->u.tcp.port = g_strdup_printf("%d",
uri->has_port ? uri->port
: GLUSTER_DEFAULT_PORT);
or with a new variable port:
port = uri->has_port ? uri->port : GLUSTER_DEFAULT_PORT;
gsconf->u.tcp.port = g_strdup_printf("%d", port);
Not worth a respin now, we can take it as is.
> }
>
> out:
> @@ -223,17 +212,18 @@ out:
> return ret;
> }
>
> -static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
> - Error **errp)
> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
> + const char *filename, Error **errp)
> {
> struct glfs *glfs = NULL;
> int ret;
> int old_errno;
>
> - ret = qemu_gluster_parseuri(gconf, filename);
> + ret = qemu_gluster_parse_uri(gconf, filename);
> if (ret < 0) {
> - error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> - "volume/path[?socket=...]");
> + error_setg(errp, "Invalid URI");
> + error_append_hint(errp, "Usage: file=gluster[+transport]://"
> + "[host[:port]]/volume/path[?socket=...]\n");
> errno = -ret;
> goto out;
> }
> @@ -243,8 +233,16 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
> goto out;
> }
>
> - ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
> - gconf->port);
> + if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> + ret = glfs_set_volfile_server(glfs,
> + GlusterTransport_lookup[gconf->server->type],
> + gconf->server->u.q_unix.path, 0);
> + } else {
> + ret = glfs_set_volfile_server(glfs,
> + GlusterTransport_lookup[gconf->server->type],
> + gconf->server->u.tcp.host,
> + atoi(gconf->server->u.tcp.port));
atoi() is okay, because its argument can only come from
qemu_gluster_parse_uri(), so atoi() can't fail. It'll become wrong when
the argument comes from a QMP client.
> + }
> if (ret < 0) {
> goto out;
> }
> @@ -256,15 +254,22 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>
> ret = glfs_init(glfs);
> if (ret) {
> - error_setg_errno(errp, errno,
> - "Gluster connection failed for host=%s port=%d "
> - "volume=%s path=%s transport=%s", gconf->host,
> - gconf->port, gconf->volume, gconf->path,
> - gconf->transport);
> + if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> + error_setg(errp,
> + "Gluster connection for volume %s, path %s failed on "
> + "socket %s ", gconf->volume, gconf->path,
> + gconf->server->u.q_unix.path);
> + } else {
> + error_setg(errp,
> + "Gluster connection for volume %s, path %s failed on "
> + "host %s and port %s ", gconf->volume, gconf->path,
> + gconf->server->u.tcp.host, gconf->server->u.tcp.port);
> + }
>
> /* glfs_init sometimes doesn't set errno although docs suggest that */
> - if (errno == 0)
> + if (errno == 0) {
> errno = EINVAL;
> + }
>
> goto out;
> }
> @@ -352,7 +357,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
> BDRVGlusterState *s = bs->opaque;
> int open_flags = 0;
> int ret = 0;
> - GlusterConf *gconf = g_new0(GlusterConf, 1);
> + BlockdevOptionsGluster *gconf = NULL;
> QemuOpts *opts;
> Error *local_err = NULL;
> const char *filename;
> @@ -375,7 +380,9 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
> s->debug_level = GLUSTER_DEBUG_MAX;
> }
>
> + gconf = g_new0(BlockdevOptionsGluster, 1);
> gconf->debug_level = s->debug_level;
> + gconf->has_debug_level = true;
> s->glfs = qemu_gluster_init(gconf, filename, errp);
> if (!s->glfs) {
> ret = -errno;
> @@ -410,7 +417,9 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
>
> out:
> qemu_opts_del(opts);
> - qemu_gluster_gconf_free(gconf);
> + if (gconf) {
> + qapi_free_BlockdevOptionsGluster(gconf);
> + }
> if (!ret) {
> return ret;
> }
> @@ -429,7 +438,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
> int ret = 0;
> BDRVGlusterState *s;
> BDRVGlusterReopenState *reop_s;
> - GlusterConf *gconf = NULL;
> + BlockdevOptionsGluster *gconf;
> int open_flags = 0;
>
> assert(state != NULL);
> @@ -442,9 +451,9 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>
> qemu_gluster_parse_flags(state->flags, &open_flags);
>
> - gconf = g_new0(GlusterConf, 1);
> -
> + gconf = g_new0(BlockdevOptionsGluster, 1);
> gconf->debug_level = s->debug_level;
> + gconf->has_debug_level = true;
> reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
> if (reop_s->glfs == NULL) {
> ret = -errno;
> @@ -470,7 +479,9 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>
> exit:
> /* state->opaque will be freed in either the _abort or _commit */
> - qemu_gluster_gconf_free(gconf);
> + if (gconf) {
> + qapi_free_BlockdevOptionsGluster(gconf);
> + }
> return ret;
> }
>
> @@ -572,14 +583,15 @@ static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
> static int qemu_gluster_create(const char *filename,
> QemuOpts *opts, Error **errp)
> {
> + BlockdevOptionsGluster *gconf;
> struct glfs *glfs;
> struct glfs_fd *fd;
> int ret = 0;
> int prealloc = 0;
> int64_t total_size = 0;
> char *tmp = NULL;
> - GlusterConf *gconf = g_new0(GlusterConf, 1);
>
> + gconf = g_new0(BlockdevOptionsGluster, 1);
> gconf->debug_level = qemu_opt_get_number_del(opts, GLUSTER_OPT_DEBUG,
> GLUSTER_DEBUG_DEFAULT);
> if (gconf->debug_level < 0) {
> @@ -587,6 +599,7 @@ static int qemu_gluster_create(const char *filename,
> } else if (gconf->debug_level > GLUSTER_DEBUG_MAX) {
> gconf->debug_level = GLUSTER_DEBUG_MAX;
> }
> + gconf->has_debug_level = true;
>
> glfs = qemu_gluster_init(gconf, filename, errp);
> if (!glfs) {
> @@ -628,7 +641,9 @@ static int qemu_gluster_create(const char *filename,
> }
> out:
> g_free(tmp);
> - qemu_gluster_gconf_free(gconf);
> + if (gconf) {
> + qapi_free_BlockdevOptionsGluster(gconf);
> + }
> if (glfs) {
> glfs_fini(glfs);
> }
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index a7fdb28..1fa0674 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1658,13 +1658,14 @@
> # @host_device, @host_cdrom: Since 2.1
> #
> # Since: 2.0
> +# @gluster: Since 2.7
> ##
> { 'enum': 'BlockdevDriver',
> 'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
> - 'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
> - 'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
> - 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
> - 'vmdk', 'vpc', 'vvfat' ] }
> + 'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
> + 'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
> + 'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
> + 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
>
> ##
> # @BlockdevOptionsFile
> @@ -2057,6 +2058,63 @@
> '*read-pattern': 'QuorumReadPattern' } }
>
> ##
> +# @GlusterTransport
> +#
> +# An enumeration of Gluster transport type
> +#
> +# @tcp: TCP - Transmission Control Protocol
> +#
> +# @unix: UNIX - Unix domain socket
> +#
> +# Since: 2.7
> +##
> +{ 'enum': 'GlusterTransport',
> + 'data': [ 'unix', 'tcp' ] }
> +
> +
> +##
> +# @GlusterServer
> +#
> +# Captures the address of a socket
> +#
> +# Details for connecting to a gluster server
> +#
> +# @type: Transport type used for gluster connection
> +#
> +# @unix: socket file
> +#
> +# @tcp: host address and port number
> +#
> +# Since: 2.7
> +##
> +{ 'union': 'GlusterServer',
> + 'base': { 'type': 'GlusterTransport' },
> + 'discriminator': 'type',
> + 'data': { 'unix': 'UnixSocketAddress',
> + 'tcp': 'InetSocketAddress' } }
We might want to add comments describing the relationship to
SocketAddress, but that can be done on top.
> +
> +##
> +# @BlockdevOptionsGluster
> +#
> +# Driver specific block device options for Gluster
> +#
> +# @volume: name of gluster volume where VM image resides
> +#
> +# @path: absolute path to image file in gluster volume
> +#
> +# @server: gluster server description
> +#
> +# @debug_level: #optional libgfapi log level (default '4' which is Error)
> +#
> +# Since: 2.7
> +##
> +{ 'struct': 'BlockdevOptionsGluster',
> + 'data': { 'volume': 'str',
> + 'path': 'str',
> + 'server': 'GlusterServer',
> + '*debug_level': 'int' } }
> +
> +##
> # @BlockdevOptions
> #
> # Options for creating a block device. Many options are available for all
> @@ -2119,7 +2177,7 @@
> 'file': 'BlockdevOptionsFile',
> 'ftp': 'BlockdevOptionsFile',
> 'ftps': 'BlockdevOptionsFile',
> -# TODO gluster: Wait for structured options
> + 'gluster': 'BlockdevOptionsGluster',
> 'host_cdrom': 'BlockdevOptionsFile',
> 'host_device':'BlockdevOptionsFile',
> 'http': 'BlockdevOptionsFile',
Reviewed-by: Markus Armbruster <armbru@redhat.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema Prasanna Kumar Kalever
2016-07-19 17:30 ` Markus Armbruster
@ 2016-07-19 17:37 ` Markus Armbruster
2016-07-19 18:38 ` Eric Blake
2016-07-19 20:39 ` Jeff Cody
3 siblings, 0 replies; 20+ messages in thread
From: Markus Armbruster @ 2016-07-19 17:37 UTC (permalink / raw)
To: Prasanna Kumar Kalever; +Cc: qemu-devel, kwolf, pkrempa, vbellur, jcody, rtalur
One more thing...
Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
> this patch adds 'GlusterServer' related schema in qapi/block-core.json
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
> block/gluster.c | 115 +++++++++++++++++++++++++++++----------------------
> qapi/block-core.json | 68 +++++++++++++++++++++++++++---
> 2 files changed, 128 insertions(+), 55 deletions(-)
>
> diff --git a/block/gluster.c b/block/gluster.c
> index 8a54ad4..c4ca59e 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
[...]
> @@ -628,7 +641,9 @@ static int qemu_gluster_create(const char *filename,
> }
> out:
> g_free(tmp);
> - qemu_gluster_gconf_free(gconf);
> + if (gconf) {
> + qapi_free_BlockdevOptionsGluster(gconf);
> + }
qapi_free_FOO(NULL) is safe. Let's drop the conditional. Could be done
on commit, or as a follow-up cleanup.
> if (glfs) {
> glfs_fini(glfs);
> }
[...]
R-by stands.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2016-07-19 17:55 ` Markus Armbruster
2016-07-19 18:33 ` Markus Armbruster
` (2 subsequent siblings)
3 siblings, 0 replies; 20+ messages in thread
From: Markus Armbruster @ 2016-07-19 17:55 UTC (permalink / raw)
To: Prasanna Kumar Kalever; +Cc: qemu-devel, kwolf, pkrempa, vbellur, jcody, rtalur
Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.
>
> Problem:
>
> Currently VM Image on gluster volume is specified like this:
>
> file=gluster[+tcp]://host[:port]/testvol/a.img
>
> Say we have three hosts in a trusted pool with replica 3 volume in action.
> When the host mentioned in the command above goes down for some reason,
> the other two hosts are still available. But there's currently no way
> to tell QEMU about them.
>
> Solution:
>
> New way of specifying VM Image on gluster volume with volfile servers:
> (We still support old syntax to maintain backward compatibility)
>
> Basic command line syntax looks like:
>
> Pattern I:
> -drive driver=gluster,
> volume=testvol,path=/path/a.raw,[debug=N,]
> server.0.type=tcp,
> server.0.host=1.2.3.4,
> server.0.port=24007,
> server.1.type=unix,
> server.1.socket=/path/socketfile
>
> Pattern II:
> 'json:{"driver":"qcow2","file":{"driver":"gluster",
> "volume":"testvol","path":"/path/a.qcow2",["debug":N,]
> "server":[{hostinfo_1}, ...{hostinfo_N}]}}'
>
> driver => 'gluster' (protocol name)
> volume => name of gluster volume where our VM image resides
> path => absolute path of image in gluster volume
> [debug] => libgfapi loglevel [(0 - 9) default 4 -> Error]
>
> {hostinfo} => {{type:"tcp",host:"1.2.3.4"[,port=24007]},
> {type:"unix",socket:"/path/sockfile"}}
>
> type => transport type used to connect to gluster management daemon,
> it can be tcp|unix
> host => host address (hostname/ipv4/ipv6 addresses/socket path)
> port => port number on which glusterd is listening.
> socket => path to socket file
>
> Examples:
> 1.
> -drive driver=qcow2,file.driver=gluster,
> file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
> file.server.0.type=tcp,
> file.server.0.host=1.2.3.4,
> file.server.0.port=24007,
> file.server.1.type=tcp,
> file.server.1.socket=/var/run/glusterd.socket
> 2.
> 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
> "path":"/path/a.qcow2","debug":9,"server":
> [{type:"tcp",host:"1.2.3.4",port=24007},
> {type:"unix",socket:"/var/run/glusterd.socket"}] } }'
I tried using this as argument for -drive, like this:
$ qemu-system-x86_64 -drive 'json:{"driver":"qcow2","file":{"driver":"gluster", ...
but I get a "Must specify either driver or file" error. What am I doing wrong?
> This patch gives a mechanism to provide all the server addresses, which are in
> replica set, so in case host1 is down VM can still boot from any of the
> active hosts.
>
> This is equivalent to the backup-volfile-servers option supported by
> mount.glusterfs (FUSE way of mounting gluster volume)
>
> credits: sincere thanks to all the supporters
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
> block/gluster.c | 397 +++++++++++++++++++++++++++++++++++++++++++++------
> qapi/block-core.json | 2 +-
> 2 files changed, 358 insertions(+), 41 deletions(-)
>
> diff --git a/block/gluster.c b/block/gluster.c
> index c4ca59e..0524789 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -11,15 +11,27 @@
> #include <glusterfs/api/glfs.h>
> #include "block/block_int.h"
> #include "qapi/error.h"
> +#include "qapi/qmp/qerror.h"
> #include "qemu/uri.h"
> #include "qemu/error-report.h"
>
> #define GLUSTER_OPT_FILENAME "filename"
> +#define GLUSTER_OPT_VOLUME "volume"
> +#define GLUSTER_OPT_PATH "path"
> +#define GLUSTER_OPT_TYPE "type"
> +#define GLUSTER_OPT_SERVER_PATTERN "server."
> +#define GLUSTER_OPT_HOST "host"
> +#define GLUSTER_OPT_PORT "port"
> +#define GLUSTER_OPT_TO "to"
> +#define GLUSTER_OPT_IPV4 "ipv4"
> +#define GLUSTER_OPT_IPV6 "ipv6"
> +#define GLUSTER_OPT_SOCKET "socket"
> #define GLUSTER_OPT_DEBUG "debug"
> #define GLUSTER_DEFAULT_PORT 24007
> #define GLUSTER_DEBUG_DEFAULT 4
> #define GLUSTER_DEBUG_MAX 9
>
> +#define GERR_INDEX_HINT "hint: check in 'server' array index '%d'\n"
>
> typedef struct GlusterAIOCB {
> int64_t size;
> @@ -83,6 +95,92 @@ static QemuOptsList runtime_opts = {
> },
> };
>
> +static QemuOptsList runtime_json_opts = {
> + .name = "gluster_json",
> + .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
> + .desc = {
> + {
> + .name = GLUSTER_OPT_VOLUME,
> + .type = QEMU_OPT_STRING,
> + .help = "name of gluster volume where VM image resides",
> + },
> + {
> + .name = GLUSTER_OPT_PATH,
> + .type = QEMU_OPT_STRING,
> + .help = "absolute path to image file in gluster volume",
> + },
> + {
> + .name = GLUSTER_OPT_DEBUG,
> + .type = QEMU_OPT_NUMBER,
> + .help = "Gluster log level, valid range is 0-9",
> + },
> + { /* end of list */ }
> + },
> +};
> +
> +static QemuOptsList runtime_type_opts = {
> + .name = "gluster_type",
> + .head = QTAILQ_HEAD_INITIALIZER(runtime_type_opts.head),
> + .desc = {
> + {
> + .name = GLUSTER_OPT_TYPE,
> + .type = QEMU_OPT_STRING,
> + .help = "tcp|unix",
> + },
> + { /* end of list */ }
> + },
> +};
> +
> +static QemuOptsList runtime_unix_opts = {
> + .name = "gluster_unix",
> + .head = QTAILQ_HEAD_INITIALIZER(runtime_unix_opts.head),
> + .desc = {
> + {
> + .name = GLUSTER_OPT_SOCKET,
> + .type = QEMU_OPT_STRING,
> + .help = "socket file path)",
> + },
> + { /* end of list */ }
> + },
> +};
> +
> +static QemuOptsList runtime_tcp_opts = {
> + .name = "gluster_tcp",
> + .head = QTAILQ_HEAD_INITIALIZER(runtime_tcp_opts.head),
> + .desc = {
> + {
> + .name = GLUSTER_OPT_TYPE,
> + .type = QEMU_OPT_STRING,
> + .help = "tcp|unix",
> + },
> + {
> + .name = GLUSTER_OPT_HOST,
> + .type = QEMU_OPT_STRING,
> + .help = "host address (hostname/ipv4/ipv6 addresses)",
> + },
> + {
> + .name = GLUSTER_OPT_PORT,
> + .type = QEMU_OPT_NUMBER,
> + .help = "port number on which glusterd is listening (default 24007)",
> + },
> + {
> + .name = "to",
> + .type = QEMU_OPT_NUMBER,
> + .help = "max port number, not supported by gluster",
> + },
> + {
> + .name = "ipv4",
> + .type = QEMU_OPT_BOOL,
> + .help = "ipv4 bool value, not supported by gluster",
> + },
> + {
> + .name = "ipv6",
> + .type = QEMU_OPT_BOOL,
> + .help = "ipv6 bool value, not supported by gluster",
> + },
> + { /* end of list */ }
> + },
> +};
>
> static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
> {
> @@ -155,7 +253,8 @@ static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
> return -EINVAL;
> }
>
> - gconf->server = gsconf = g_new0(GlusterServer, 1);
> + gconf->server = g_new0(GlusterServerList, 1);
> + gconf->server->value = gsconf = g_new0(GlusterServer, 1);
>
> /* transport */
> if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
> @@ -212,39 +311,34 @@ out:
> return ret;
> }
>
> -static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
> - const char *filename, Error **errp)
> +static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
> + Error **errp)
> {
> - struct glfs *glfs = NULL;
> + struct glfs *glfs;
> int ret;
> int old_errno;
> -
> - ret = qemu_gluster_parse_uri(gconf, filename);
> - if (ret < 0) {
> - error_setg(errp, "Invalid URI");
> - error_append_hint(errp, "Usage: file=gluster[+transport]://"
> - "[host[:port]]/volume/path[?socket=...]\n");
> - errno = -ret;
> - goto out;
> - }
> + GlusterServerList *server;
>
> glfs = glfs_new(gconf->volume);
> if (!glfs) {
> goto out;
> }
>
> - if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> - ret = glfs_set_volfile_server(glfs,
> - GlusterTransport_lookup[gconf->server->type],
> - gconf->server->u.q_unix.path, 0);
> - } else {
> - ret = glfs_set_volfile_server(glfs,
> - GlusterTransport_lookup[gconf->server->type],
> - gconf->server->u.tcp.host,
> - atoi(gconf->server->u.tcp.port));
> - }
> - if (ret < 0) {
> - goto out;
> + for (server = gconf->server; server; server = server->next) {
> + if (server->value->type == GLUSTER_TRANSPORT_UNIX) {
> + ret = glfs_set_volfile_server(glfs,
> + GlusterTransport_lookup[server->value->type],
> + server->value->u.q_unix.path, 0);
> + } else {
> + ret = glfs_set_volfile_server(glfs,
> + GlusterTransport_lookup[server->value->type],
> + server->value->u.tcp.host,
> + atoi(server->value->u.tcp.port));
> + }
> +
> + if (ret < 0) {
> + goto out;
> + }
> }
>
> ret = glfs_set_logging(glfs, "-", gconf->debug_level);
> @@ -254,18 +348,21 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>
> ret = glfs_init(glfs);
> if (ret) {
> - if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> - error_setg(errp,
> - "Gluster connection for volume %s, path %s failed on "
> - "socket %s ", gconf->volume, gconf->path,
> - gconf->server->u.q_unix.path);
> - } else {
> - error_setg(errp,
> - "Gluster connection for volume %s, path %s failed on "
> - "host %s and port %s ", gconf->volume, gconf->path,
> - gconf->server->u.tcp.host, gconf->server->u.tcp.port);
> + error_setg(errp, "Gluster connection for volume %s, path %s failed"
> + " to connect", gconf->volume, gconf->path);
> + for (server = gconf->server; server; server = server->next) {
> + if (server->value->type == GLUSTER_TRANSPORT_UNIX) {
> + error_append_hint(errp, "hint: failed on socket %s ",
> + server->value->u.q_unix.path);
> + } else {
> + error_append_hint(errp, "hint: failed on host %s and port %s ",
> + server->value->u.tcp.host,
> + server->value->u.tcp.port);
> + }
> }
>
> + error_append_hint(errp, "Please refer to gluster logs for more info\n");
> +
> /* glfs_init sometimes doesn't set errno although docs suggest that */
> if (errno == 0) {
> errno = EINVAL;
> @@ -284,6 +381,226 @@ out:
> return NULL;
> }
>
> +static int qapi_enum_parse(const char *opt)
> +{
> + int i;
> +
> + if (!opt) {
> + return GLUSTER_TRANSPORT__MAX;
> + }
> +
> + for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
> + if (!strcmp(opt, GlusterTransport_lookup[i])) {
> + return i;
> + }
> + }
> +
> + return i;
> +}
> +
> +/*
> + * Convert the json formatted command line into qapi.
> +*/
> +static int qemu_gluster_parse_json(BlockdevOptionsGluster *gconf,
> + QDict *options, Error **errp)
> +{
> + QemuOpts *opts;
> + GlusterServer *gsconf;
> + GlusterServerList *curr = NULL;
> + QDict *backing_options = NULL;
> + Error *local_err = NULL;
> + char *str = NULL;
> + const char *ptr;
> + size_t num_servers;
> + int i;
> +
> + /* create opts info from runtime_json_opts list */
> + opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, options, &local_err);
> + if (local_err) {
> + goto out;
> + }
> +
> + num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
> + if (num_servers < 1) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, "server");
> + goto out;
> + }
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_VOLUME);
> + goto out;
> + }
> + gconf->volume = g_strdup(ptr);
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_PATH);
> + goto out;
> + }
> + gconf->path = g_strdup(ptr);
> + qemu_opts_del(opts);
> +
> + for (i = 0; i < num_servers; i++) {
> + str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
> + qdict_extract_subqdict(options, &backing_options, str);
> +
> + /* create opts info from runtime_type_opts list */
> + opts = qemu_opts_create(&runtime_type_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> + if (local_err) {
> + goto out;
> + }
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_TYPE);
> + gsconf = g_new0(GlusterServer, 1);
> + gsconf->type = qapi_enum_parse(ptr);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_TYPE);
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
> + goto out;
> +
> + }
> + if (gsconf->type == GLUSTER_TRANSPORT__MAX) {
> + error_setg(&local_err, QERR_INVALID_PARAMETER_VALUE,
> + GLUSTER_OPT_TYPE, "tcp or unix");
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
> + goto out;
> + }
> + qemu_opts_del(opts);
> +
> + if (gsconf->type == GLUSTER_TRANSPORT_TCP) {
> + /* create opts info from runtime_tcp_opts list */
> + opts = qemu_opts_create(&runtime_tcp_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> + if (local_err) {
> + goto out;
> + }
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER,
> + GLUSTER_OPT_HOST);
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
> + goto out;
> + }
> + gsconf->u.tcp.host = g_strdup(ptr);
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_PORT);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER,
> + GLUSTER_OPT_PORT);
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
> + goto out;
> + }
> + gsconf->u.tcp.port = g_strdup(ptr);
> +
> + /* defend for unsupported fields in InetSocketAddress,
> + * i.e. @ipv4, @ipv6 and @to
> + */
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_TO);
> + if (ptr) {
> + gsconf->u.tcp.has_to = true;
> + }
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_IPV4);
> + if (ptr) {
> + gsconf->u.tcp.has_ipv4 = true;
> + }
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_IPV6);
> + if (ptr) {
> + gsconf->u.tcp.has_ipv6 = true;
> + }
> + if (gsconf->u.tcp.has_to) {
> + error_setg(&local_err, "Parameter 'to' not supported");
> + goto out;
> + }
> + if (gsconf->u.tcp.has_ipv4 || gsconf->u.tcp.has_ipv6) {
> + error_setg(&local_err, "Parameters 'ipv4/ipv6' not supported");
> + goto out;
> + }
> + qemu_opts_del(opts);
> + } else {
> + /* create opts info from runtime_unix_opts list */
> + opts = qemu_opts_create(&runtime_unix_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> + if (local_err) {
> + goto out;
> + }
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_SOCKET);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER,
> + GLUSTER_OPT_SOCKET);
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
> + goto out;
> + }
> + gsconf->u.q_unix.path = g_strdup(ptr);
> + qemu_opts_del(opts);
> + }
> +
> + if (gconf->server == NULL) {
> + gconf->server = g_new0(GlusterServerList, 1);
> + gconf->server->value = gsconf;
> + curr = gconf->server;
> + } else {
> + curr->next = g_new0(GlusterServerList, 1);
> + curr->next->value = gsconf;
> + curr = curr->next;
> + }
> +
> + qdict_del(backing_options, str);
> + g_free(str);
> + str = NULL;
> + }
> +
> + return 0;
> +
> +out:
> + error_propagate(errp, local_err);
> + qemu_opts_del(opts);
> + if (str) {
> + qdict_del(backing_options, str);
> + g_free(str);
> + }
> + errno = EINVAL;
> + return -errno;
> +}
> +
> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
> + const char *filename,
> + QDict *options, Error **errp)
> +{
> + int ret;
> + if (filename) {
> + ret = qemu_gluster_parse_uri(gconf, filename);
> + if (ret < 0) {
> + error_setg(errp, "invalid URI");
> + error_append_hint(errp, "Usage: file=gluster[+transport]://"
> + "[host[:port]]/volume/path[?socket=...]\n");
> + errno = -ret;
> + return NULL;
> + }
> + } else {
> + ret = qemu_gluster_parse_json(gconf, options, errp);
> + if (ret < 0) {
> + error_append_hint(errp, "Usage: "
> + "-drive driver=qcow2,file.driver=gluster,"
> + "file.volume=testvol,file.path=/path/a.qcow2"
> + "[,file.debug=9],file.server.0.type=tcp,"
> + "file.server.0.host=1.2.3.4,"
> + "file.server.0.port=24007,"
> + "file.server.1.transport=unix,"
> + "file.server.1.socket=/var/run/glusterd.socket ..."
> + "\n");
> + errno = -ret;
> + return NULL;
> + }
> +
> + }
> +
> + return qemu_gluster_glfs_init(gconf, errp);
> +}
If @filename is non-null, this function doesn't touch @options. Perhaps
the function should be split into one that takes just @filename and one
that takes just @options. If yes, followup patch.
> +
> static void qemu_gluster_complete_aio(void *opaque)
> {
> GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
> @@ -383,7 +700,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
> gconf = g_new0(BlockdevOptionsGluster, 1);
> gconf->debug_level = s->debug_level;
> gconf->has_debug_level = true;
> - s->glfs = qemu_gluster_init(gconf, filename, errp);
> + s->glfs = qemu_gluster_init(gconf, filename, options, errp);
> if (!s->glfs) {
> ret = -errno;
> goto out;
> @@ -454,7 +771,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
> gconf = g_new0(BlockdevOptionsGluster, 1);
> gconf->debug_level = s->debug_level;
> gconf->has_debug_level = true;
> - reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
> + reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, NULL, errp);
> if (reop_s->glfs == NULL) {
> ret = -errno;
> goto exit;
> @@ -601,7 +918,7 @@ static int qemu_gluster_create(const char *filename,
> }
> gconf->has_debug_level = true;
>
> - glfs = qemu_gluster_init(gconf, filename, errp);
> + glfs = qemu_gluster_init(gconf, filename, NULL, errp);
> if (!glfs) {
> ret = -errno;
> goto out;
> @@ -981,7 +1298,7 @@ static BlockDriver bdrv_gluster = {
> .format_name = "gluster",
> .protocol_name = "gluster",
> .instance_size = sizeof(BDRVGlusterState),
> - .bdrv_needs_filename = true,
> + .bdrv_needs_filename = false,
> .bdrv_file_open = qemu_gluster_open,
> .bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
> .bdrv_reopen_commit = qemu_gluster_reopen_commit,
> @@ -1009,7 +1326,7 @@ static BlockDriver bdrv_gluster_tcp = {
> .format_name = "gluster",
> .protocol_name = "gluster+tcp",
> .instance_size = sizeof(BDRVGlusterState),
> - .bdrv_needs_filename = true,
> + .bdrv_needs_filename = false,
> .bdrv_file_open = qemu_gluster_open,
> .bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
> .bdrv_reopen_commit = qemu_gluster_reopen_commit,
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 1fa0674..5f8179b 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -2111,7 +2111,7 @@
> { 'struct': 'BlockdevOptionsGluster',
> 'data': { 'volume': 'str',
> 'path': 'str',
> - 'server': 'GlusterServer',
> + 'server': ['GlusterServer'],
> '*debug_level': 'int' } }
>
> ##
Reviewed-by: Markus Armbruster <armbru@redhat.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 3/5] block/gluster: deprecate rdma support
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 3/5] block/gluster: deprecate rdma support Prasanna Kumar Kalever
2016-07-19 17:17 ` Markus Armbruster
@ 2016-07-19 18:09 ` Eric Blake
2016-07-19 18:46 ` Jeff Cody
2 siblings, 0 replies; 20+ messages in thread
From: Eric Blake @ 2016-07-19 18:09 UTC (permalink / raw)
To: Prasanna Kumar Kalever, qemu-devel
Cc: armbru, kwolf, pkrempa, jcody, rtalur, vbellur
[-- Attachment #1: Type: text/plain, Size: 3106 bytes --]
On 07/19/2016 10:57 AM, Prasanna Kumar Kalever wrote:
Grammar suggestions, can be squashed in by the maintainer (in addition
to what Markus pointed out):
> gluster volfile server fetch happens through unix and/or tcp, it doesn't
> support volfile fetch over rdma, the rdma code may actually mislead,
s/rdma, the/rdma. The/
> to make sure things do not break, for now we fallback to tcp when requested
s/to make/so to make/
> for rdma with a warning.
s/rdma with/rdma, with/
>
> If you are wondering how this worked all these days, its the gluster libgfapi
> code which handles anything other than unix transport as socket/tcp, sad but
> true.
>
> Also gluster doesn't support ipv6 addresses, removing the ipv6 related
> comments/docs section
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
> block/gluster.c | 20 ++++++++++++--------
> 1 file changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/block/gluster.c b/block/gluster.c
> index 40ee852..8a54ad4 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -12,6 +12,7 @@
> #include "block/block_int.h"
> #include "qapi/error.h"
> #include "qemu/uri.h"
> +#include "qemu/error-report.h"
>
> #define GLUSTER_OPT_FILENAME "filename"
> #define GLUSTER_OPT_DEBUG "debug"
> @@ -134,12 +135,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
> *
> * 'transport' specifies the transport type used to connect to gluster
> * management daemon (glusterd). Valid transport types are
> - * tcp, unix and rdma. If a transport type isn't specified, then tcp
> - * type is assumed.
> + * tcp, unix. If a transport type isn't specified, then tcp type is assumed.
s/tcp, unix/tcp or unix/
> *
> * 'host' specifies the host where the volume file specification for
> - * the given volume resides. This can be either hostname, ipv4 address
> - * or ipv6 address. ipv6 address needs to be within square brackets [ ].
> + * the given volume resides. This can be either hostname, ipv4 address.
s/either hostname, ipv4 address/either a hostname or an ipv4 address/
> @@ -1048,6 +1046,12 @@ static BlockDriver bdrv_gluster_unix = {
> .create_opts = &qemu_gluster_create_opts,
> };
>
> +/* rdma is deprecated (actually never supported for volfile fetch)
> + * lets maintain for the protocol compatibility, to make sure things
s/fetch) lets maintain for/fetch). Let's maintain it for/
> + * won't break immediately for now gluster+rdma will fall back to gluster+tcp
s/immediately for now gluster+rdma/immediately. For now, gluster+rdma/
> + * protocol with Warning
s/Warning/a warning/
> + * TODO: remove gluster+rdma interface support
> + */
> static BlockDriver bdrv_gluster_rdma = {
> .format_name = "gluster",
> .protocol_name = "gluster+rdma",
>
Reviewed-by: Eric Blake <eblake@redhat.com>
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library http://libvirt.org
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 604 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2016-07-19 17:55 ` Markus Armbruster
@ 2016-07-19 18:33 ` Markus Armbruster
2016-07-19 19:01 ` Prasanna Kalever
2016-07-19 20:46 ` Jeff Cody
2016-07-19 22:32 ` Eric Blake
3 siblings, 1 reply; 20+ messages in thread
From: Markus Armbruster @ 2016-07-19 18:33 UTC (permalink / raw)
To: Prasanna Kumar Kalever; +Cc: qemu-devel, kwolf, pkrempa, vbellur, jcody, rtalur
One more...
Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.
>
> Problem:
>
> Currently VM Image on gluster volume is specified like this:
>
> file=gluster[+tcp]://host[:port]/testvol/a.img
>
> Say we have three hosts in a trusted pool with replica 3 volume in action.
> When the host mentioned in the command above goes down for some reason,
> the other two hosts are still available. But there's currently no way
> to tell QEMU about them.
>
> Solution:
>
> New way of specifying VM Image on gluster volume with volfile servers:
> (We still support old syntax to maintain backward compatibility)
>
> Basic command line syntax looks like:
>
> Pattern I:
> -drive driver=gluster,
> volume=testvol,path=/path/a.raw,[debug=N,]
> server.0.type=tcp,
> server.0.host=1.2.3.4,
> server.0.port=24007,
> server.1.type=unix,
> server.1.socket=/path/socketfile
>
> Pattern II:
> 'json:{"driver":"qcow2","file":{"driver":"gluster",
> "volume":"testvol","path":"/path/a.qcow2",["debug":N,]
> "server":[{hostinfo_1}, ...{hostinfo_N}]}}'
>
> driver => 'gluster' (protocol name)
> volume => name of gluster volume where our VM image resides
> path => absolute path of image in gluster volume
> [debug] => libgfapi loglevel [(0 - 9) default 4 -> Error]
>
> {hostinfo} => {{type:"tcp",host:"1.2.3.4"[,port=24007]},
> {type:"unix",socket:"/path/sockfile"}}
>
> type => transport type used to connect to gluster management daemon,
> it can be tcp|unix
> host => host address (hostname/ipv4/ipv6 addresses/socket path)
> port => port number on which glusterd is listening.
> socket => path to socket file
>
> Examples:
> 1.
> -drive driver=qcow2,file.driver=gluster,
> file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
> file.server.0.type=tcp,
> file.server.0.host=1.2.3.4,
> file.server.0.port=24007,
> file.server.1.type=tcp,
> file.server.1.socket=/var/run/glusterd.socket
> 2.
> 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
> "path":"/path/a.qcow2","debug":9,"server":
> [{type:"tcp",host:"1.2.3.4",port=24007},
> {type:"unix",socket:"/var/run/glusterd.socket"}] } }'
This example is 1. confusing, and 2. wrong :)
It's wrong, because several member names lack quotes. Also, the value
of port should be a string.
It confused me, because I didn't realize that this is the non-option
image argument. Two ways to fix that. One, add context:
$ qemu-system-x86_64 'json:{"file":{"driver":"gluster","volume":"sample","path":"/fedora23.qcow2","server":[{"type":"tcp","host":"192.168.1.220","port":"24007"},{"type":"unix","socket":"/var/run/glusterd.socket"}]},"driver":"qcow2"}'
Two, use -drive:
-drive 'file=json:{"file":{"driver":"gluster",,"volume":"sample",,"path":"/fedora23.qcow2",,"server":[{"type":"tcp",,"host":"192.168.1.220",,"port":"24007"},,{"type":"unix",,"socket":"/var/run/glusterd.socket"}]},,"driver":"qcow2"}'
Exquisitely ugly due to the necessary comma escaping.
Hopefully, the maintainer can touch this up on commit.
> This patch gives a mechanism to provide all the server addresses, which are in
> replica set, so in case host1 is down VM can still boot from any of the
> active hosts.
>
> This is equivalent to the backup-volfile-servers option supported by
> mount.glusterfs (FUSE way of mounting gluster volume)
>
> credits: sincere thanks to all the supporters
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
R-by stands.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema Prasanna Kumar Kalever
2016-07-19 17:30 ` Markus Armbruster
2016-07-19 17:37 ` Markus Armbruster
@ 2016-07-19 18:38 ` Eric Blake
2016-07-19 20:39 ` Jeff Cody
3 siblings, 0 replies; 20+ messages in thread
From: Eric Blake @ 2016-07-19 18:38 UTC (permalink / raw)
To: Prasanna Kumar Kalever, qemu-devel
Cc: armbru, kwolf, pkrempa, jcody, rtalur, vbellur
[-- Attachment #1: Type: text/plain, Size: 5007 bytes --]
On 07/19/2016 10:57 AM, Prasanna Kumar Kalever wrote:
> this patch adds 'GlusterServer' related schema in qapi/block-core.json
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
> block/gluster.c | 115 +++++++++++++++++++++++++++++----------------------
> qapi/block-core.json | 68 +++++++++++++++++++++++++++---
> 2 files changed, 128 insertions(+), 55 deletions(-)
>
> @@ -256,15 +254,22 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>
> ret = glfs_init(glfs);
> if (ret) {
> - error_setg_errno(errp, errno,
> - "Gluster connection failed for host=%s port=%d "
> - "volume=%s path=%s transport=%s", gconf->host,
> - gconf->port, gconf->volume, gconf->path,
> - gconf->transport);
> + if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> + error_setg(errp,
> + "Gluster connection for volume %s, path %s failed on "
> + "socket %s ", gconf->volume, gconf->path,
> + gconf->server->u.q_unix.path);
> + } else {
> + error_setg(errp,
> + "Gluster connection for volume %s, path %s failed on "
> + "host %s and port %s ", gconf->volume, gconf->path,
> + gconf->server->u.tcp.host, gconf->server->u.tcp.port);
> + }
You are throwing away the errno information that used to be reported
here. Is that intentional?
>
> /* glfs_init sometimes doesn't set errno although docs suggest that */
> - if (errno == 0)
> + if (errno == 0) {
> errno = EINVAL;
> + }
Pre-existing, but this is too late for messing with errno. You are not
guaranteed that errno is unchanged by error_setg_errno(). You aren't
even guaranteed that errno == 0 after glfs_init() if it wasn't 0 before
entry. You probably want a separate patch that reorders this to:
errno = 0;
ret = glfs_init(glfs);
if (ret) {
if (!errno) {
errno = EINVAL;
}
error_setg...
> +++ b/qapi/block-core.json
> @@ -1658,13 +1658,14 @@
> # @host_device, @host_cdrom: Since 2.1
> #
> # Since: 2.0
> +# @gluster: Since 2.7
I would stick this line a bit earlier:
# @host_device, @host_cdrom: Since 2.1
# @gluster: Since 2.7
#
# Since: 2.0
> @@ -2057,6 +2058,63 @@
> '*read-pattern': 'QuorumReadPattern' } }
>
> ##
> +# @GlusterTransport
> +#
> +# An enumeration of Gluster transport type
Maybe s/type/types/
> +#
> +# @tcp: TCP - Transmission Control Protocol
> +#
> +# @unix: UNIX - Unix domain socket
> +#
> +# Since: 2.7
> +##
> +{ 'enum': 'GlusterTransport',
> + 'data': [ 'unix', 'tcp' ] }
> +
> +
> +##
> +# @GlusterServer
> +#
> +# Captures the address of a socket
> +#
> +# Details for connecting to a gluster server
> +#
> +# @type: Transport type used for gluster connection
> +#
> +# @unix: socket file
> +#
> +# @tcp: host address and port number
> +#
> +# Since: 2.7
> +##
> +{ 'union': 'GlusterServer',
> + 'base': { 'type': 'GlusterTransport' },
> + 'discriminator': 'type',
> + 'data': { 'unix': 'UnixSocketAddress',
> + 'tcp': 'InetSocketAddress' } }
> +
> +##
> +# @BlockdevOptionsGluster
> +#
> +# Driver specific block device options for Gluster
> +#
> +# @volume: name of gluster volume where VM image resides
> +#
> +# @path: absolute path to image file in gluster volume
> +#
> +# @server: gluster server description
> +#
> +# @debug_level: #optional libgfapi log level (default '4' which is Error)
s/debug_level/debug-level/ - all new QAPI interfaces should prefer '-'
over '_' (the generated C code is the same, though)
> +#
> +# Since: 2.7
> +##
> +{ 'struct': 'BlockdevOptionsGluster',
> + 'data': { 'volume': 'str',
> + 'path': 'str',
> + 'server': 'GlusterServer',
> + '*debug_level': 'int' } }
> +
> +##
> # @BlockdevOptions
> #
> # Options for creating a block device. Many options are available for all
> @@ -2119,7 +2177,7 @@
> 'file': 'BlockdevOptionsFile',
> 'ftp': 'BlockdevOptionsFile',
> 'ftps': 'BlockdevOptionsFile',
> -# TODO gluster: Wait for structured options
> + 'gluster': 'BlockdevOptionsGluster',
> 'host_cdrom': 'BlockdevOptionsFile',
> 'host_device':'BlockdevOptionsFile',
> 'http': 'BlockdevOptionsFile',
>
Between my findings and Markus', it's up to the maintainer whether to
take this as-is and then fix up the problems with a followup patch, or
whether we need a v21.
Reviewed-by: Eric Blake <eblake@redhat.com>
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library http://libvirt.org
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 604 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 3/5] block/gluster: deprecate rdma support
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 3/5] block/gluster: deprecate rdma support Prasanna Kumar Kalever
2016-07-19 17:17 ` Markus Armbruster
2016-07-19 18:09 ` Eric Blake
@ 2016-07-19 18:46 ` Jeff Cody
2 siblings, 0 replies; 20+ messages in thread
From: Jeff Cody @ 2016-07-19 18:46 UTC (permalink / raw)
To: Prasanna Kumar Kalever
Cc: qemu-devel, kwolf, pkrempa, vbellur, armbru, rtalur
On Tue, Jul 19, 2016 at 10:27:31PM +0530, Prasanna Kumar Kalever wrote:
> gluster volfile server fetch happens through unix and/or tcp, it doesn't
> support volfile fetch over rdma, the rdma code may actually mislead,
> to make sure things do not break, for now we fallback to tcp when requested
> for rdma with a warning.
>
> If you are wondering how this worked all these days, its the gluster libgfapi
> code which handles anything other than unix transport as socket/tcp, sad but
> true.
>
> Also gluster doesn't support ipv6 addresses, removing the ipv6 related
> comments/docs section
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
> block/gluster.c | 20 ++++++++++++--------
> 1 file changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/block/gluster.c b/block/gluster.c
> index 40ee852..8a54ad4 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -12,6 +12,7 @@
> #include "block/block_int.h"
> #include "qapi/error.h"
> #include "qemu/uri.h"
> +#include "qemu/error-report.h"
>
> #define GLUSTER_OPT_FILENAME "filename"
> #define GLUSTER_OPT_DEBUG "debug"
> @@ -134,12 +135,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
> *
> * 'transport' specifies the transport type used to connect to gluster
> * management daemon (glusterd). Valid transport types are
> - * tcp, unix and rdma. If a transport type isn't specified, then tcp
> - * type is assumed.
> + * tcp, unix. If a transport type isn't specified, then tcp type is assumed.
> *
> * 'host' specifies the host where the volume file specification for
> - * the given volume resides. This can be either hostname, ipv4 address
> - * or ipv6 address. ipv6 address needs to be within square brackets [ ].
> + * the given volume resides. This can be either hostname, ipv4 address.
> * If transport type is 'unix', then 'host' field should not be specified.
> * The 'socket' field needs to be populated with the path to unix domain
> * socket.
> @@ -158,11 +157,8 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
> * file=gluster://1.2.3.4/testvol/a.img
> * file=gluster+tcp://1.2.3.4/testvol/a.img
> * file=gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
> - * file=gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
> - * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
> * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
> * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
> - * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
> */
> static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> {
> @@ -185,7 +181,9 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> gconf->transport = g_strdup("unix");
> is_unix = true;
> } else if (!strcmp(uri->scheme, "gluster+rdma")) {
> - gconf->transport = g_strdup("rdma");
> + gconf->transport = g_strdup("tcp");
> + error_report("Warning: rdma feature is not supported falling "
> + "back to tcp");
> } else {
> ret = -EINVAL;
> goto out;
> @@ -1048,6 +1046,12 @@ static BlockDriver bdrv_gluster_unix = {
> .create_opts = &qemu_gluster_create_opts,
> };
>
> +/* rdma is deprecated (actually never supported for volfile fetch)
> + * lets maintain for the protocol compatibility, to make sure things
> + * won't break immediately for now gluster+rdma will fall back to gluster+tcp
> + * protocol with Warning
> + * TODO: remove gluster+rdma interface support
> + */
> static BlockDriver bdrv_gluster_rdma = {
> .format_name = "gluster",
> .protocol_name = "gluster+rdma",
> --
> 2.7.4
>
>
Modulo Markus's and Eric's comments:
Reviewed-by: Jeff Cody <jcody@redhat.com>
(I'll make the comment changes on commit)
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers
2016-07-19 18:33 ` Markus Armbruster
@ 2016-07-19 19:01 ` Prasanna Kalever
0 siblings, 0 replies; 20+ messages in thread
From: Prasanna Kalever @ 2016-07-19 19:01 UTC (permalink / raw)
To: Markus Armbruster
Cc: Prasanna Kumar Kalever, qemu-devel, Kevin Wolf, Peter Krempa,
Vijay Bellur, Jeffrey Cody, Raghavendra Talur
On Wed, Jul 20, 2016 at 12:03 AM, Markus Armbruster <armbru@redhat.com> wrote:
>
> One more...
>
> Prasanna Kumar Kalever <prasanna.kalever@redhat.com> writes:
>
> > This patch adds a way to specify multiple volfile servers to the gluster
> > block backend of QEMU with tcp|rdma transport types and their port numbers.
> >
> > Problem:
> >
> > Currently VM Image on gluster volume is specified like this:
> >
> > file=gluster[+tcp]://host[:port]/testvol/a.img
> >
> > Say we have three hosts in a trusted pool with replica 3 volume in action.
> > When the host mentioned in the command above goes down for some reason,
> > the other two hosts are still available. But there's currently no way
> > to tell QEMU about them.
> >
> > Solution:
> >
> > New way of specifying VM Image on gluster volume with volfile servers:
> > (We still support old syntax to maintain backward compatibility)
> >
> > Basic command line syntax looks like:
> >
> > Pattern I:
> > -drive driver=gluster,
> > volume=testvol,path=/path/a.raw,[debug=N,]
> > server.0.type=tcp,
> > server.0.host=1.2.3.4,
> > server.0.port=24007,
> > server.1.type=unix,
> > server.1.socket=/path/socketfile
> >
> > Pattern II:
> > 'json:{"driver":"qcow2","file":{"driver":"gluster",
> > "volume":"testvol","path":"/path/a.qcow2",["debug":N,]
> > "server":[{hostinfo_1}, ...{hostinfo_N}]}}'
> >
> > driver => 'gluster' (protocol name)
> > volume => name of gluster volume where our VM image resides
> > path => absolute path of image in gluster volume
> > [debug] => libgfapi loglevel [(0 - 9) default 4 -> Error]
> >
> > {hostinfo} => {{type:"tcp",host:"1.2.3.4"[,port=24007]},
> > {type:"unix",socket:"/path/sockfile"}}
> >
> > type => transport type used to connect to gluster management daemon,
> > it can be tcp|unix
> > host => host address (hostname/ipv4/ipv6 addresses/socket path)
> > port => port number on which glusterd is listening.
> > socket => path to socket file
> >
> > Examples:
> > 1.
> > -drive driver=qcow2,file.driver=gluster,
> > file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
> > file.server.0.type=tcp,
> > file.server.0.host=1.2.3.4,
> > file.server.0.port=24007,
> > file.server.1.type=tcp,
> > file.server.1.socket=/var/run/glusterd.socket
> > 2.
> > 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
> > "path":"/path/a.qcow2","debug":9,"server":
> > [{type:"tcp",host:"1.2.3.4",port=24007},
> > {type:"unix",socket:"/var/run/glusterd.socket"}] } }'
>
> This example is 1. confusing, and 2. wrong :)
>
> It's wrong, because several member names lack quotes. Also, the value
> of port should be a string.
>
> It confused me, because I didn't realize that this is the non-option
> image argument. Two ways to fix that. One, add context:
>
> $ qemu-system-x86_64 'json:{"file":{"driver":"gluster","volume":"sample","path":"/fedora23.qcow2","server":[{"type":"tcp","host":"192.168.1.220","port":"24007"},{"type":"unix","socket":"/var/run/glusterd.socket"}]},"driver":"qcow2"}'
>
> Two, use -drive:
>
> -drive 'file=json:{"file":{"driver":"gluster",,"volume":"sample",,"path":"/fedora23.qcow2",,"server":[{"type":"tcp",,"host":"192.168.1.220",,"port":"24007"},,{"type":"unix",,"socket":"/var/run/glusterd.socket"}]},,"driver":"qcow2"}'
>
> Exquisitely ugly due to the necessary comma escaping.
or
Examples:
1.
-drive driver=qcow2,file.driver=gluster,
file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
file.server.0.type=tcp,
file.server.0.host=1.2.3.4,
file.server.0.port=24007,
file.server.1.type=unix,
file.server.1.socket=/var/run/glusterd.socket
2.
'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
"path":"/path/a.qcow2","debug":9,"server":
[{"type":"tcp","host":"1.2.3.4","port":"24007"},
{"type":"unix","socket":"/var/run/glusterd.socket"}
]}}'
I have tested this and they are working now after a small alteration
sorry about that
--
Prasanna
>
> Hopefully, the maintainer can touch this up on commit.
>
> > This patch gives a mechanism to provide all the server addresses, which are in
> > replica set, so in case host1 is down VM can still boot from any of the
> > active hosts.
> >
> > This is equivalent to the backup-volfile-servers option supported by
> > mount.glusterfs (FUSE way of mounting gluster volume)
> >
> > credits: sincere thanks to all the supporters
> >
> > Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
>
> R-by stands.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema Prasanna Kumar Kalever
` (2 preceding siblings ...)
2016-07-19 18:38 ` Eric Blake
@ 2016-07-19 20:39 ` Jeff Cody
2016-07-19 21:31 ` Jeff Cody
3 siblings, 1 reply; 20+ messages in thread
From: Jeff Cody @ 2016-07-19 20:39 UTC (permalink / raw)
To: Prasanna Kumar Kalever
Cc: qemu-devel, armbru, kwolf, eblake, pkrempa, rtalur, vbellur
On Tue, Jul 19, 2016 at 10:27:32PM +0530, Prasanna Kumar Kalever wrote:
> this patch adds 'GlusterServer' related schema in qapi/block-core.json
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
> block/gluster.c | 115 +++++++++++++++++++++++++++++----------------------
> qapi/block-core.json | 68 +++++++++++++++++++++++++++---
> 2 files changed, 128 insertions(+), 55 deletions(-)
>
> diff --git a/block/gluster.c b/block/gluster.c
> index 8a54ad4..c4ca59e 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -16,6 +16,7 @@
>
> #define GLUSTER_OPT_FILENAME "filename"
> #define GLUSTER_OPT_DEBUG "debug"
> +#define GLUSTER_DEFAULT_PORT 24007
> #define GLUSTER_DEBUG_DEFAULT 4
> #define GLUSTER_DEBUG_MAX 9
>
> @@ -40,15 +41,6 @@ typedef struct BDRVGlusterReopenState {
> struct glfs_fd *fd;
> } BDRVGlusterReopenState;
>
> -typedef struct GlusterConf {
> - char *host;
> - int port;
> - char *volume;
> - char *path;
> - char *transport;
> - int debug_level;
> -} GlusterConf;
> -
>
> static QemuOptsList qemu_gluster_create_opts = {
> .name = "qemu-gluster-create-opts",
> @@ -92,18 +84,7 @@ static QemuOptsList runtime_opts = {
> };
>
>
> -static void qemu_gluster_gconf_free(GlusterConf *gconf)
> -{
> - if (gconf) {
> - g_free(gconf->host);
> - g_free(gconf->volume);
> - g_free(gconf->path);
> - g_free(gconf->transport);
> - g_free(gconf);
> - }
> -}
> -
> -static int parse_volume_options(GlusterConf *gconf, char *path)
> +static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
> {
> char *p, *q;
>
> @@ -160,8 +141,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
> * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
> * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
> */
> -static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> +static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
> + const char *filename)
> {
> + GlusterServer *gsconf;
> URI *uri;
> QueryParams *qp = NULL;
> bool is_unix = false;
> @@ -172,16 +155,18 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> return -EINVAL;
> }
>
> + gconf->server = gsconf = g_new0(GlusterServer, 1);
> +
> /* transport */
> if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
> - gconf->transport = g_strdup("tcp");
> + gsconf->type = GLUSTER_TRANSPORT_TCP;
> } else if (!strcmp(uri->scheme, "gluster+tcp")) {
> - gconf->transport = g_strdup("tcp");
> + gsconf->type = GLUSTER_TRANSPORT_TCP;
> } else if (!strcmp(uri->scheme, "gluster+unix")) {
> - gconf->transport = g_strdup("unix");
> + gsconf->type = GLUSTER_TRANSPORT_UNIX;
> is_unix = true;
> } else if (!strcmp(uri->scheme, "gluster+rdma")) {
> - gconf->transport = g_strdup("tcp");
> + gsconf->type = GLUSTER_TRANSPORT_TCP;
> error_report("Warning: rdma feature is not supported falling "
> "back to tcp");
> } else {
> @@ -209,10 +194,14 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> ret = -EINVAL;
> goto out;
> }
> - gconf->host = g_strdup(qp->p[0].value);
> + gsconf->u.q_unix.path = g_strdup(qp->p[0].value);
> } else {
> - gconf->host = g_strdup(uri->server ? uri->server : "localhost");
> - gconf->port = uri->port;
> + gsconf->u.tcp.host = g_strdup(uri->server ? uri->server : "localhost");
> + if (uri->port) {
> + gsconf->u.tcp.port = g_strdup_printf("%d", uri->port);
> + } else {
> + gsconf->u.tcp.port = g_strdup_printf("%d", GLUSTER_DEFAULT_PORT);
> + }
> }
>
> out:
> @@ -223,17 +212,18 @@ out:
> return ret;
> }
>
> -static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
> - Error **errp)
> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
> + const char *filename, Error **errp)
> {
> struct glfs *glfs = NULL;
> int ret;
> int old_errno;
>
> - ret = qemu_gluster_parseuri(gconf, filename);
> + ret = qemu_gluster_parse_uri(gconf, filename);
> if (ret < 0) {
> - error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> - "volume/path[?socket=...]");
> + error_setg(errp, "Invalid URI");
> + error_append_hint(errp, "Usage: file=gluster[+transport]://"
> + "[host[:port]]/volume/path[?socket=...]\n");
> errno = -ret;
> goto out;
> }
> @@ -243,8 +233,16 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
> goto out;
> }
>
> - ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
> - gconf->port);
> + if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> + ret = glfs_set_volfile_server(glfs,
> + GlusterTransport_lookup[gconf->server->type],
Formatting issue found by checkpatch; exceeds 80 char. Will fix on commit,
no need for respin.
> + gconf->server->u.q_unix.path, 0);
> + } else {
> + ret = glfs_set_volfile_server(glfs,
> + GlusterTransport_lookup[gconf->server->type],
Formatting issue found by checkpatch; exceeds 80 char. Will fix on commit,
no need for respin.
> + gconf->server->u.tcp.host,
> + atoi(gconf->server->u.tcp.port));
> + }
> if (ret < 0) {
> goto out;
> }
> @@ -256,15 +254,22 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
>
> ret = glfs_init(glfs);
> if (ret) {
> - error_setg_errno(errp, errno,
> - "Gluster connection failed for host=%s port=%d "
> - "volume=%s path=%s transport=%s", gconf->host,
> - gconf->port, gconf->volume, gconf->path,
> - gconf->transport);
> + if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> + error_setg(errp,
> + "Gluster connection for volume %s, path %s failed on "
> + "socket %s ", gconf->volume, gconf->path,
> + gconf->server->u.q_unix.path);
> + } else {
> + error_setg(errp,
> + "Gluster connection for volume %s, path %s failed on "
> + "host %s and port %s ", gconf->volume, gconf->path,
> + gconf->server->u.tcp.host, gconf->server->u.tcp.port);
> + }
>
> /* glfs_init sometimes doesn't set errno although docs suggest that */
> - if (errno == 0)
> + if (errno == 0) {
> errno = EINVAL;
> + }
Eric pointed out the errno risk here; as this is pre-existing, nothing to
hold up this patch (can be fixed in later patch).
>
> goto out;
> }
> @@ -352,7 +357,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
> BDRVGlusterState *s = bs->opaque;
> int open_flags = 0;
> int ret = 0;
> - GlusterConf *gconf = g_new0(GlusterConf, 1);
> + BlockdevOptionsGluster *gconf = NULL;
> QemuOpts *opts;
> Error *local_err = NULL;
> const char *filename;
> @@ -375,7 +380,9 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
> s->debug_level = GLUSTER_DEBUG_MAX;
> }
>
> + gconf = g_new0(BlockdevOptionsGluster, 1);
> gconf->debug_level = s->debug_level;
> + gconf->has_debug_level = true;
> s->glfs = qemu_gluster_init(gconf, filename, errp);
> if (!s->glfs) {
> ret = -errno;
> @@ -410,7 +417,9 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
>
> out:
> qemu_opts_del(opts);
> - qemu_gluster_gconf_free(gconf);
> + if (gconf) {
> + qapi_free_BlockdevOptionsGluster(gconf);
> + }
Per Markus's suggestion, will remove conditional on commit.
> if (!ret) {
> return ret;
> }
> @@ -429,7 +438,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
> int ret = 0;
> BDRVGlusterState *s;
> BDRVGlusterReopenState *reop_s;
> - GlusterConf *gconf = NULL;
> + BlockdevOptionsGluster *gconf;
> int open_flags = 0;
>
> assert(state != NULL);
> @@ -442,9 +451,9 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>
> qemu_gluster_parse_flags(state->flags, &open_flags);
>
> - gconf = g_new0(GlusterConf, 1);
> -
> + gconf = g_new0(BlockdevOptionsGluster, 1);
> gconf->debug_level = s->debug_level;
> + gconf->has_debug_level = true;
> reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
> if (reop_s->glfs == NULL) {
> ret = -errno;
> @@ -470,7 +479,9 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
>
> exit:
> /* state->opaque will be freed in either the _abort or _commit */
> - qemu_gluster_gconf_free(gconf);
> + if (gconf) {
> + qapi_free_BlockdevOptionsGluster(gconf);
> + }
Here too.
> return ret;
> }
>
> @@ -572,14 +583,15 @@ static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
> static int qemu_gluster_create(const char *filename,
> QemuOpts *opts, Error **errp)
> {
> + BlockdevOptionsGluster *gconf;
> struct glfs *glfs;
> struct glfs_fd *fd;
> int ret = 0;
> int prealloc = 0;
> int64_t total_size = 0;
> char *tmp = NULL;
> - GlusterConf *gconf = g_new0(GlusterConf, 1);
>
> + gconf = g_new0(BlockdevOptionsGluster, 1);
> gconf->debug_level = qemu_opt_get_number_del(opts, GLUSTER_OPT_DEBUG,
> GLUSTER_DEBUG_DEFAULT);
> if (gconf->debug_level < 0) {
> @@ -587,6 +599,7 @@ static int qemu_gluster_create(const char *filename,
> } else if (gconf->debug_level > GLUSTER_DEBUG_MAX) {
> gconf->debug_level = GLUSTER_DEBUG_MAX;
> }
> + gconf->has_debug_level = true;
>
> glfs = qemu_gluster_init(gconf, filename, errp);
> if (!glfs) {
> @@ -628,7 +641,9 @@ static int qemu_gluster_create(const char *filename,
> }
> out:
> g_free(tmp);
> - qemu_gluster_gconf_free(gconf);
> + if (gconf) {
> + qapi_free_BlockdevOptionsGluster(gconf);
> + }
> if (glfs) {
> glfs_fini(glfs);
> }
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index a7fdb28..1fa0674 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1658,13 +1658,14 @@
> # @host_device, @host_cdrom: Since 2.1
> #
> # Since: 2.0
> +# @gluster: Since 2.7
> ##
> { 'enum': 'BlockdevDriver',
> 'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
> - 'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
> - 'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
> - 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
> - 'vmdk', 'vpc', 'vvfat' ] }
> + 'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
> + 'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
> + 'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
> + 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
>
> ##
> # @BlockdevOptionsFile
> @@ -2057,6 +2058,63 @@
> '*read-pattern': 'QuorumReadPattern' } }
>
> ##
> +# @GlusterTransport
> +#
> +# An enumeration of Gluster transport type
> +#
> +# @tcp: TCP - Transmission Control Protocol
> +#
> +# @unix: UNIX - Unix domain socket
> +#
> +# Since: 2.7
> +##
> +{ 'enum': 'GlusterTransport',
> + 'data': [ 'unix', 'tcp' ] }
> +
> +
> +##
> +# @GlusterServer
> +#
> +# Captures the address of a socket
> +#
> +# Details for connecting to a gluster server
> +#
> +# @type: Transport type used for gluster connection
> +#
> +# @unix: socket file
> +#
> +# @tcp: host address and port number
> +#
> +# Since: 2.7
> +##
> +{ 'union': 'GlusterServer',
> + 'base': { 'type': 'GlusterTransport' },
> + 'discriminator': 'type',
> + 'data': { 'unix': 'UnixSocketAddress',
> + 'tcp': 'InetSocketAddress' } }
> +
> +##
> +# @BlockdevOptionsGluster
> +#
> +# Driver specific block device options for Gluster
> +#
> +# @volume: name of gluster volume where VM image resides
> +#
> +# @path: absolute path to image file in gluster volume
> +#
> +# @server: gluster server description
> +#
> +# @debug_level: #optional libgfapi log level (default '4' which is Error)
> +#
Per Eric's suggestion, will change this to debug-level on commit.
> +# Since: 2.7
> +##
> +{ 'struct': 'BlockdevOptionsGluster',
> + 'data': { 'volume': 'str',
> + 'path': 'str',
> + 'server': 'GlusterServer',
> + '*debug_level': 'int' } }
> +
> +##
> # @BlockdevOptions
> #
> # Options for creating a block device. Many options are available for all
> @@ -2119,7 +2177,7 @@
> 'file': 'BlockdevOptionsFile',
> 'ftp': 'BlockdevOptionsFile',
> 'ftps': 'BlockdevOptionsFile',
> -# TODO gluster: Wait for structured options
> + 'gluster': 'BlockdevOptionsGluster',
> 'host_cdrom': 'BlockdevOptionsFile',
> 'host_device':'BlockdevOptionsFile',
> 'http': 'BlockdevOptionsFile',
> --
> 2.7.4
>
After fix-ups:
Reviewed-by: Jeff Cody <jcody@redhat.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2016-07-19 17:55 ` Markus Armbruster
2016-07-19 18:33 ` Markus Armbruster
@ 2016-07-19 20:46 ` Jeff Cody
2016-07-19 22:32 ` Eric Blake
3 siblings, 0 replies; 20+ messages in thread
From: Jeff Cody @ 2016-07-19 20:46 UTC (permalink / raw)
To: Prasanna Kumar Kalever
Cc: qemu-devel, kwolf, pkrempa, vbellur, armbru, rtalur
On Tue, Jul 19, 2016 at 10:27:33PM +0530, Prasanna Kumar Kalever wrote:
> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.
>
> Problem:
>
> Currently VM Image on gluster volume is specified like this:
>
> file=gluster[+tcp]://host[:port]/testvol/a.img
>
> Say we have three hosts in a trusted pool with replica 3 volume in action.
> When the host mentioned in the command above goes down for some reason,
> the other two hosts are still available. But there's currently no way
> to tell QEMU about them.
>
> Solution:
>
> New way of specifying VM Image on gluster volume with volfile servers:
> (We still support old syntax to maintain backward compatibility)
>
> Basic command line syntax looks like:
>
> Pattern I:
> -drive driver=gluster,
> volume=testvol,path=/path/a.raw,[debug=N,]
> server.0.type=tcp,
> server.0.host=1.2.3.4,
> server.0.port=24007,
> server.1.type=unix,
> server.1.socket=/path/socketfile
>
> Pattern II:
> 'json:{"driver":"qcow2","file":{"driver":"gluster",
> "volume":"testvol","path":"/path/a.qcow2",["debug":N,]
> "server":[{hostinfo_1}, ...{hostinfo_N}]}}'
>
> driver => 'gluster' (protocol name)
> volume => name of gluster volume where our VM image resides
> path => absolute path of image in gluster volume
> [debug] => libgfapi loglevel [(0 - 9) default 4 -> Error]
>
> {hostinfo} => {{type:"tcp",host:"1.2.3.4"[,port=24007]},
> {type:"unix",socket:"/path/sockfile"}}
>
> type => transport type used to connect to gluster management daemon,
> it can be tcp|unix
> host => host address (hostname/ipv4/ipv6 addresses/socket path)
> port => port number on which glusterd is listening.
> socket => path to socket file
>
> Examples:
> 1.
> -drive driver=qcow2,file.driver=gluster,
> file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
> file.server.0.type=tcp,
> file.server.0.host=1.2.3.4,
> file.server.0.port=24007,
> file.server.1.type=tcp,
> file.server.1.socket=/var/run/glusterd.socket
> 2.
> 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
> "path":"/path/a.qcow2","debug":9,"server":
> [{type:"tcp",host:"1.2.3.4",port=24007},
> {type:"unix",socket:"/var/run/glusterd.socket"}] } }'
>
I will use your revised wording in the commit, from your follow-up email.
> This patch gives a mechanism to provide all the server addresses, which are in
> replica set, so in case host1 is down VM can still boot from any of the
> active hosts.
>
> This is equivalent to the backup-volfile-servers option supported by
> mount.glusterfs (FUSE way of mounting gluster volume)
>
> credits: sincere thanks to all the supporters
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
> block/gluster.c | 397 +++++++++++++++++++++++++++++++++++++++++++++------
> qapi/block-core.json | 2 +-
> 2 files changed, 358 insertions(+), 41 deletions(-)
>
> diff --git a/block/gluster.c b/block/gluster.c
> index c4ca59e..0524789 100644
> --- a/block/gluster.c
> +++ b/block/gluster.c
> @@ -11,15 +11,27 @@
> #include <glusterfs/api/glfs.h>
> #include "block/block_int.h"
> #include "qapi/error.h"
> +#include "qapi/qmp/qerror.h"
> #include "qemu/uri.h"
> #include "qemu/error-report.h"
>
> #define GLUSTER_OPT_FILENAME "filename"
> +#define GLUSTER_OPT_VOLUME "volume"
> +#define GLUSTER_OPT_PATH "path"
> +#define GLUSTER_OPT_TYPE "type"
> +#define GLUSTER_OPT_SERVER_PATTERN "server."
> +#define GLUSTER_OPT_HOST "host"
> +#define GLUSTER_OPT_PORT "port"
> +#define GLUSTER_OPT_TO "to"
> +#define GLUSTER_OPT_IPV4 "ipv4"
> +#define GLUSTER_OPT_IPV6 "ipv6"
> +#define GLUSTER_OPT_SOCKET "socket"
> #define GLUSTER_OPT_DEBUG "debug"
> #define GLUSTER_DEFAULT_PORT 24007
> #define GLUSTER_DEBUG_DEFAULT 4
> #define GLUSTER_DEBUG_MAX 9
>
> +#define GERR_INDEX_HINT "hint: check in 'server' array index '%d'\n"
>
> typedef struct GlusterAIOCB {
> int64_t size;
> @@ -83,6 +95,92 @@ static QemuOptsList runtime_opts = {
> },
> };
>
> +static QemuOptsList runtime_json_opts = {
> + .name = "gluster_json",
> + .head = QTAILQ_HEAD_INITIALIZER(runtime_json_opts.head),
> + .desc = {
> + {
> + .name = GLUSTER_OPT_VOLUME,
> + .type = QEMU_OPT_STRING,
> + .help = "name of gluster volume where VM image resides",
> + },
> + {
> + .name = GLUSTER_OPT_PATH,
> + .type = QEMU_OPT_STRING,
> + .help = "absolute path to image file in gluster volume",
> + },
> + {
> + .name = GLUSTER_OPT_DEBUG,
> + .type = QEMU_OPT_NUMBER,
> + .help = "Gluster log level, valid range is 0-9",
> + },
> + { /* end of list */ }
> + },
> +};
> +
> +static QemuOptsList runtime_type_opts = {
> + .name = "gluster_type",
> + .head = QTAILQ_HEAD_INITIALIZER(runtime_type_opts.head),
> + .desc = {
> + {
> + .name = GLUSTER_OPT_TYPE,
> + .type = QEMU_OPT_STRING,
> + .help = "tcp|unix",
> + },
> + { /* end of list */ }
> + },
> +};
> +
> +static QemuOptsList runtime_unix_opts = {
> + .name = "gluster_unix",
> + .head = QTAILQ_HEAD_INITIALIZER(runtime_unix_opts.head),
> + .desc = {
> + {
> + .name = GLUSTER_OPT_SOCKET,
> + .type = QEMU_OPT_STRING,
> + .help = "socket file path)",
> + },
> + { /* end of list */ }
> + },
> +};
> +
> +static QemuOptsList runtime_tcp_opts = {
> + .name = "gluster_tcp",
> + .head = QTAILQ_HEAD_INITIALIZER(runtime_tcp_opts.head),
> + .desc = {
> + {
> + .name = GLUSTER_OPT_TYPE,
> + .type = QEMU_OPT_STRING,
> + .help = "tcp|unix",
> + },
> + {
> + .name = GLUSTER_OPT_HOST,
> + .type = QEMU_OPT_STRING,
> + .help = "host address (hostname/ipv4/ipv6 addresses)",
> + },
> + {
> + .name = GLUSTER_OPT_PORT,
> + .type = QEMU_OPT_NUMBER,
> + .help = "port number on which glusterd is listening (default 24007)",
Per checkpatch.pl, exceeds 80 chars; no need for respin, will fix on commit.
> + },
> + {
> + .name = "to",
> + .type = QEMU_OPT_NUMBER,
> + .help = "max port number, not supported by gluster",
> + },
> + {
> + .name = "ipv4",
> + .type = QEMU_OPT_BOOL,
> + .help = "ipv4 bool value, not supported by gluster",
> + },
> + {
> + .name = "ipv6",
> + .type = QEMU_OPT_BOOL,
> + .help = "ipv6 bool value, not supported by gluster",
> + },
> + { /* end of list */ }
> + },
> +};
>
> static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
> {
> @@ -155,7 +253,8 @@ static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
> return -EINVAL;
> }
>
> - gconf->server = gsconf = g_new0(GlusterServer, 1);
> + gconf->server = g_new0(GlusterServerList, 1);
> + gconf->server->value = gsconf = g_new0(GlusterServer, 1);
>
> /* transport */
> if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
> @@ -212,39 +311,34 @@ out:
> return ret;
> }
>
> -static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
> - const char *filename, Error **errp)
> +static struct glfs *qemu_gluster_glfs_init(BlockdevOptionsGluster *gconf,
> + Error **errp)
> {
> - struct glfs *glfs = NULL;
> + struct glfs *glfs;
> int ret;
> int old_errno;
> -
> - ret = qemu_gluster_parse_uri(gconf, filename);
> - if (ret < 0) {
> - error_setg(errp, "Invalid URI");
> - error_append_hint(errp, "Usage: file=gluster[+transport]://"
> - "[host[:port]]/volume/path[?socket=...]\n");
> - errno = -ret;
> - goto out;
> - }
> + GlusterServerList *server;
>
> glfs = glfs_new(gconf->volume);
> if (!glfs) {
> goto out;
> }
>
> - if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> - ret = glfs_set_volfile_server(glfs,
> - GlusterTransport_lookup[gconf->server->type],
> - gconf->server->u.q_unix.path, 0);
> - } else {
> - ret = glfs_set_volfile_server(glfs,
> - GlusterTransport_lookup[gconf->server->type],
> - gconf->server->u.tcp.host,
> - atoi(gconf->server->u.tcp.port));
> - }
> - if (ret < 0) {
> - goto out;
> + for (server = gconf->server; server; server = server->next) {
> + if (server->value->type == GLUSTER_TRANSPORT_UNIX) {
> + ret = glfs_set_volfile_server(glfs,
> + GlusterTransport_lookup[server->value->type],
Per checkpatch.pl, exceeds 80 chars; no need for respin, will fix on commit.
> + server->value->u.q_unix.path, 0);
> + } else {
> + ret = glfs_set_volfile_server(glfs,
> + GlusterTransport_lookup[server->value->type],
Per checkpatch.pl, exceeds 80 chars; no need for respin, will fix on commit.
> + server->value->u.tcp.host,
> + atoi(server->value->u.tcp.port));
> + }
> +
> + if (ret < 0) {
> + goto out;
> + }
> }
>
> ret = glfs_set_logging(glfs, "-", gconf->debug_level);
> @@ -254,18 +348,21 @@ static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
>
> ret = glfs_init(glfs);
> if (ret) {
> - if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> - error_setg(errp,
> - "Gluster connection for volume %s, path %s failed on "
> - "socket %s ", gconf->volume, gconf->path,
> - gconf->server->u.q_unix.path);
> - } else {
> - error_setg(errp,
> - "Gluster connection for volume %s, path %s failed on "
> - "host %s and port %s ", gconf->volume, gconf->path,
> - gconf->server->u.tcp.host, gconf->server->u.tcp.port);
> + error_setg(errp, "Gluster connection for volume %s, path %s failed"
> + " to connect", gconf->volume, gconf->path);
> + for (server = gconf->server; server; server = server->next) {
> + if (server->value->type == GLUSTER_TRANSPORT_UNIX) {
> + error_append_hint(errp, "hint: failed on socket %s ",
> + server->value->u.q_unix.path);
> + } else {
> + error_append_hint(errp, "hint: failed on host %s and port %s ",
> + server->value->u.tcp.host,
> + server->value->u.tcp.port);
> + }
> }
>
> + error_append_hint(errp, "Please refer to gluster logs for more info\n");
> +
> /* glfs_init sometimes doesn't set errno although docs suggest that */
> if (errno == 0) {
> errno = EINVAL;
> @@ -284,6 +381,226 @@ out:
> return NULL;
> }
>
> +static int qapi_enum_parse(const char *opt)
> +{
> + int i;
> +
> + if (!opt) {
> + return GLUSTER_TRANSPORT__MAX;
> + }
> +
> + for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
> + if (!strcmp(opt, GlusterTransport_lookup[i])) {
> + return i;
> + }
> + }
> +
> + return i;
> +}
> +
> +/*
> + * Convert the json formatted command line into qapi.
> +*/
> +static int qemu_gluster_parse_json(BlockdevOptionsGluster *gconf,
> + QDict *options, Error **errp)
> +{
> + QemuOpts *opts;
> + GlusterServer *gsconf;
> + GlusterServerList *curr = NULL;
> + QDict *backing_options = NULL;
> + Error *local_err = NULL;
> + char *str = NULL;
> + const char *ptr;
> + size_t num_servers;
> + int i;
> +
> + /* create opts info from runtime_json_opts list */
> + opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, options, &local_err);
> + if (local_err) {
> + goto out;
> + }
> +
> + num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
> + if (num_servers < 1) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, "server");
> + goto out;
> + }
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_VOLUME);
> + goto out;
> + }
> + gconf->volume = g_strdup(ptr);
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_PATH);
> + goto out;
> + }
> + gconf->path = g_strdup(ptr);
> + qemu_opts_del(opts);
> +
> + for (i = 0; i < num_servers; i++) {
> + str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
> + qdict_extract_subqdict(options, &backing_options, str);
> +
> + /* create opts info from runtime_type_opts list */
> + opts = qemu_opts_create(&runtime_type_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> + if (local_err) {
> + goto out;
> + }
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_TYPE);
> + gsconf = g_new0(GlusterServer, 1);
> + gsconf->type = qapi_enum_parse(ptr);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_TYPE);
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
> + goto out;
> +
> + }
> + if (gsconf->type == GLUSTER_TRANSPORT__MAX) {
> + error_setg(&local_err, QERR_INVALID_PARAMETER_VALUE,
> + GLUSTER_OPT_TYPE, "tcp or unix");
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
> + goto out;
> + }
> + qemu_opts_del(opts);
> +
> + if (gsconf->type == GLUSTER_TRANSPORT_TCP) {
> + /* create opts info from runtime_tcp_opts list */
> + opts = qemu_opts_create(&runtime_tcp_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> + if (local_err) {
> + goto out;
> + }
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_HOST);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER,
> + GLUSTER_OPT_HOST);
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
> + goto out;
> + }
> + gsconf->u.tcp.host = g_strdup(ptr);
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_PORT);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER,
> + GLUSTER_OPT_PORT);
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
> + goto out;
> + }
> + gsconf->u.tcp.port = g_strdup(ptr);
> +
> + /* defend for unsupported fields in InetSocketAddress,
> + * i.e. @ipv4, @ipv6 and @to
> + */
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_TO);
> + if (ptr) {
> + gsconf->u.tcp.has_to = true;
> + }
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_IPV4);
> + if (ptr) {
> + gsconf->u.tcp.has_ipv4 = true;
> + }
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_IPV6);
> + if (ptr) {
> + gsconf->u.tcp.has_ipv6 = true;
> + }
> + if (gsconf->u.tcp.has_to) {
> + error_setg(&local_err, "Parameter 'to' not supported");
> + goto out;
> + }
> + if (gsconf->u.tcp.has_ipv4 || gsconf->u.tcp.has_ipv6) {
> + error_setg(&local_err, "Parameters 'ipv4/ipv6' not supported");
> + goto out;
> + }
> + qemu_opts_del(opts);
> + } else {
> + /* create opts info from runtime_unix_opts list */
> + opts = qemu_opts_create(&runtime_unix_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> + if (local_err) {
> + goto out;
> + }
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_SOCKET);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER,
> + GLUSTER_OPT_SOCKET);
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
> + goto out;
> + }
> + gsconf->u.q_unix.path = g_strdup(ptr);
> + qemu_opts_del(opts);
> + }
> +
> + if (gconf->server == NULL) {
> + gconf->server = g_new0(GlusterServerList, 1);
> + gconf->server->value = gsconf;
> + curr = gconf->server;
> + } else {
> + curr->next = g_new0(GlusterServerList, 1);
> + curr->next->value = gsconf;
> + curr = curr->next;
> + }
> +
> + qdict_del(backing_options, str);
> + g_free(str);
> + str = NULL;
> + }
> +
> + return 0;
> +
> +out:
> + error_propagate(errp, local_err);
> + qemu_opts_del(opts);
> + if (str) {
> + qdict_del(backing_options, str);
> + g_free(str);
> + }
> + errno = EINVAL;
> + return -errno;
> +}
> +
> +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
> + const char *filename,
> + QDict *options, Error **errp)
> +{
> + int ret;
> + if (filename) {
> + ret = qemu_gluster_parse_uri(gconf, filename);
> + if (ret < 0) {
> + error_setg(errp, "invalid URI");
> + error_append_hint(errp, "Usage: file=gluster[+transport]://"
> + "[host[:port]]/volume/path[?socket=...]\n");
> + errno = -ret;
> + return NULL;
> + }
> + } else {
> + ret = qemu_gluster_parse_json(gconf, options, errp);
> + if (ret < 0) {
> + error_append_hint(errp, "Usage: "
> + "-drive driver=qcow2,file.driver=gluster,"
> + "file.volume=testvol,file.path=/path/a.qcow2"
> + "[,file.debug=9],file.server.0.type=tcp,"
> + "file.server.0.host=1.2.3.4,"
> + "file.server.0.port=24007,"
> + "file.server.1.transport=unix,"
> + "file.server.1.socket=/var/run/glusterd.socket ..."
> + "\n");
> + errno = -ret;
> + return NULL;
> + }
> +
> + }
> +
> + return qemu_gluster_glfs_init(gconf, errp);
> +}
> +
> static void qemu_gluster_complete_aio(void *opaque)
> {
> GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
> @@ -383,7 +700,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
> gconf = g_new0(BlockdevOptionsGluster, 1);
> gconf->debug_level = s->debug_level;
> gconf->has_debug_level = true;
> - s->glfs = qemu_gluster_init(gconf, filename, errp);
> + s->glfs = qemu_gluster_init(gconf, filename, options, errp);
> if (!s->glfs) {
> ret = -errno;
> goto out;
> @@ -454,7 +771,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
> gconf = g_new0(BlockdevOptionsGluster, 1);
> gconf->debug_level = s->debug_level;
> gconf->has_debug_level = true;
> - reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
> + reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, NULL, errp);
> if (reop_s->glfs == NULL) {
> ret = -errno;
> goto exit;
> @@ -601,7 +918,7 @@ static int qemu_gluster_create(const char *filename,
> }
> gconf->has_debug_level = true;
>
> - glfs = qemu_gluster_init(gconf, filename, errp);
> + glfs = qemu_gluster_init(gconf, filename, NULL, errp);
> if (!glfs) {
> ret = -errno;
> goto out;
> @@ -981,7 +1298,7 @@ static BlockDriver bdrv_gluster = {
> .format_name = "gluster",
> .protocol_name = "gluster",
> .instance_size = sizeof(BDRVGlusterState),
> - .bdrv_needs_filename = true,
> + .bdrv_needs_filename = false,
> .bdrv_file_open = qemu_gluster_open,
> .bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
> .bdrv_reopen_commit = qemu_gluster_reopen_commit,
> @@ -1009,7 +1326,7 @@ static BlockDriver bdrv_gluster_tcp = {
> .format_name = "gluster",
> .protocol_name = "gluster+tcp",
> .instance_size = sizeof(BDRVGlusterState),
> - .bdrv_needs_filename = true,
> + .bdrv_needs_filename = false,
> .bdrv_file_open = qemu_gluster_open,
> .bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
> .bdrv_reopen_commit = qemu_gluster_reopen_commit,
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 1fa0674..5f8179b 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -2111,7 +2111,7 @@
> { 'struct': 'BlockdevOptionsGluster',
> 'data': { 'volume': 'str',
> 'path': 'str',
> - 'server': 'GlusterServer',
> + 'server': ['GlusterServer'],
> '*debug_level': 'int' } }
>
> ##
> --
> 2.7.4
>
>
With fixups:
Reviewed-by: Jeff Cody <jcody@redhat.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema
2016-07-19 20:39 ` Jeff Cody
@ 2016-07-19 21:31 ` Jeff Cody
0 siblings, 0 replies; 20+ messages in thread
From: Jeff Cody @ 2016-07-19 21:31 UTC (permalink / raw)
To: Prasanna Kumar Kalever
Cc: qemu-devel, armbru, kwolf, eblake, pkrempa, rtalur, vbellur
On Tue, Jul 19, 2016 at 04:39:39PM -0400, Jeff Cody wrote:
> On Tue, Jul 19, 2016 at 10:27:32PM +0530, Prasanna Kumar Kalever wrote:
> > this patch adds 'GlusterServer' related schema in qapi/block-core.json
> >
> > Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> > ---
> > block/gluster.c | 115 +++++++++++++++++++++++++++++----------------------
> > qapi/block-core.json | 68 +++++++++++++++++++++++++++---
> > 2 files changed, 128 insertions(+), 55 deletions(-)
> >
> > diff --git a/block/gluster.c b/block/gluster.c
> > index 8a54ad4..c4ca59e 100644
> > --- a/block/gluster.c
> > +++ b/block/gluster.c
> > @@ -16,6 +16,7 @@
> >
> > #define GLUSTER_OPT_FILENAME "filename"
> > #define GLUSTER_OPT_DEBUG "debug"
> > +#define GLUSTER_DEFAULT_PORT 24007
> > #define GLUSTER_DEBUG_DEFAULT 4
> > #define GLUSTER_DEBUG_MAX 9
> >
> > @@ -40,15 +41,6 @@ typedef struct BDRVGlusterReopenState {
> > struct glfs_fd *fd;
> > } BDRVGlusterReopenState;
> >
> > -typedef struct GlusterConf {
> > - char *host;
> > - int port;
> > - char *volume;
> > - char *path;
> > - char *transport;
> > - int debug_level;
> > -} GlusterConf;
> > -
> >
> > static QemuOptsList qemu_gluster_create_opts = {
> > .name = "qemu-gluster-create-opts",
> > @@ -92,18 +84,7 @@ static QemuOptsList runtime_opts = {
> > };
> >
> >
> > -static void qemu_gluster_gconf_free(GlusterConf *gconf)
> > -{
> > - if (gconf) {
> > - g_free(gconf->host);
> > - g_free(gconf->volume);
> > - g_free(gconf->path);
> > - g_free(gconf->transport);
> > - g_free(gconf);
> > - }
> > -}
> > -
> > -static int parse_volume_options(GlusterConf *gconf, char *path)
> > +static int parse_volume_options(BlockdevOptionsGluster *gconf, char *path)
> > {
> > char *p, *q;
> >
> > @@ -160,8 +141,10 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
> > * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
> > * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
> > */
> > -static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> > +static int qemu_gluster_parse_uri(BlockdevOptionsGluster *gconf,
> > + const char *filename)
> > {
> > + GlusterServer *gsconf;
> > URI *uri;
> > QueryParams *qp = NULL;
> > bool is_unix = false;
> > @@ -172,16 +155,18 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> > return -EINVAL;
> > }
> >
> > + gconf->server = gsconf = g_new0(GlusterServer, 1);
> > +
> > /* transport */
> > if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
> > - gconf->transport = g_strdup("tcp");
> > + gsconf->type = GLUSTER_TRANSPORT_TCP;
> > } else if (!strcmp(uri->scheme, "gluster+tcp")) {
> > - gconf->transport = g_strdup("tcp");
> > + gsconf->type = GLUSTER_TRANSPORT_TCP;
> > } else if (!strcmp(uri->scheme, "gluster+unix")) {
> > - gconf->transport = g_strdup("unix");
> > + gsconf->type = GLUSTER_TRANSPORT_UNIX;
> > is_unix = true;
> > } else if (!strcmp(uri->scheme, "gluster+rdma")) {
> > - gconf->transport = g_strdup("tcp");
> > + gsconf->type = GLUSTER_TRANSPORT_TCP;
> > error_report("Warning: rdma feature is not supported falling "
> > "back to tcp");
> > } else {
> > @@ -209,10 +194,14 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> > ret = -EINVAL;
> > goto out;
> > }
> > - gconf->host = g_strdup(qp->p[0].value);
> > + gsconf->u.q_unix.path = g_strdup(qp->p[0].value);
> > } else {
> > - gconf->host = g_strdup(uri->server ? uri->server : "localhost");
> > - gconf->port = uri->port;
> > + gsconf->u.tcp.host = g_strdup(uri->server ? uri->server : "localhost");
> > + if (uri->port) {
> > + gsconf->u.tcp.port = g_strdup_printf("%d", uri->port);
> > + } else {
> > + gsconf->u.tcp.port = g_strdup_printf("%d", GLUSTER_DEFAULT_PORT);
> > + }
> > }
> >
> > out:
> > @@ -223,17 +212,18 @@ out:
> > return ret;
> > }
> >
> > -static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
> > - Error **errp)
> > +static struct glfs *qemu_gluster_init(BlockdevOptionsGluster *gconf,
> > + const char *filename, Error **errp)
> > {
> > struct glfs *glfs = NULL;
> > int ret;
> > int old_errno;
> >
> > - ret = qemu_gluster_parseuri(gconf, filename);
> > + ret = qemu_gluster_parse_uri(gconf, filename);
> > if (ret < 0) {
> > - error_setg(errp, "Usage: file=gluster[+transport]://[host[:port]]/"
> > - "volume/path[?socket=...]");
> > + error_setg(errp, "Invalid URI");
> > + error_append_hint(errp, "Usage: file=gluster[+transport]://"
> > + "[host[:port]]/volume/path[?socket=...]\n");
> > errno = -ret;
> > goto out;
> > }
> > @@ -243,8 +233,16 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
> > goto out;
> > }
> >
> > - ret = glfs_set_volfile_server(glfs, gconf->transport, gconf->host,
> > - gconf->port);
> > + if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> > + ret = glfs_set_volfile_server(glfs,
> > + GlusterTransport_lookup[gconf->server->type],
>
> Formatting issue found by checkpatch; exceeds 80 char. Will fix on commit,
> no need for respin.
>
> > + gconf->server->u.q_unix.path, 0);
> > + } else {
> > + ret = glfs_set_volfile_server(glfs,
> > + GlusterTransport_lookup[gconf->server->type],
>
> Formatting issue found by checkpatch; exceeds 80 char. Will fix on commit,
> no need for respin.
>
>
> > + gconf->server->u.tcp.host,
> > + atoi(gconf->server->u.tcp.port));
> > + }
> > if (ret < 0) {
> > goto out;
> > }
> > @@ -256,15 +254,22 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
> >
> > ret = glfs_init(glfs);
> > if (ret) {
> > - error_setg_errno(errp, errno,
> > - "Gluster connection failed for host=%s port=%d "
> > - "volume=%s path=%s transport=%s", gconf->host,
> > - gconf->port, gconf->volume, gconf->path,
> > - gconf->transport);
> > + if (gconf->server->type == GLUSTER_TRANSPORT_UNIX) {
> > + error_setg(errp,
> > + "Gluster connection for volume %s, path %s failed on "
> > + "socket %s ", gconf->volume, gconf->path,
> > + gconf->server->u.q_unix.path);
> > + } else {
> > + error_setg(errp,
> > + "Gluster connection for volume %s, path %s failed on "
> > + "host %s and port %s ", gconf->volume, gconf->path,
> > + gconf->server->u.tcp.host, gconf->server->u.tcp.port);
> > + }
> >
> > /* glfs_init sometimes doesn't set errno although docs suggest that */
> > - if (errno == 0)
> > + if (errno == 0) {
> > errno = EINVAL;
> > + }
>
> Eric pointed out the errno risk here; as this is pre-existing, nothing to
> hold up this patch (can be fixed in later patch).
>
> >
> > goto out;
> > }
> > @@ -352,7 +357,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
> > BDRVGlusterState *s = bs->opaque;
> > int open_flags = 0;
> > int ret = 0;
> > - GlusterConf *gconf = g_new0(GlusterConf, 1);
> > + BlockdevOptionsGluster *gconf = NULL;
> > QemuOpts *opts;
> > Error *local_err = NULL;
> > const char *filename;
> > @@ -375,7 +380,9 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
> > s->debug_level = GLUSTER_DEBUG_MAX;
> > }
> >
> > + gconf = g_new0(BlockdevOptionsGluster, 1);
> > gconf->debug_level = s->debug_level;
> > + gconf->has_debug_level = true;
> > s->glfs = qemu_gluster_init(gconf, filename, errp);
> > if (!s->glfs) {
> > ret = -errno;
> > @@ -410,7 +417,9 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
> >
> > out:
> > qemu_opts_del(opts);
> > - qemu_gluster_gconf_free(gconf);
> > + if (gconf) {
> > + qapi_free_BlockdevOptionsGluster(gconf);
> > + }
>
> Per Markus's suggestion, will remove conditional on commit.
>
> > if (!ret) {
> > return ret;
> > }
> > @@ -429,7 +438,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
> > int ret = 0;
> > BDRVGlusterState *s;
> > BDRVGlusterReopenState *reop_s;
> > - GlusterConf *gconf = NULL;
> > + BlockdevOptionsGluster *gconf;
> > int open_flags = 0;
> >
> > assert(state != NULL);
> > @@ -442,9 +451,9 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
> >
> > qemu_gluster_parse_flags(state->flags, &open_flags);
> >
> > - gconf = g_new0(GlusterConf, 1);
> > -
> > + gconf = g_new0(BlockdevOptionsGluster, 1);
> > gconf->debug_level = s->debug_level;
> > + gconf->has_debug_level = true;
> > reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
> > if (reop_s->glfs == NULL) {
> > ret = -errno;
> > @@ -470,7 +479,9 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
> >
> > exit:
> > /* state->opaque will be freed in either the _abort or _commit */
> > - qemu_gluster_gconf_free(gconf);
> > + if (gconf) {
> > + qapi_free_BlockdevOptionsGluster(gconf);
> > + }
>
> Here too.
>
> > return ret;
> > }
> >
> > @@ -572,14 +583,15 @@ static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
> > static int qemu_gluster_create(const char *filename,
> > QemuOpts *opts, Error **errp)
> > {
> > + BlockdevOptionsGluster *gconf;
> > struct glfs *glfs;
> > struct glfs_fd *fd;
> > int ret = 0;
> > int prealloc = 0;
> > int64_t total_size = 0;
> > char *tmp = NULL;
> > - GlusterConf *gconf = g_new0(GlusterConf, 1);
> >
> > + gconf = g_new0(BlockdevOptionsGluster, 1);
> > gconf->debug_level = qemu_opt_get_number_del(opts, GLUSTER_OPT_DEBUG,
> > GLUSTER_DEBUG_DEFAULT);
> > if (gconf->debug_level < 0) {
> > @@ -587,6 +599,7 @@ static int qemu_gluster_create(const char *filename,
> > } else if (gconf->debug_level > GLUSTER_DEBUG_MAX) {
> > gconf->debug_level = GLUSTER_DEBUG_MAX;
> > }
> > + gconf->has_debug_level = true;
> >
> > glfs = qemu_gluster_init(gconf, filename, errp);
> > if (!glfs) {
> > @@ -628,7 +641,9 @@ static int qemu_gluster_create(const char *filename,
> > }
> > out:
> > g_free(tmp);
> > - qemu_gluster_gconf_free(gconf);
> > + if (gconf) {
> > + qapi_free_BlockdevOptionsGluster(gconf);
> > + }
I almost missed this one; will remove this conditional as well.
> > if (glfs) {
> > glfs_fini(glfs);
> > }
> > diff --git a/qapi/block-core.json b/qapi/block-core.json
> > index a7fdb28..1fa0674 100644
> > --- a/qapi/block-core.json
> > +++ b/qapi/block-core.json
> > @@ -1658,13 +1658,14 @@
> > # @host_device, @host_cdrom: Since 2.1
> > #
> > # Since: 2.0
> > +# @gluster: Since 2.7
> > ##
> > { 'enum': 'BlockdevDriver',
> > 'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
> > - 'dmg', 'file', 'ftp', 'ftps', 'host_cdrom', 'host_device',
> > - 'http', 'https', 'luks', 'null-aio', 'null-co', 'parallels',
> > - 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp', 'vdi', 'vhdx',
> > - 'vmdk', 'vpc', 'vvfat' ] }
> > + 'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
> > + 'host_device', 'http', 'https', 'luks', 'null-aio', 'null-co',
> > + 'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'tftp',
> > + 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat' ] }
> >
> > ##
> > # @BlockdevOptionsFile
> > @@ -2057,6 +2058,63 @@
> > '*read-pattern': 'QuorumReadPattern' } }
> >
> > ##
> > +# @GlusterTransport
> > +#
> > +# An enumeration of Gluster transport type
> > +#
> > +# @tcp: TCP - Transmission Control Protocol
> > +#
> > +# @unix: UNIX - Unix domain socket
> > +#
> > +# Since: 2.7
> > +##
> > +{ 'enum': 'GlusterTransport',
> > + 'data': [ 'unix', 'tcp' ] }
> > +
> > +
> > +##
> > +# @GlusterServer
> > +#
> > +# Captures the address of a socket
> > +#
> > +# Details for connecting to a gluster server
> > +#
> > +# @type: Transport type used for gluster connection
> > +#
> > +# @unix: socket file
> > +#
> > +# @tcp: host address and port number
> > +#
> > +# Since: 2.7
> > +##
> > +{ 'union': 'GlusterServer',
> > + 'base': { 'type': 'GlusterTransport' },
> > + 'discriminator': 'type',
> > + 'data': { 'unix': 'UnixSocketAddress',
> > + 'tcp': 'InetSocketAddress' } }
> > +
> > +##
> > +# @BlockdevOptionsGluster
> > +#
> > +# Driver specific block device options for Gluster
> > +#
> > +# @volume: name of gluster volume where VM image resides
> > +#
> > +# @path: absolute path to image file in gluster volume
> > +#
> > +# @server: gluster server description
> > +#
> > +# @debug_level: #optional libgfapi log level (default '4' which is Error)
> > +#
>
> Per Eric's suggestion, will change this to debug-level on commit.
>
> > +# Since: 2.7
> > +##
> > +{ 'struct': 'BlockdevOptionsGluster',
> > + 'data': { 'volume': 'str',
> > + 'path': 'str',
> > + 'server': 'GlusterServer',
> > + '*debug_level': 'int' } }
> > +
> > +##
> > # @BlockdevOptions
> > #
> > # Options for creating a block device. Many options are available for all
> > @@ -2119,7 +2177,7 @@
> > 'file': 'BlockdevOptionsFile',
> > 'ftp': 'BlockdevOptionsFile',
> > 'ftps': 'BlockdevOptionsFile',
> > -# TODO gluster: Wait for structured options
> > + 'gluster': 'BlockdevOptionsGluster',
> > 'host_cdrom': 'BlockdevOptionsFile',
> > 'host_device':'BlockdevOptionsFile',
> > 'http': 'BlockdevOptionsFile',
> > --
> > 2.7.4
> >
>
> After fix-ups:
>
> Reviewed-by: Jeff Cody <jcody@redhat.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 0/5] block/gluster: add support for multiple gluster servers
2016-07-19 16:57 [Qemu-devel] [PATCH v20 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
` (4 preceding siblings ...)
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
@ 2016-07-19 21:40 ` Jeff Cody
5 siblings, 0 replies; 20+ messages in thread
From: Jeff Cody @ 2016-07-19 21:40 UTC (permalink / raw)
To: Prasanna Kumar Kalever
Cc: qemu-devel, armbru, kwolf, eblake, pkrempa, rtalur, vbellur
On Tue, Jul 19, 2016 at 10:27:28PM +0530, Prasanna Kumar Kalever wrote:
> This version of patches are rebased repo at
> git://repo.or.cz/qemu/armbru.git qapi-not-next
>
> Prasanna Kumar Kalever (5):
> block/gluster: rename [server, volname, image] -> [host, volume, path]
> block/gluster: code cleanup
> block/gluster: deprecate rdma support
> block/gluster: using new qapi schema
> block/gluster: add support for multiple gluster servers
>
> v1:
> multiple host addresses but common port number and transport type
> pattern: URI syntax with query (?) delimitor
> syntax:
> file=gluster[+transport-type]://host1:24007/testvol/a.img\
> ?server=host2&server=host3
>
> v2:
> multiple host addresses each have their own port number, but all use
> common transport type
> pattern: URI syntax with query (?) delimiter
> syntax:
> file=gluster[+transport-type]://[host[:port]]/testvol/a.img\
> [?server=host1[:port]\
> &server=host2[:port]]
>
> v3:
> multiple host addresses each have their own port number and transport type
> pattern: changed to json
> syntax:
> 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
> "path":"/path/a.qcow2","server":
> [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
> {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'
>
> v4, v5:
> address comments from "Eric Blake" <eblake@redhat.com>
> renamed:
> 'backup-volfile-servers' -> 'volfile-servers'
>
> v6:
> address comments from Peter Krempa <pkrempa@redhat.com>
> renamed:
> 'volname' -> 'volume'
> 'image-path' -> 'path'
> 'server' -> 'host'
>
> v7:
> fix for v6 (initialize num_servers to 1 and other typos)
>
> v8:
> split patch set v7 into series of 3 as per Peter Krempa <pkrempa@redhat.com>
> review comments
>
> v9:
> reorder the series of patches addressing "Eric Blake" <eblake@redhat.com>
> review comments
>
> v10:
> fix mem-leak as per Peter Krempa <pkrempa@redhat.com> review comments
>
> v11:
> using qapi-types* defined structures as per "Eric Blake" <eblake@redhat.com>
> review comments.
>
> v12:
> fix crash caused in qapi_free_BlockdevOptionsGluster
>
> v13:
> address comments from "Jeff Cody" <jcody@redhat.com>
>
> v14:
> address comments from "Eric Blake" <eblake@redhat.com>
> split patch 3/3 into two
> rename input option and variable from 'servers' to 'server'
>
> v15:
> patch 1/4 changed the commit message as per Eric's comment
> patch 2/4 are unchanged
> patch 3/4 addressed Jeff's comments
> patch 4/4 concentrates on unix transport related help info,
> rename 'parse_transport_option()' to 'qapi_enum_parse()',
> address memory leaks and other comments given by Jeff and Eric
>
> v16:
> In patch 4/4 fixed segfault on glfs_init() error case, as per Jeff's comments
> other patches in this series remain unchanged
>
> v17:
> rebase of v16 on latest master
>
> v18:
> rebase of v17 on latest master
> rebase has demanded type conversion of 'qemu_gluster_init()'[s] first argument
> from 'BlockdevOptionsGluster**' to 'BlockdevOptionsGluster*' and all its callees
> both in 3/4 and 4/4 patches
>
> v19:
> patches 1/5, 2/5 remains unchanged
>
> patch 3/5 is something new, in which the rdma deadcode is removed
>
> patch 4/5 (i.e. 3/4 in v18) now uses union discriminator, I have made a choice
> to use gluster with custom schema since @UnixSocketAddress uses 'path' as key,
> which may be confusing with gluster, and in @InetSocketAddress port was str
> again I have made a choice to keep it uint16 which really make sense.
> Hmmm.. As Markus suggested in v18 qemu_gluster_parseuri() is *parse_uri() same
> with *parse_json() (in 5/5)
>
> patch 5/5 (i.e 4/4 in v18) adds a list of servers and json parser functionality
> as usual
>
> Thanks to Markus and Eric for help in understanding the new schema changes.
>
> v20:
> address comments from Markus and Eric on v19
> patch 4/5 and 5/5 Use InetSocketAddress instead of GlusterInetSocketAddress
> Port is not optional anymore
>
> block/gluster.c | 637 +++++++++++++++++++++++++++++++++++++++------------
> qapi/block-core.json | 68 +++++-
> 2 files changed, 553 insertions(+), 152 deletions(-)
>
> --
> 2.7.4
>
Thanks,
Applied to my block branch:
git://github.com/codyprime/qemu-kvm-jtc.git block
-Jeff
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
` (2 preceding siblings ...)
2016-07-19 20:46 ` Jeff Cody
@ 2016-07-19 22:32 ` Eric Blake
3 siblings, 0 replies; 20+ messages in thread
From: Eric Blake @ 2016-07-19 22:32 UTC (permalink / raw)
To: Prasanna Kumar Kalever, qemu-devel
Cc: armbru, kwolf, pkrempa, jcody, rtalur, vbellur
[-- Attachment #1: Type: text/plain, Size: 10704 bytes --]
On 07/19/2016 10:57 AM, Prasanna Kumar Kalever wrote:
> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.
If rdma is deprecated, we don't need to mention it here.
>
> Problem:
>
> Currently VM Image on gluster volume is specified like this:
>
> file=gluster[+tcp]://host[:port]/testvol/a.img
>
> Say we have three hosts in a trusted pool with replica 3 volume in action.
> When the host mentioned in the command above goes down for some reason,
> the other two hosts are still available. But there's currently no way
> to tell QEMU about them.
>
> Solution:
>
> New way of specifying VM Image on gluster volume with volfile servers:
> (We still support old syntax to maintain backward compatibility)
>
> Basic command line syntax looks like:
>
> Pattern I:
> -drive driver=gluster,
> volume=testvol,path=/path/a.raw,[debug=N,]
> server.0.type=tcp,
> server.0.host=1.2.3.4,
> server.0.port=24007,
> server.1.type=unix,
> server.1.socket=/path/socketfile
So, I haven't checked this yet, but if I'm correct, the old syntax was:
-drive driver=gluster,
volume=testvol,path=/path/a.raw,
server.type=tcp,
server.host=1.2.3.4,
server.port=24007
Is that syntax still supported? That is, if I only specify one server,
does 'server.FOO' mean the same as 'server.0.FOO'? Or am I completely
wrong? Maybe listing the old syntax for comparison is in order,
especially since you claim above that the old syntax is still supported.
>
> Pattern II:
> 'json:{"driver":"qcow2","file":{"driver":"gluster",
> "volume":"testvol","path":"/path/a.qcow2",["debug":N,]
> "server":[{hostinfo_1}, ...{hostinfo_N}]}}'
>
> driver => 'gluster' (protocol name)
> volume => name of gluster volume where our VM image resides
> path => absolute path of image in gluster volume
> [debug] => libgfapi loglevel [(0 - 9) default 4 -> Error]
>
> {hostinfo} => {{type:"tcp",host:"1.2.3.4"[,port=24007]},
> {type:"unix",socket:"/path/sockfile"}}
>
> type => transport type used to connect to gluster management daemon,
> it can be tcp|unix
> host => host address (hostname/ipv4/ipv6 addresses/socket path)
You dropped ipv6 in the last patch, if you want to update this comment.
> port => port number on which glusterd is listening.
> socket => path to socket file
>
> Examples:
> 1.
> -drive driver=qcow2,file.driver=gluster,
> file.volume=testvol,file.path=/path/a.qcow2,file.debug=9,
> file.server.0.type=tcp,
> file.server.0.host=1.2.3.4,
> file.server.0.port=24007,
> file.server.1.type=tcp,
> file.server.1.socket=/var/run/glusterd.socket
> 2.
> 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
> "path":"/path/a.qcow2","debug":9,"server":
> [{type:"tcp",host:"1.2.3.4",port=24007},
> {type:"unix",socket:"/var/run/glusterd.socket"}] } }'
>
> This patch gives a mechanism to provide all the server addresses, which are in
> replica set, so in case host1 is down VM can still boot from any of the
> active hosts.
>
> This is equivalent to the backup-volfile-servers option supported by
> mount.glusterfs (FUSE way of mounting gluster volume)
>
> credits: sincere thanks to all the supporters
>
> Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
> ---
> block/gluster.c | 397 +++++++++++++++++++++++++++++++++++++++++++++------
> qapi/block-core.json | 2 +-
> 2 files changed, 358 insertions(+), 41 deletions(-)
>
> +static QemuOptsList runtime_tcp_opts = {
> + .name = "gluster_tcp",
> + .head = QTAILQ_HEAD_INITIALIZER(runtime_tcp_opts.head),
> + .desc = {
> + {
> + .name = GLUSTER_OPT_TYPE,
> + .type = QEMU_OPT_STRING,
> + .help = "tcp|unix",
> + },
> + {
> + .name = GLUSTER_OPT_HOST,
> + .type = QEMU_OPT_STRING,
> + .help = "host address (hostname/ipv4/ipv6 addresses)",
> + },
Awkward to state ipv6 here,
> + {
> + .name = GLUSTER_OPT_PORT,
> + .type = QEMU_OPT_NUMBER,
> + .help = "port number on which glusterd is listening (default 24007)",
> + },
> + {
> + .name = "to",
> + .type = QEMU_OPT_NUMBER,
> + .help = "max port number, not supported by gluster",
> + },
> + {
> + .name = "ipv4",
> + .type = QEMU_OPT_BOOL,
> + .help = "ipv4 bool value, not supported by gluster",
> + },
> + {
> + .name = "ipv6",
> + .type = QEMU_OPT_BOOL,
> + .help = "ipv6 bool value, not supported by gluster",
but then to state it is not supported here.
Do we actually have to provide QemuOpt listings for the options that we
don't actually support? That is, must QemuOpt precisely match the
InetAddressSocket struct we are parsing into, or can it be merely a
subset where we omit the portions we won't use?
> @@ -284,6 +381,226 @@ out:
> return NULL;
> }
>
> +static int qapi_enum_parse(const char *opt)
> +{
> + int i;
> +
> + if (!opt) {
> + return GLUSTER_TRANSPORT__MAX;
> + }
> +
> + for (i = 0; i < GLUSTER_TRANSPORT__MAX; i++) {
> + if (!strcmp(opt, GlusterTransport_lookup[i])) {
> + return i;
> + }
> + }
> +
> + return i;
Is this duplicating any functionality that we already have?
> +}
> +
> +/*
> + * Convert the json formatted command line into qapi.
> +*/
> +static int qemu_gluster_parse_json(BlockdevOptionsGluster *gconf,
> + QDict *options, Error **errp)
Indentation is off.
> +{
> + QemuOpts *opts;
> + GlusterServer *gsconf;
> + GlusterServerList *curr = NULL;
> + QDict *backing_options = NULL;
> + Error *local_err = NULL;
> + char *str = NULL;
> + const char *ptr;
> + size_t num_servers;
> + int i;
> +
> + /* create opts info from runtime_json_opts list */
> + opts = qemu_opts_create(&runtime_json_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, options, &local_err);
In fact, if we were to use opts_visitor_new(), could we let the
already-existing generated QAPI code parse from QemuOpts into QAPI form
without having to do it by hand here? Hmm, maybe not...
> + if (local_err) {
> + goto out;
> + }
> +
> + num_servers = qdict_array_entries(options, GLUSTER_OPT_SERVER_PATTERN);
> + if (num_servers < 1) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, "server");
> + goto out;
> + }
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_VOLUME);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_VOLUME);
> + goto out;
> + }
> + gconf->volume = g_strdup(ptr);
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_PATH);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_PATH);
> + goto out;
> + }
> + gconf->path = g_strdup(ptr);
> + qemu_opts_del(opts);
> +
> + for (i = 0; i < num_servers; i++) {
> + str = g_strdup_printf(GLUSTER_OPT_SERVER_PATTERN"%d.", i);
> + qdict_extract_subqdict(options, &backing_options, str);
...since the OptsVisitor doesn't handle structs, but we are definitely
parsing a sub-struct. I also wonder if Dan's work on a new string-based
qmp-input-visitor, and/or his qdict_crumple() work, could reduce the
amount of effort needed here.
It's not to say that your patch is wrong, only that we may have
follow-up patches that can improve it.
> +
> + /* create opts info from runtime_type_opts list */
> + opts = qemu_opts_create(&runtime_type_opts, NULL, 0, &error_abort);
> + qemu_opts_absorb_qdict(opts, backing_options, &local_err);
> + if (local_err) {
> + goto out;
> + }
> +
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_TYPE);
> + gsconf = g_new0(GlusterServer, 1);
> + gsconf->type = qapi_enum_parse(ptr);
> + if (!ptr) {
> + error_setg(&local_err, QERR_MISSING_PARAMETER, GLUSTER_OPT_TYPE);
> + error_append_hint(&local_err, GERR_INDEX_HINT, i);
Nothing else in our code base uses GERR_INDEX_HINT. I'd MUCH prefer
open-coding the string instead of relying on some unknown string literal
from glib, since we have less control over whether glib will always keep
the format markers lined up with what we are using.
> + /* defend for unsupported fields in InetSocketAddress,
> + * i.e. @ipv4, @ipv6 and @to
> + */
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_TO);
> + if (ptr) {
> + gsconf->u.tcp.has_to = true;
> + }
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_IPV4);
> + if (ptr) {
> + gsconf->u.tcp.has_ipv4 = true;
> + }
> + ptr = qemu_opt_get(opts, GLUSTER_OPT_IPV6);
> + if (ptr) {
> + gsconf->u.tcp.has_ipv6 = true;
> + }
> + if (gsconf->u.tcp.has_to) {
> + error_setg(&local_err, "Parameter 'to' not supported");
> + goto out;
> + }
> + if (gsconf->u.tcp.has_ipv4 || gsconf->u.tcp.has_ipv6) {
> + error_setg(&local_err, "Parameters 'ipv4/ipv6' not supported");
I know we are rejecting ipv6 until gluster supports it, but do we really
have to reject ipv4? On the other hand, it's always
backwards-compatible to relax this restriction later on, but harder to
add it in after a release where it was not present.
> +++ b/qapi/block-core.json
> @@ -2111,7 +2111,7 @@
> { 'struct': 'BlockdevOptionsGluster',
> 'data': { 'volume': 'str',
> 'path': 'str',
> - 'server': 'GlusterServer',
> + 'server': ['GlusterServer'],
Documentation should probably be tweaked to mention that 'server' is now
a list of servers, and should not be empty. For that matter, did you
test that the error message is sane when there are no servers listed?
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library http://libvirt.org
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 604 bytes --]
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread, other threads:[~2016-07-19 22:33 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2016-07-19 16:57 [Qemu-devel] [PATCH v20 0/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 1/5] block/gluster: rename [server, volname, image] -> [host, volume, path] Prasanna Kumar Kalever
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 2/5] block/gluster: code cleanup Prasanna Kumar Kalever
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 3/5] block/gluster: deprecate rdma support Prasanna Kumar Kalever
2016-07-19 17:17 ` Markus Armbruster
2016-07-19 18:09 ` Eric Blake
2016-07-19 18:46 ` Jeff Cody
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 4/5] block/gluster: using new qapi schema Prasanna Kumar Kalever
2016-07-19 17:30 ` Markus Armbruster
2016-07-19 17:37 ` Markus Armbruster
2016-07-19 18:38 ` Eric Blake
2016-07-19 20:39 ` Jeff Cody
2016-07-19 21:31 ` Jeff Cody
2016-07-19 16:57 ` [Qemu-devel] [PATCH v20 5/5] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2016-07-19 17:55 ` Markus Armbruster
2016-07-19 18:33 ` Markus Armbruster
2016-07-19 19:01 ` Prasanna Kalever
2016-07-19 20:46 ` Jeff Cody
2016-07-19 22:32 ` Eric Blake
2016-07-19 21:40 ` [Qemu-devel] [PATCH v20 0/5] " Jeff Cody
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).