qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH v5 0/2] allow blockdev-add for NFS
@ 2016-10-28 17:09 Ashijeet Acharya
  2016-10-28 17:09 ` [Qemu-devel] [PATCH v5 1/2] block/nfs: Introduce runtime_opts in NFS Ashijeet Acharya
  2016-10-28 17:09 ` [Qemu-devel] [PATCH v5 2/2] qapi: allow blockdev-add for NFS Ashijeet Acharya
  0 siblings, 2 replies; 5+ messages in thread
From: Ashijeet Acharya @ 2016-10-28 17:09 UTC (permalink / raw)
  To: kwolf
  Cc: eblake, pl, jcody, mreitz, armbru, qemu-devel, qemu-block,
	Ashijeet Acharya

Previously posted series patches:
v4: https://lists.gnu.org/archive/html/qemu-devel/2016-10/msg07449.html
v3: https://lists.gnu.org/archive/html/qemu-devel/2016-10/msg06903.html
v2: https://lists.gnu.org/archive/html/qemu-devel/2016-10/msg05844.html
v1: https://lists.gnu.org/archive/html/qemu-devel/2016-10/msg04487.html

This series adds blockdev-add support for the NFS block driver.

Patch 1 prepares the NFS driver to use several fine-grained runtime_opts
corresponding to the parameters that appear in the URI. This makes NFS
handle its options the same way as the other drivers in the block layer.
It also adds support for the new option "server".

Patch 2 enables blockdev-add for the NFS block driver by making the new
'nfs' driver option available in the BlockdevOptions union.


*** This series depends on the following patch: ***
"qdict: implement a qdict_crumple method for un-flattening a dict"
from Daniel's "QAPI/QOM work for non-scalar object properties"
series.

Changes in v5:
- drop use of legacy options
- set bs->exact_filename in bdrv_refresh_filename
- change client->inet to client->server
- fix qdict_crumple() bug

Changes in v4:
- add support for option "server" in nfs driver
- add bdrv_refresh_filename fix
- fix the comments in json

Changes in v3:
- minor coding style fix
- set ret=-EINVAL in nfs_parse_uri()
- fix the bug of setting errp twice
- make all error paths 'goto fail'
- pass 0 as a default value in qemu_opt_get_number()
- drop nfs_set_pagecache_ttl()
- introduce new enum NFSTransport and use it for 'type' in NFSServer
- mention default values of query parameters
- change the names of query parameters in JSON

Changes in v2:
- drop strcmp() condition check for host and path in nfs_parse_uri()
- drop "export" completely
- initialize client->context before setting query parameters
- fix the QDict options being passed to nfs_client_open() and make use of url

Ashijeet Acharya (2):
  block/nfs: Introduce runtime_opts in NFS
  qapi: allow blockdev-add for NFS

 block/nfs.c          | 430 ++++++++++++++++++++++++++++++++++++++++-----------
 qapi/block-core.json |  74 ++++++++-
 2 files changed, 408 insertions(+), 96 deletions(-)

-- 
2.6.2

* [Qemu-devel] [PATCH v5 1/2] block/nfs: Introduce runtime_opts in NFS
  2016-10-28 17:09 [Qemu-devel] [PATCH v5 0/2] allow blockdev-add for NFS Ashijeet Acharya
@ 2016-10-28 17:09 ` Ashijeet Acharya
  2016-10-31 11:53   ` Kevin Wolf
  2016-10-28 17:09 ` [Qemu-devel] [PATCH v5 2/2] qapi: allow blockdev-add for NFS Ashijeet Acharya
  1 sibling, 1 reply; 5+ messages in thread
From: Ashijeet Acharya @ 2016-10-28 17:09 UTC (permalink / raw)
  To: kwolf
  Cc: eblake, pl, jcody, mreitz, armbru, qemu-devel, qemu-block,
	Ashijeet Acharya

Make the NFS block driver use various fine-grained runtime_opts.
Set .bdrv_parse_filename() to nfs_parse_filename() and introduce two
new functions, nfs_parse_filename() and nfs_parse_uri(), to help parse
the URI.
Add a new option "server" which then accepts a new struct NFSServer.
"host" is supported as a legacy option and is mapped to its NFSServer
representation.

Signed-off-by: Ashijeet Acharya <ashijeetacharya@gmail.com>
---
 block/nfs.c | 430 +++++++++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 337 insertions(+), 93 deletions(-)

diff --git a/block/nfs.c b/block/nfs.c
index c3db2ec..5b66602 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -35,8 +35,15 @@
 #include "qemu/uri.h"
 #include "qemu/cutils.h"
 #include "sysemu/sysemu.h"
+#include "qapi/qmp/qdict.h"
+#include "qapi/qmp/qint.h"
+#include "qapi/qmp/qstring.h"
+#include "qapi-visit.h"
+#include "qapi/qobject-input-visitor.h"
+#include "qapi/qobject-output-visitor.h"
 #include <nfsc/libnfs.h>
 
+
 #define QEMU_NFS_MAX_READAHEAD_SIZE 1048576
 #define QEMU_NFS_MAX_PAGECACHE_SIZE (8388608 / NFS_BLKSIZE)
 #define QEMU_NFS_MAX_DEBUG_LEVEL 2
@@ -49,6 +56,9 @@ typedef struct NFSClient {
     AioContext *aio_context;
     blkcnt_t st_blocks;
     bool cache_used;
+    NFSServer *server;
+    char *path;
+    int64_t uid, gid, tcp_syncnt, readahead, pagecache, debug;
 } NFSClient;
 
 typedef struct NFSRPC {
@@ -60,6 +70,122 @@ typedef struct NFSRPC {
     NFSClient *client;
 } NFSRPC;
 
+static int nfs_parse_uri(const char *filename, QDict *options, Error **errp)
+{
+    URI *uri = NULL;
+    QueryParams *qp = NULL;
+    int ret = -EINVAL, i;
+
+    uri = uri_parse(filename);
+    if (!uri) {
+        error_setg(errp, "Invalid URI specified");
+        goto out;
+    }
+    if (strcmp(uri->scheme, "nfs") != 0) {
+        error_setg(errp, "URI scheme must be 'nfs'");
+        goto out;
+    }
+
+    if (!uri->server) {
+        error_setg(errp, "missing hostname in URI");
+        goto out;
+    }
+
+    if (!uri->path) {
+        error_setg(errp, "missing file path in URI");
+        goto out;
+    }
+
+    qp = query_params_parse(uri->query);
+    if (!qp) {
+        error_setg(errp, "could not parse query parameters");
+        goto out;
+    }
+
+    qdict_put(options, "server.host", qstring_from_str(uri->server));
+    qdict_put(options, "server.type", qstring_from_str("inet"));
+    qdict_put(options, "path", qstring_from_str(uri->path));
+
+    for (i = 0; i < qp->n; i++) {
+        if (!qp->p[i].value) {
+            error_setg(errp, "Value for NFS parameter expected: %s",
+                       qp->p[i].name);
+            goto out;
+        }
+        if (parse_uint_full(qp->p[i].value, NULL, 0)) {
+            error_setg(errp, "Illegal value for NFS parameter: %s",
+                       qp->p[i].name);
+            goto out;
+        }
+        if (!strcmp(qp->p[i].name, "uid")) {
+            qdict_put(options, "user",
+                      qstring_from_str(qp->p[i].value));
+        } else if (!strcmp(qp->p[i].name, "gid")) {
+            qdict_put(options, "group",
+                      qstring_from_str(qp->p[i].value));
+        } else if (!strcmp(qp->p[i].name, "tcp-syncnt")) {
+            qdict_put(options, "tcp-syn-count",
+                      qstring_from_str(qp->p[i].value));
+        } else if (!strcmp(qp->p[i].name, "readahead")) {
+            qdict_put(options, "readahead-size",
+                      qstring_from_str(qp->p[i].value));
+        } else if (!strcmp(qp->p[i].name, "pagecache")) {
+            qdict_put(options, "page-cache-size",
+                      qstring_from_str(qp->p[i].value));
+        } else if (!strcmp(qp->p[i].name, "debug")) {
+            qdict_put(options, "debug-level",
+                      qstring_from_str(qp->p[i].value));
+        } else {
+            error_setg(errp, "Unknown NFS parameter name: %s",
+                       qp->p[i].name);
+            goto out;
+        }
+    }
+    ret = 0;
+out:
+    if (qp) {
+        query_params_free(qp);
+    }
+    if (uri) {
+        uri_free(uri);
+    }
+    return ret;
+}
+
+static bool nfs_has_filename_options_conflict(QDict *options, Error **errp)
+{
+    const QDictEntry *qe;
+
+    for (qe = qdict_first(options); qe; qe = qdict_next(options, qe)) {
+        if (!strcmp(qe->key, "host") ||
+            !strcmp(qe->key, "path") ||
+            !strcmp(qe->key, "user") ||
+            !strcmp(qe->key, "group") ||
+            !strcmp(qe->key, "tcp-syn-count") ||
+            !strcmp(qe->key, "readahead-size") ||
+            !strcmp(qe->key, "page-cache-size") ||
+            !strcmp(qe->key, "debug-level") ||
+            strstart(qe->key, "server.", NULL))
+        {
+            error_setg(errp, "Option %s cannot be used with a filename",
+                       qe->key);
+            return true;
+        }
+    }
+
+    return false;
+}
+
+static void nfs_parse_filename(const char *filename, QDict *options,
+                               Error **errp)
+{
+    if (nfs_has_filename_options_conflict(options, errp)) {
+        return;
+    }
+
+    nfs_parse_uri(filename, options, errp);
+}
+
 static void nfs_process_read(void *arg);
 static void nfs_process_write(void *arg);
 
@@ -225,15 +351,44 @@ static int coroutine_fn nfs_co_flush(BlockDriverState *bs)
     return task.ret;
 }
 
-/* TODO Convert to fine grained options */
 static QemuOptsList runtime_opts = {
     .name = "nfs",
     .head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
     .desc = {
         {
-            .name = "filename",
+            .name = "path",
             .type = QEMU_OPT_STRING,
-            .help = "URL to the NFS file",
+            .help = "Path of the image on the host",
+        },
+        {
+            .name = "uid",
+            .type = QEMU_OPT_NUMBER,
+            .help = "UID value to use when talking to the server",
+        },
+        {
+            .name = "gid",
+            .type = QEMU_OPT_NUMBER,
+            .help = "GID value to use when talking to the server",
+        },
+        {
+            .name = "tcp-syncnt",
+            .type = QEMU_OPT_NUMBER,
+            .help = "Number of SYNs to send during the session establish",
+        },
+        {
+            .name = "readahead",
+            .type = QEMU_OPT_NUMBER,
+            .help = "Set the readahead size in bytes",
+        },
+        {
+            .name = "pagecache",
+            .type = QEMU_OPT_NUMBER,
+            .help = "Set the pagecache size in bytes",
+        },
+        {
+            .name = "debug",
+            .type = QEMU_OPT_NUMBER,
+            .help = "Set the NFS debug level (max 2)",
         },
         { /* end of list */ }
     },
@@ -276,25 +431,65 @@ static void nfs_file_close(BlockDriverState *bs)
     nfs_client_close(client);
 }
 
-static int64_t nfs_client_open(NFSClient *client, const char *filename,
+static NFSServer *nfs_config(QDict *options, Error **errp)
+{
+    NFSServer *server = NULL;
+    QDict *addr = NULL;
+    QObject *crumpled_addr = NULL;
+    Visitor *iv = NULL;
+    Error *local_error = NULL;
+
+    qdict_extract_subqdict(options, &addr, "server.");
+    if (!qdict_size(addr)) {
+        error_setg(errp, "NFS server address missing");
+        goto out;
+    }
+
+    crumpled_addr = qdict_crumple(addr, errp);
+    if (!crumpled_addr) {
+        goto out;
+    }
+
+    iv = qobject_input_visitor_new(crumpled_addr, true);
+    visit_type_NFSServer(iv, NULL, &server, &local_error);
+    if (local_error) {
+        error_propagate(errp, local_error);
+        goto out;
+    }
+
+out:
+    QDECREF(addr);
+    qobject_decref(crumpled_addr);
+    visit_free(iv);
+    return server;
+}
+
+
+static int64_t nfs_client_open(NFSClient *client, QDict *options,
                                int flags, Error **errp, int open_flags)
 {
-    int ret = -EINVAL, i;
+    int ret = -EINVAL;
+    QemuOpts *opts = NULL;
+    Error *local_err = NULL;
     struct stat st;
-    URI *uri;
-    QueryParams *qp = NULL;
     char *file = NULL, *strp = NULL;
 
-    uri = uri_parse(filename);
-    if (!uri) {
-        error_setg(errp, "Invalid URL specified");
+    opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
+    qemu_opts_absorb_qdict(opts, options, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        ret = -EINVAL;
         goto fail;
     }
-    if (!uri->server) {
-        error_setg(errp, "Invalid URL specified");
+
+    client->path = g_strdup(qemu_opt_get(opts, "path"));
+    if (!client->path) {
+        ret = -EINVAL;
+        error_setg(errp, "No path was specified");
         goto fail;
     }
-    strp = strrchr(uri->path, '/');
+
+    strp = strrchr(client->path, '/');
     if (strp == NULL) {
         error_setg(errp, "Invalid URL specified");
         goto fail;
@@ -302,85 +497,89 @@ static int64_t nfs_client_open(NFSClient *client, const char *filename,
     file = g_strdup(strp);
     *strp = 0;
 
+    /* Pop the config into our state object, Exit if invalid */
+    client->server = nfs_config(options, errp);
+    if (!client->server) {
+        ret = -EINVAL;
+        goto fail;
+    }
+
     client->context = nfs_init_context();
     if (client->context == NULL) {
         error_setg(errp, "Failed to init NFS context");
         goto fail;
     }
 
-    qp = query_params_parse(uri->query);
-    for (i = 0; i < qp->n; i++) {
-        unsigned long long val;
-        if (!qp->p[i].value) {
-            error_setg(errp, "Value for NFS parameter expected: %s",
-                       qp->p[i].name);
+    if (qemu_opt_get(opts, "uid")) {
+        client->uid = qemu_opt_get_number(opts, "uid", 0);
+        nfs_set_uid(client->context, client->uid);
+    }
+
+    if (qemu_opt_get(opts, "gid")) {
+        client->gid = qemu_opt_get_number(opts, "gid", 0);
+        nfs_set_gid(client->context, client->gid);
+    }
+
+    if (qemu_opt_get(opts, "tcp-syncnt")) {
+        client->tcp_syncnt = qemu_opt_get_number(opts, "tcp-syncnt", 0);
+        nfs_set_tcp_syncnt(client->context, client->tcp_syncnt);
+    }
+
+#ifdef LIBNFS_FEATURE_READAHEAD
+    if (qemu_opt_get(opts, "readahead")) {
+        if (open_flags & BDRV_O_NOCACHE) {
+            error_setg(errp, "Cannot enable NFS readahead "
+                             "if cache.direct = on");
             goto fail;
         }
-        if (parse_uint_full(qp->p[i].value, &val, 0)) {
-            error_setg(errp, "Illegal value for NFS parameter: %s",
-                       qp->p[i].name);
-            goto fail;
+        client->readahead = qemu_opt_get_number(opts, "readahead", 0);
+        if (client->readahead > QEMU_NFS_MAX_READAHEAD_SIZE) {
+            error_report("NFS Warning: Truncating NFS readahead "
+                         "size to %d", QEMU_NFS_MAX_READAHEAD_SIZE);
+            client->readahead = QEMU_NFS_MAX_READAHEAD_SIZE;
         }
-        if (!strcmp(qp->p[i].name, "uid")) {
-            nfs_set_uid(client->context, val);
-        } else if (!strcmp(qp->p[i].name, "gid")) {
-            nfs_set_gid(client->context, val);
-        } else if (!strcmp(qp->p[i].name, "tcp-syncnt")) {
-            nfs_set_tcp_syncnt(client->context, val);
-#ifdef LIBNFS_FEATURE_READAHEAD
-        } else if (!strcmp(qp->p[i].name, "readahead")) {
-            if (open_flags & BDRV_O_NOCACHE) {
-                error_setg(errp, "Cannot enable NFS readahead "
-                                 "if cache.direct = on");
-                goto fail;
-            }
-            if (val > QEMU_NFS_MAX_READAHEAD_SIZE) {
-                error_report("NFS Warning: Truncating NFS readahead"
-                             " size to %d", QEMU_NFS_MAX_READAHEAD_SIZE);
-                val = QEMU_NFS_MAX_READAHEAD_SIZE;
-            }
-            nfs_set_readahead(client->context, val);
+        nfs_set_readahead(client->context, client->readahead);
 #ifdef LIBNFS_FEATURE_PAGECACHE
-            nfs_set_pagecache_ttl(client->context, 0);
+        nfs_set_pagecache_ttl(client->context, 0);
 #endif
-            client->cache_used = true;
+        client->cache_used = true;
+    }
 #endif
+
 #ifdef LIBNFS_FEATURE_PAGECACHE
-            nfs_set_pagecache_ttl(client->context, 0);
-        } else if (!strcmp(qp->p[i].name, "pagecache")) {
-            if (open_flags & BDRV_O_NOCACHE) {
-                error_setg(errp, "Cannot enable NFS pagecache "
-                                 "if cache.direct = on");
-                goto fail;
-            }
-            if (val > QEMU_NFS_MAX_PAGECACHE_SIZE) {
-                error_report("NFS Warning: Truncating NFS pagecache"
-                             " size to %d pages", QEMU_NFS_MAX_PAGECACHE_SIZE);
-                val = QEMU_NFS_MAX_PAGECACHE_SIZE;
-            }
-            nfs_set_pagecache(client->context, val);
-            nfs_set_pagecache_ttl(client->context, 0);
-            client->cache_used = true;
+    if (qemu_opt_get(opts, "pagecache")) {
+        if (open_flags & BDRV_O_NOCACHE) {
+            error_setg(errp, "Cannot enable NFS pagecache "
+                             "if cache.direct = on");
+            goto fail;
+        }
+        client->pagecache = qemu_opt_get_number(opts, "pagecache", 0);
+        if (client->pagecache > QEMU_NFS_MAX_PAGECACHE_SIZE) {
+            error_report("NFS Warning: Truncating NFS pagecache "
+                         "size to %d pages", QEMU_NFS_MAX_PAGECACHE_SIZE);
+            client->pagecache = QEMU_NFS_MAX_PAGECACHE_SIZE;
+        }
+        nfs_set_pagecache(client->context, client->pagecache);
+        nfs_set_pagecache_ttl(client->context, 0);
+        client->cache_used = true;
+    }
 #endif
+
 #ifdef LIBNFS_FEATURE_DEBUG
-        } else if (!strcmp(qp->p[i].name, "debug")) {
-            /* limit the maximum debug level to avoid potential flooding
-             * of our log files. */
-            if (val > QEMU_NFS_MAX_DEBUG_LEVEL) {
-                error_report("NFS Warning: Limiting NFS debug level"
-                             " to %d", QEMU_NFS_MAX_DEBUG_LEVEL);
-                val = QEMU_NFS_MAX_DEBUG_LEVEL;
-            }
-            nfs_set_debug(client->context, val);
-#endif
-        } else {
-            error_setg(errp, "Unknown NFS parameter name: %s",
-                       qp->p[i].name);
-            goto fail;
+    if (qemu_opt_get(opts, "debug")) {
+        client->debug = qemu_opt_get_number(opts, "debug", 0);
+        /* limit the maximum debug level to avoid potential flooding
+         * of our log files. */
+        if (client->debug > QEMU_NFS_MAX_DEBUG_LEVEL) {
+            error_report("NFS Warning: Limiting NFS debug level "
+                         "to %d", QEMU_NFS_MAX_DEBUG_LEVEL);
+            client->debug = QEMU_NFS_MAX_DEBUG_LEVEL;
         }
+        nfs_set_debug(client->context, client->debug);
     }
+#endif
 
-    ret = nfs_mount(client->context, uri->server, uri->path);
+    ret = nfs_mount(client->context, client->server->host, client->path);
     if (ret < 0) {
         error_setg(errp, "Failed to mount nfs share: %s",
                    nfs_get_error(client->context));
@@ -414,13 +613,11 @@ static int64_t nfs_client_open(NFSClient *client, const char *filename,
     client->st_blocks = st.st_blocks;
     client->has_zero_init = S_ISREG(st.st_mode);
     goto out;
+
 fail:
     nfs_client_close(client);
 out:
-    if (qp) {
-        query_params_free(qp);
-    }
-    uri_free(uri);
+    qemu_opts_del(opts);
     g_free(file);
     return ret;
 }
@@ -429,28 +626,17 @@ static int nfs_file_open(BlockDriverState *bs, QDict *options, int flags,
                          Error **errp) {
     NFSClient *client = bs->opaque;
     int64_t ret;
-    QemuOpts *opts;
-    Error *local_err = NULL;
 
     client->aio_context = bdrv_get_aio_context(bs);
 
-    opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
-    qemu_opts_absorb_qdict(opts, options, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
-        ret = -EINVAL;
-        goto out;
-    }
-    ret = nfs_client_open(client, qemu_opt_get(opts, "filename"),
+    ret = nfs_client_open(client, options,
                           (flags & BDRV_O_RDWR) ? O_RDWR : O_RDONLY,
                           errp, bs->open_flags);
     if (ret < 0) {
-        goto out;
+        return ret;
     }
     bs->total_sectors = ret;
     ret = 0;
-out:
-    qemu_opts_del(opts);
     return ret;
 }
 
@@ -472,6 +658,7 @@ static int nfs_file_create(const char *url, QemuOpts *opts, Error **errp)
     int ret = 0;
     int64_t total_size = 0;
     NFSClient *client = g_new0(NFSClient, 1);
+    QDict *options = NULL;
 
     client->aio_context = qemu_get_aio_context();
 
@@ -479,7 +666,13 @@ static int nfs_file_create(const char *url, QemuOpts *opts, Error **errp)
     total_size = ROUND_UP(qemu_opt_get_size_del(opts, BLOCK_OPT_SIZE, 0),
                           BDRV_SECTOR_SIZE);
 
-    ret = nfs_client_open(client, url, O_CREAT, errp, 0);
+    options = qdict_new();
+    ret = nfs_parse_uri(url, options, errp);
+    if (ret < 0) {
+        goto out;
+    }
+
+    ret = nfs_client_open(client, options, O_CREAT, errp, 0);
     if (ret < 0) {
         goto out;
     }
@@ -561,6 +754,56 @@ static int nfs_reopen_prepare(BDRVReopenState *state,
     return 0;
 }
 
+static void nfs_refresh_filename(BlockDriverState *bs, QDict *options)
+{
+    NFSClient *client = bs->opaque;
+    QDict *opts = qdict_new();
+    QObject *server_qdict;
+    Visitor *ov;
+
+    qdict_put(opts, "driver", qstring_from_str("nfs"));
+
+    snprintf(bs->exact_filename, sizeof(bs->exact_filename),
+             "nfs://%s/%s?uid=%" PRId64 "&gid=%" PRId64 "&tcp-syncnt=%" PRId64
+             "&readahead=%" PRId64 "&pagecache=%" PRId64 "&debug=%" PRId64,
+             client->server->host, client->path, client->uid, client->gid,
+             client->tcp_syncnt, client->readahead, client->pagecache,
+             client->debug);
+
+    ov = qobject_output_visitor_new(&server_qdict);
+    visit_type_NFSServer(ov, NULL, &client->server, &error_abort);
+    visit_complete(ov, &client->server);
+    assert(qobject_type(server_qdict) == QTYPE_QDICT);
+
+    qdict_put_obj(opts, "server", server_qdict);
+    qdict_put(opts, "path", qstring_from_str(client->path));
+
+    if (client->uid) {
+        qdict_put(opts, "uid", qint_from_int(client->uid));
+    }
+    if (client->gid) {
+        qdict_put(opts, "gid", qint_from_int(client->gid));
+    }
+    if (client->tcp_syncnt) {
+        qdict_put(opts, "tcp-syncnt",
+                      qint_from_int(client->tcp_syncnt));
+    }
+    if (client->readahead) {
+        qdict_put(opts, "readahead",
+                      qint_from_int(client->readahead));
+    }
+    if (client->pagecache) {
+        qdict_put(opts, "pagecache",
+                      qint_from_int(client->pagecache));
+    }
+    if (client->debug) {
+        qdict_put(opts, "debug", qint_from_int(client->debug));
+    }
+
+    qdict_flatten(opts);
+    bs->full_open_options = opts;
+}
+
 #ifdef LIBNFS_FEATURE_PAGECACHE
 static void nfs_invalidate_cache(BlockDriverState *bs,
                                  Error **errp)
@@ -575,7 +818,7 @@ static BlockDriver bdrv_nfs = {
     .protocol_name                  = "nfs",
 
     .instance_size                  = sizeof(NFSClient),
-    .bdrv_needs_filename            = true,
+    .bdrv_parse_filename            = nfs_parse_filename,
     .create_opts                    = &nfs_create_opts,
 
     .bdrv_has_zero_init             = nfs_has_zero_init,
@@ -593,6 +836,7 @@ static BlockDriver bdrv_nfs = {
 
     .bdrv_detach_aio_context        = nfs_detach_aio_context,
     .bdrv_attach_aio_context        = nfs_attach_aio_context,
+    .bdrv_refresh_filename          = nfs_refresh_filename,
 
 #ifdef LIBNFS_FEATURE_PAGECACHE
     .bdrv_invalidate_cache          = nfs_invalidate_cache,
-- 
2.6.2

* [Qemu-devel] [PATCH v5 2/2] qapi: allow blockdev-add for NFS
  2016-10-28 17:09 [Qemu-devel] [PATCH v5 0/2] allow blockdev-add for NFS Ashijeet Acharya
  2016-10-28 17:09 ` [Qemu-devel] [PATCH v5 1/2] block/nfs: Introduce runtime_opts in NFS Ashijeet Acharya
@ 2016-10-28 17:09 ` Ashijeet Acharya
  2016-10-28 20:08   ` Eric Blake
  1 sibling, 1 reply; 5+ messages in thread
From: Ashijeet Acharya @ 2016-10-28 17:09 UTC (permalink / raw)
  To: kwolf
  Cc: eblake, pl, jcody, mreitz, armbru, qemu-devel, qemu-block,
	Ashijeet Acharya

Introduce a new object 'BlockdevOptionsNfs' in qapi/block-core.json to
support blockdev-add for the NFS network protocol driver. Also add a new
struct NFSServer to support TCP connections.

Signed-off-by: Ashijeet Acharya <ashijeetacharya@gmail.com>
---
 qapi/block-core.json | 74 +++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 71 insertions(+), 3 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index 3592a9d..c0c9b48 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1714,14 +1714,14 @@
 #
 # @host_device, @host_cdrom: Since 2.1
 # @gluster: Since 2.7
-# @nbd, @ssh: Since 2.8
+# @nbd, @ssh, @nfs: Since 2.8
 #
 # Since: 2.0
 ##
 { 'enum': 'BlockdevDriver',
   'data': [ 'archipelago', 'blkdebug', 'blkverify', 'bochs', 'cloop',
             'dmg', 'file', 'ftp', 'ftps', 'gluster', 'host_cdrom',
-            'host_device', 'http', 'https', 'luks', 'nbd', 'null-aio',
+            'host_device', 'http', 'https', 'luks', 'nbd', 'nfs', 'null-aio',
             'null-co', 'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw',
             'replication', 'ssh', 'tftp', 'vdi', 'vhdx', 'vmdk', 'vpc',
             'vvfat' ] }
@@ -2240,6 +2240,74 @@
             '*top-id': 'str' } }
 
 ##
+# @NFSTransport
+#
+# An enumeration of NFS transport types
+#
+# @inet:        TCP transport
+#
+# Since 2.8
+##
+{ 'enum': 'NFSTransport',
+  'data': [ 'inet' ] }
+
+##
+# @NFSServer
+#
+# Captures the address of the socket
+#
+# @type:        transport type used for NFS (only TCP supported)
+#
+# @host:        host address for NFS server
+#
+# Since 2.8
+##
+{ 'struct': 'NFSServer',
+  'data': { 'type': 'NFSTransport',
+            'host': 'str' } }
+
+##
+# @BlockdevOptionsNfs
+#
+# Driver specific block device option for NFS
+#
+# @server:                  host address
+#
+# @path:                    path of the image on the host
+#
+# @user:                    #optional UID value to use when talking to the
+#                           server (defaults to 65534 on Windows and getuid()
+#                           on unix)
+#
+# @group:                   #optional GID value to use when talking to the
+#                           server (defaults to 65534 on Windows and getgid()
+#                           in unix)
+#
+# @tcp-syn-count:           #optional number of SYNs during the session
+#                           establishment (defaults to libnfs default)
+#
+# @readahead-size:          #optional set the readahead size in bytes (defaults
+#                           to libnfs default)
+#
+# @page-cache-size:         #optional set the pagecache size in bytes (defaults
+#                           to libnfs default)
+#
+# @debug-level:             #optional set the NFS debug level (max 2) (defaults
+#                           to libnfs default)
+#
+# Since 2.8
+##
+{ 'struct': 'BlockdevOptionsNfs',
+  'data': { 'server': 'NFSServer',
+            'path': 'str',
+            '*user': 'int',
+            '*group': 'int',
+            '*tcp-syn-count': 'int',
+            '*readahead-size': 'int',
+            '*page-cache-size': 'int',
+            '*debug-level': 'int' } }
+
+##
 # @BlockdevOptionsCurl
 #
 # Driver specific block device options for the curl backend.
@@ -2315,7 +2383,7 @@
 # TODO iscsi: Wait for structured options
       'luks':       'BlockdevOptionsLUKS',
       'nbd':        'BlockdevOptionsNbd',
-# TODO nfs: Wait for structured options
+      'nfs':        'BlockdevOptionsNfs',
       'null-aio':   'BlockdevOptionsNull',
       'null-co':    'BlockdevOptionsNull',
       'parallels':  'BlockdevOptionsGenericFormat',
-- 
2.6.2

* Re: [Qemu-devel] [PATCH v5 2/2] qapi: allow blockdev-add for NFS
  2016-10-28 17:09 ` [Qemu-devel] [PATCH v5 2/2] qapi: allow blockdev-add for NFS Ashijeet Acharya
@ 2016-10-28 20:08   ` Eric Blake
  0 siblings, 0 replies; 5+ messages in thread
From: Eric Blake @ 2016-10-28 20:08 UTC (permalink / raw)
  To: Ashijeet Acharya, kwolf; +Cc: pl, jcody, mreitz, armbru, qemu-devel, qemu-block


On 10/28/2016 12:09 PM, Ashijeet Acharya wrote:
> Introduce a new object 'BlockdevOptionsNfs' in qapi/block-core.json to
> support blockdev-add for the NFS network protocol driver. Also add a new
> struct NFSServer to support TCP connections.
> 
> Signed-off-by: Ashijeet Acharya <ashijeetacharya@gmail.com>
> ---
>  qapi/block-core.json | 74 +++++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 71 insertions(+), 3 deletions(-)
> 
> diff --git a/qapi/block-core.json b/qapi/block-core.json
> index 3592a9d..c0c9b48 100644
> --- a/qapi/block-core.json
> +++ b/qapi/block-core.json
> @@ -1714,14 +1714,14 @@
>  #
>  # @host_device, @host_cdrom: Since 2.1
>  # @gluster: Since 2.7
> -# @nbd, @ssh: Since 2.8
> +# @nbd, @ssh, @nfs: Since 2.8

Might be nice to keep the list alphabetical per release (nfs before
ssh).  Also minor conflict with a patch to mention 'replication' in this
list.


>  
>  ##
> +# @NFSTransport
> +#
> +# An enumeration of NFS transport types
> +#
> +# @inet:        TCP transport
> +#
> +# Since 2.8
> +##
> +{ 'enum': 'NFSTransport',
> +  'data': [ 'inet' ] }
> +
> +##
> +# @NFSServer
> +#
> +# Captures the address of the socket

Might be worth a mention that this type is explicitly modeled to be a
subset of SocketAddress and/or GlusterServer (except see below - I'm not
sure it is a subset of either).

> +#
> +# @type:        transport type used for NFS (only TCP supported)
> +#
> +# @host:        host address for NFS server
> +#
> +# Since 2.8
> +##
> +{ 'struct': 'NFSServer',
> +  'data': { 'type': 'NFSTransport',
> +            'host': 'str' } }

Uggh. We used 'type':'inet' in SocketAddress, but 'type':'tcp' for
GlusterServer.  Compared to SocketAddress (the 'inet' branch), we have
no 'port' or 'to' parameter, and no optional 'ipv4' or 'ipv6' bools.  Do
we NEED to be able to distinguish between ipv4 and ipv6 addresses?

Did we really botch GlusterServer in the 2.7 release?

> +
> +##
> +# @BlockdevOptionsNfs
> +#
> +# Driver specific block device option for NFS
> +#
> +# @server:                  host address
> +#
> +# @path:                    path of the image on the host
> +#
> +# @user:                    #optional UID value to use when talking to the
> +#                           server (defaults to 65534 on Windows and getuid()
> +#                           on unix)
> +#
> +# @group:                   #optional GID value to use when talking to the
> +#                           server (defaults to 65534 on Windows and getgid()
> +#                           in unix)
> +#
> +# @tcp-syn-count:           #optional number of SYNs during the session
> +#                           establishment (defaults to libnfs default)
> +#
> +# @readahead-size:          #optional set the readahead size in bytes (defaults
> +#                           to libnfs default)
> +#
> +# @page-cache-size:         #optional set the pagecache size in bytes (defaults
> +#                           to libnfs default)
> +#
> +# @debug-level:             #optional set the NFS debug level (max 2) (defaults
> +#                           to libnfs default)
> +#
> +# Since 2.8
> +##
> +{ 'struct': 'BlockdevOptionsNfs',
> +  'data': { 'server': 'NFSServer',
> +            'path': 'str',
> +            '*user': 'int',
> +            '*group': 'int',
> +            '*tcp-syn-count': 'int',
> +            '*readahead-size': 'int',
> +            '*page-cache-size': 'int',
> +            '*debug-level': 'int' } }

This looks good.

> +
> +##
>  # @BlockdevOptionsCurl
>  #
>  # Driver specific block device options for the curl backend.
> @@ -2315,7 +2383,7 @@
>  # TODO iscsi: Wait for structured options
>        'luks':       'BlockdevOptionsLUKS',
>        'nbd':        'BlockdevOptionsNbd',
> -# TODO nfs: Wait for structured options
> +      'nfs':        'BlockdevOptionsNfs',
>        'null-aio':   'BlockdevOptionsNull',
>        'null-co':    'BlockdevOptionsNull',
>        'parallels':  'BlockdevOptionsGenericFormat',
> 

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org



* Re: [Qemu-devel] [PATCH v5 1/2] block/nfs: Introduce runtime_opts in NFS
  2016-10-28 17:09 ` [Qemu-devel] [PATCH v5 1/2] block/nfs: Introduce runtime_opts in NFS Ashijeet Acharya
@ 2016-10-31 11:53   ` Kevin Wolf
  0 siblings, 0 replies; 5+ messages in thread
From: Kevin Wolf @ 2016-10-31 11:53 UTC (permalink / raw)
  To: Ashijeet Acharya
  Cc: eblake, pl, jcody, mreitz, armbru, qemu-devel, qemu-block

On 28.10.2016 at 19:09, Ashijeet Acharya wrote:
> Make the NFS block driver use various fine-grained runtime_opts.
> Set .bdrv_parse_filename() to nfs_parse_filename() and introduce two
> new functions, nfs_parse_filename() and nfs_parse_uri(), to help parse
> the URI.
> Add a new option "server" which then accepts a new struct NFSServer.
> "host" is supported as a legacy option and is mapped to its NFSServer
> representation.

This sentence isn't up to date any more and should be removed.

> Signed-off-by: Ashijeet Acharya <ashijeetacharya@gmail.com>
> ---
>  block/nfs.c | 430 +++++++++++++++++++++++++++++++++++++++++++++++-------------
>  1 file changed, 337 insertions(+), 93 deletions(-)

> +static void nfs_refresh_filename(BlockDriverState *bs, QDict *options)
> +{
> +    NFSClient *client = bs->opaque;
> +    QDict *opts = qdict_new();
> +    QObject *server_qdict;
> +    Visitor *ov;
> +
> +    qdict_put(opts, "driver", qstring_from_str("nfs"));
> +
> +    snprintf(bs->exact_filename, sizeof(bs->exact_filename),
> +             "nfs://%s/%s?uid=%" PRId64 "&gid=%" PRId64 "&tcp-syncnt=%" PRId64
> +             "&readahead=%" PRId64 "&pagecache=%" PRId64 "&debug=%" PRId64,
> +             client->server->host, client->path, client->uid, client->gid,
> +             client->tcp_syncnt, client->readahead, client->pagecache,
> +             client->debug);

bs->exact_filename should contain only the parameters that are actually
necessary to identify the image, but not general runtime options. This
is what the comment for bdrv_refresh_filename() says:

 *  - exact_filename: A filename which may be used for opening a block device
 *                    which (mostly) equals the given BDS (even without any
 *                    other options; so reading and writing must return the same
 *                    results, but caching etc. may be different)

So I believe you can remove most of the query parameters here.

'uid' and 'gid' aren't completely clear; they could possibly be argued
to be part of identifying the image, but we should still omit them if
the defaults were used.

More importantly, the path you're returning doesn't include the
filename! Also, the double slash between host and path doesn't look
perfect:

    $ ./qemu-img info nfs://localhost/home/kwolf/images/hd.img
    image: nfs://localhost//home/kwolf/images?uid=0&gid=0&tcp-syncnt=0&readahead=0&pagecache=0&debug=0
    file format: raw
    virtual size: 64M (67108864 bytes)
    disk size: 64M
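
A minimal sketch of that reduction (assuming client->path is changed to
keep the file name, and treating uid/gid that were never specified as 0)
could look roughly like this:

    /* Sketch only: emit just what identifies the image.  client->path is
     * assumed to still contain the file name; tcp-syncnt, readahead,
     * pagecache and debug are runtime tuning and are left out.  Note
     * "nfs://%s%s": client->path already starts with '/', so no extra
     * slash is needed between host and path. */
    if (client->uid || client->gid) {
        snprintf(bs->exact_filename, sizeof(bs->exact_filename),
                 "nfs://%s%s?uid=%" PRId64 "&gid=%" PRId64,
                 client->server->host, client->path,
                 client->uid, client->gid);
    } else {
        snprintf(bs->exact_filename, sizeof(bs->exact_filename),
                 "nfs://%s%s", client->server->host, client->path);
    }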

> +    ov = qobject_output_visitor_new(&server_qdict);

The visitor is leaked; you're missing a visit_free().

> +    visit_type_NFSServer(ov, NULL, &client->server, &error_abort);
> +    visit_complete(ov, &client->server);
> +    assert(qobject_type(server_qdict) == QTYPE_QDICT);

This even fails an assertion for me; I got the above result only after
fixing this:

    $ ./qemu-img info nfs://localhost/home/kwolf/images/hd.img
    qemu-img: qapi/qobject-output-visitor.c:206: qobject_output_complete: Assertion `opaque == qov->result' failed.
    Aborted (core dumped)

The reason is that visit_complete() should get &server_qdict rather than
&client->server.
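
That is, presumably something like this (untested sketch, reusing the
variables from nfs_refresh_filename() above):

    ov = qobject_output_visitor_new(&server_qdict);
    visit_type_NFSServer(ov, NULL, &client->server, &error_abort);
    visit_complete(ov, &server_qdict);  /* the result pointer, not &client->server */
    visit_free(ov);                     /* plug the leak mentioned above */
    assert(qobject_type(server_qdict) == QTYPE_QDICT);
    qdict_put_obj(opts, "server", server_qdict);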

Kevin
