* [PATCH v6 00/17] qemu patches for 64-bit NBD extensions
@ 2023-08-29 17:58 Eric Blake
  2023-08-29 17:58 ` [PATCH v6 01/17] nbd: Replace bool structured_reply with mode enum Eric Blake
                   ` (16 more replies)
  0 siblings, 17 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov

v5 was here:
https://lists.gnu.org/archive/html/qemu-devel/2023-08/msg01883.html

Since then, just minor changes:
001/17:[----] [--] 'nbd: Replace bool structured_reply with mode enum'
002/17:[----] [--] 'nbd/client: Pass mode through to nbd_send_request'
003/17:[----] [--] 'nbd: Add types for extended headers'
004/17:[0006] [FC] 'nbd: Prepare for 64-bit request effect lengths'
005/17:[0004] [FC] 'nbd/server: Refactor handling of command sanity checks'
006/17:[----] [--] 'nbd/server: Support a request payload'
007/17:[0004] [FC] 'nbd/server: Prepare to receive extended header requests'
008/17:[----] [--] 'nbd/server: Prepare to send extended header replies'
009/17:[----] [--] 'nbd/server: Support 64-bit block status'
010/17:[----] [--] 'nbd/server: Enable initial support for extended headers'
011/17:[----] [--] 'nbd/client: Plumb errp through nbd_receive_replies'
012/17:[----] [--] 'nbd/client: Initial support for extended headers'
013/17:[----] [--] 'nbd/client: Accept 64-bit block status chunks'
014/17:[----] [--] 'nbd/client: Request extended headers during negotiation'
015/17:[----] [--] 'nbd/server: Refactor list of negotiated meta contexts'
016/17:[----] [--] 'nbd/server: Prepare for per-request filtering of BLOCK_STATUS'
017/17:[0002] [FC] 'nbd/server: Add FLAG_PAYLOAD support to CMD_BLOCK_STATUS'

The following patches still need review (all others already have
R-b); I plan to send this in a pull request soon:

1, 4, 5, 6, 11, 12, 16, 17

Eric Blake (17):
  nbd: Replace bool structured_reply with mode enum
  nbd/client: Pass mode through to nbd_send_request
  nbd: Add types for extended headers
  nbd: Prepare for 64-bit request effect lengths
  nbd/server: Refactor handling of command sanity checks
  nbd/server: Support a request payload
  nbd/server: Prepare to receive extended header requests
  nbd/server: Prepare to send extended header replies
  nbd/server: Support 64-bit block status
  nbd/server: Enable initial support for extended headers
  nbd/client: Plumb errp through nbd_receive_replies
  nbd/client: Initial support for extended headers
  nbd/client: Accept 64-bit block status chunks
  nbd/client: Request extended headers during negotiation
  nbd/server: Refactor list of negotiated meta contexts
  nbd/server: Prepare for per-request filtering of BLOCK_STATUS
  nbd/server: Add FLAG_PAYLOAD support to CMD_BLOCK_STATUS

 docs/interop/nbd.txt                          |   1 +
 include/block/nbd.h                           | 147 +++--
 nbd/nbd-internal.h                            |   8 +-
 block/nbd.c                                   | 105 +++-
 nbd/client-connection.c                       |   4 +-
 nbd/client.c                                  | 140 +++--
 nbd/common.c                                  |  12 +-
 nbd/server.c                                  | 560 +++++++++++++-----
 qemu-nbd.c                                    |   8 +-
 block/trace-events                            |   3 +-
 nbd/trace-events                              |  19 +-
 tests/qemu-iotests/223.out                    |  18 +-
 tests/qemu-iotests/233.out                    |   4 +
 tests/qemu-iotests/241.out                    |   3 +
 tests/qemu-iotests/307.out                    |  15 +-
 .../tests/nbd-qemu-allocation.out             |   3 +-
 16 files changed, 763 insertions(+), 287 deletions(-)

-- 
2.41.0




* [PATCH v6 01/17] nbd: Replace bool structured_reply with mode enum
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-09-04 15:23   ` Vladimir Sementsov-Ogievskiy
  2023-08-29 17:58 ` [PATCH v6 02/17] nbd/client: Pass mode through to nbd_send_request Eric Blake
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov, Kevin Wolf, Hanna Reitz

The upcoming patches for 64-bit extensions require various points in
the protocol to make decisions based on what was negotiated.  While we
could easily add a 'bool extended_headers' alongside the existing
'bool structured_reply', this does not scale well if more modes are
added in the future.  Better is to expose the mode enum added in the
recent commit bfe04d0a7d for wider use in the code base.

Where the code previously checked for structured_reply being set or
clear, it now prefers checking for an inequality; this works because
the modes are in a continuum of increasing abilities, and allows us to
touch fewer places if we ever insert other modes in the middle of the
enum.  There should be no semantic change in this patch.
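
As a quick illustration of the ordering idea (can_send_structured_reply
is a made-up name, not something this patch adds), an ordered enum lets
a later mode such as extended headers satisfy every existing
"structured or better" check without touching it:

    /* Illustration only: true for NBD_MODE_STRUCTURED and anything newer */
    static bool can_send_structured_reply(NBDMode mode)
    {
        return mode >= NBD_MODE_STRUCTURED;
    }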

Signed-off-by: Eric Blake <eblake@redhat.com>
---

v5: rebase to master, populate correct mode during server handshake
[Vladimir], fix stray comment leaked in commit 66d4f4fe

v4: new patch, expanding enum idea from v3 4/14
---
 include/block/nbd.h     |  2 +-
 block/nbd.c             |  8 +++++---
 nbd/client-connection.c |  4 ++--
 nbd/client.c            | 18 +++++++++---------
 nbd/server.c            | 31 ++++++++++++++++++-------------
 qemu-nbd.c              |  4 +++-
 6 files changed, 38 insertions(+), 29 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 4428bcffbb9..abf6030b513 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -305,7 +305,7 @@ typedef struct NBDExportInfo {

     /* In-out fields, set by client before nbd_receive_negotiate() and
      * updated by server results during nbd_receive_negotiate() */
-    bool structured_reply;
+    NBDMode mode; /* input maximum mode tolerated; output actual mode chosen */
     bool base_allocation; /* base:allocation context for NBD_CMD_BLOCK_STATUS */

     /* Set by server results during nbd_receive_negotiate() and
diff --git a/block/nbd.c b/block/nbd.c
index 5322e66166c..5f88f7a819b 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -464,7 +464,8 @@ static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t cookie)
             nbd_channel_error(s, ret);
             return ret;
         }
-        if (nbd_reply_is_structured(&s->reply) && !s->info.structured_reply) {
+        if (nbd_reply_is_structured(&s->reply) &&
+            s->info.mode < NBD_MODE_STRUCTURED) {
             nbd_channel_error(s, -EINVAL);
             return -EINVAL;
         }
@@ -867,7 +868,7 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
     }

     /* handle structured reply chunk */
-    assert(s->info.structured_reply);
+    assert(s->info.mode >= NBD_MODE_STRUCTURED);
     chunk = &s->reply.structured;

     if (chunk->type == NBD_REPLY_TYPE_NONE) {
@@ -1071,7 +1072,8 @@ nbd_co_receive_cmdread_reply(BDRVNBDState *s, uint64_t cookie,
     void *payload = NULL;
     Error *local_err = NULL;

-    NBD_FOREACH_REPLY_CHUNK(s, iter, cookie, s->info.structured_reply,
+    NBD_FOREACH_REPLY_CHUNK(s, iter, cookie,
+                            s->info.mode >= NBD_MODE_STRUCTURED,
                             qiov, &reply, &payload)
     {
         int ret;
diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index 3d14296c042..13e4cb6684b 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -1,5 +1,5 @@
 /*
- * QEMU Block driver for  NBD
+ * QEMU Block driver for NBD
  *
  * Copyright (c) 2021 Virtuozzo International GmbH.
  *
@@ -93,7 +93,7 @@ NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
         .do_negotiation = do_negotiation,

         .initial_info.request_sizes = true,
-        .initial_info.structured_reply = true,
+        .initial_info.mode = NBD_MODE_STRUCTURED,
         .initial_info.base_allocation = true,
         .initial_info.x_dirty_bitmap = g_strdup(x_dirty_bitmap),
         .initial_info.name = g_strdup(export_name ?: "")
diff --git a/nbd/client.c b/nbd/client.c
index 479208d5d9d..faa054c4527 100644
--- a/nbd/client.c
+++ b/nbd/client.c
@@ -880,7 +880,7 @@ static int nbd_list_meta_contexts(QIOChannel *ioc,
 static int nbd_start_negotiate(AioContext *aio_context, QIOChannel *ioc,
                                QCryptoTLSCreds *tlscreds,
                                const char *hostname, QIOChannel **outioc,
-                               bool structured_reply, bool *zeroes,
+                               NBDMode max_mode, bool *zeroes,
                                Error **errp)
 {
     ERRP_GUARD();
@@ -958,7 +958,7 @@ static int nbd_start_negotiate(AioContext *aio_context, QIOChannel *ioc,
         if (fixedNewStyle) {
             int result = 0;

-            if (structured_reply) {
+            if (max_mode >= NBD_MODE_STRUCTURED) {
                 result = nbd_request_simple_option(ioc,
                                                    NBD_OPT_STRUCTURED_REPLY,
                                                    false, errp);
@@ -1028,20 +1028,19 @@ int nbd_receive_negotiate(AioContext *aio_context, QIOChannel *ioc,
     trace_nbd_receive_negotiate_name(info->name);

     result = nbd_start_negotiate(aio_context, ioc, tlscreds, hostname, outioc,
-                                 info->structured_reply, &zeroes, errp);
+                                 info->mode, &zeroes, errp);
     if (result < 0) {
         return result;
     }

-    info->structured_reply = false;
+    info->mode = result;
     info->base_allocation = false;
     if (tlscreds && *outioc) {
         ioc = *outioc;
     }

-    switch ((NBDMode)result) {
+    switch (info->mode) {
     case NBD_MODE_STRUCTURED:
-        info->structured_reply = true;
         if (base_allocation) {
             result = nbd_negotiate_simple_meta_context(ioc, info, errp);
             if (result < 0) {
@@ -1150,8 +1149,8 @@ int nbd_receive_export_list(QIOChannel *ioc, QCryptoTLSCreds *tlscreds,
     QIOChannel *sioc = NULL;

     *info = NULL;
-    result = nbd_start_negotiate(NULL, ioc, tlscreds, hostname, &sioc, true,
-                                 NULL, errp);
+    result = nbd_start_negotiate(NULL, ioc, tlscreds, hostname, &sioc,
+                                 NBD_MODE_STRUCTURED, NULL, errp);
     if (tlscreds && sioc) {
         ioc = sioc;
     }
@@ -1182,7 +1181,7 @@ int nbd_receive_export_list(QIOChannel *ioc, QCryptoTLSCreds *tlscreds,
             memset(&array[count - 1], 0, sizeof(*array));
             array[count - 1].name = name;
             array[count - 1].description = desc;
-            array[count - 1].structured_reply = result == NBD_MODE_STRUCTURED;
+            array[count - 1].mode = result;
         }

         for (i = 0; i < count; i++) {
@@ -1215,6 +1214,7 @@ int nbd_receive_export_list(QIOChannel *ioc, QCryptoTLSCreds *tlscreds,
         /* Lone export name is implied, but we can parse length and flags */
         array = g_new0(NBDExportInfo, 1);
         array->name = g_strdup("");
+        array->mode = NBD_MODE_OLDSTYLE;
         count = 1;

         if (nbd_negotiate_finish_oldstyle(ioc, array, errp) < 0) {
diff --git a/nbd/server.c b/nbd/server.c
index 8486b64b15d..f4a7a169ae3 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -143,7 +143,7 @@ struct NBDClient {

     uint32_t check_align; /* If non-zero, check for aligned client requests */

-    bool structured_reply;
+    NBDMode mode;
     NBDExportMetaContexts export_meta;

     uint32_t opt; /* Current option being negotiated */
@@ -502,7 +502,7 @@ static int nbd_negotiate_handle_export_name(NBDClient *client, bool no_zeroes,
     }

     myflags = client->exp->nbdflags;
-    if (client->structured_reply) {
+    if (client->mode >= NBD_MODE_STRUCTURED) {
         myflags |= NBD_FLAG_SEND_DF;
     }
     trace_nbd_negotiate_new_style_size_flags(client->exp->size, myflags);
@@ -687,7 +687,7 @@ static int nbd_negotiate_handle_info(NBDClient *client, Error **errp)

     /* Send NBD_INFO_EXPORT always */
     myflags = exp->nbdflags;
-    if (client->structured_reply) {
+    if (client->mode >= NBD_MODE_STRUCTURED) {
         myflags |= NBD_FLAG_SEND_DF;
     }
     trace_nbd_negotiate_new_style_size_flags(exp->size, myflags);
@@ -985,7 +985,8 @@ static int nbd_negotiate_meta_queries(NBDClient *client,
     size_t i;
     size_t count = 0;

-    if (client->opt == NBD_OPT_SET_META_CONTEXT && !client->structured_reply) {
+    if (client->opt == NBD_OPT_SET_META_CONTEXT &&
+        client->mode < NBD_MODE_STRUCTURED) {
         return nbd_opt_invalid(client, errp,
                                "request option '%s' when structured reply "
                                "is not negotiated",
@@ -1122,10 +1123,12 @@ static int nbd_negotiate_options(NBDClient *client, Error **errp)
     if (nbd_read32(client->ioc, &flags, "flags", errp) < 0) {
         return -EIO;
     }
+    client->mode = NBD_MODE_EXPORT_NAME;
     trace_nbd_negotiate_options_flags(flags);
     if (flags & NBD_FLAG_C_FIXED_NEWSTYLE) {
         fixedNewstyle = true;
         flags &= ~NBD_FLAG_C_FIXED_NEWSTYLE;
+        client->mode = NBD_MODE_SIMPLE;
     }
     if (flags & NBD_FLAG_C_NO_ZEROES) {
         no_zeroes = true;
@@ -1261,13 +1264,13 @@ static int nbd_negotiate_options(NBDClient *client, Error **errp)
             case NBD_OPT_STRUCTURED_REPLY:
                 if (length) {
                     ret = nbd_reject_length(client, false, errp);
-                } else if (client->structured_reply) {
+                } else if (client->mode >= NBD_MODE_STRUCTURED) {
                     ret = nbd_negotiate_send_rep_err(
                         client, NBD_REP_ERR_INVALID, errp,
                         "structured reply already negotiated");
                 } else {
                     ret = nbd_negotiate_send_rep(client, NBD_REP_ACK, errp);
-                    client->structured_reply = true;
+                    client->mode = NBD_MODE_STRUCTURED;
                 }
                 break;

@@ -1907,7 +1910,9 @@ static int coroutine_fn nbd_co_send_simple_reply(NBDClient *client,
     };

     assert(!len || !nbd_err);
-    assert(!client->structured_reply || request->type != NBD_CMD_READ);
+    assert(client->mode < NBD_MODE_STRUCTURED ||
+           (client->mode == NBD_MODE_STRUCTURED &&
+            request->type != NBD_CMD_READ));
     trace_nbd_co_send_simple_reply(request->cookie, nbd_err,
                                    nbd_err_lookup(nbd_err), len);
     set_be_simple_reply(&reply, nbd_err, request->cookie);
@@ -1983,7 +1988,7 @@ static int coroutine_fn nbd_co_send_chunk_read(NBDClient *client,

     return nbd_co_send_iov(client, iov, 3, errp);
 }
-/*ebb*/
+
 static int coroutine_fn nbd_co_send_chunk_error(NBDClient *client,
                                                 NBDRequest *request,
                                                 uint32_t error,
@@ -2409,7 +2414,7 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req, NBDRequest *
                                               client->check_align);
     }
     valid_flags = NBD_CMD_FLAG_FUA;
-    if (request->type == NBD_CMD_READ && client->structured_reply) {
+    if (request->type == NBD_CMD_READ && client->mode >= NBD_MODE_STRUCTURED) {
         valid_flags |= NBD_CMD_FLAG_DF;
     } else if (request->type == NBD_CMD_WRITE_ZEROES) {
         valid_flags |= NBD_CMD_FLAG_NO_HOLE | NBD_CMD_FLAG_FAST_ZERO;
@@ -2435,7 +2440,7 @@ static coroutine_fn int nbd_send_generic_reply(NBDClient *client,
                                                const char *error_msg,
                                                Error **errp)
 {
-    if (client->structured_reply && ret < 0) {
+    if (client->mode >= NBD_MODE_STRUCTURED && ret < 0) {
         return nbd_co_send_chunk_error(client, request, -ret, error_msg, errp);
     } else {
         return nbd_co_send_simple_reply(client, request, ret < 0 ? -ret : 0,
@@ -2463,8 +2468,8 @@ static coroutine_fn int nbd_do_cmd_read(NBDClient *client, NBDRequest *request,
         }
     }

-    if (client->structured_reply && !(request->flags & NBD_CMD_FLAG_DF) &&
-        request->len)
+    if (client->mode >= NBD_MODE_STRUCTURED &&
+        !(request->flags & NBD_CMD_FLAG_DF) && request->len)
     {
         return nbd_co_send_sparse_read(client, request, request->from,
                                        data, request->len, errp);
@@ -2476,7 +2481,7 @@ static coroutine_fn int nbd_do_cmd_read(NBDClient *client, NBDRequest *request,
                                       "reading from file failed", errp);
     }

-    if (client->structured_reply) {
+    if (client->mode >= NBD_MODE_STRUCTURED) {
         if (request->len) {
             return nbd_co_send_chunk_read(client, request, request->from, data,
                                           request->len, true, errp);
diff --git a/qemu-nbd.c b/qemu-nbd.c
index aaccaa33184..32c5a349e06 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -280,7 +280,9 @@ struct NbdClientOpts {
 static void *nbd_client_thread(void *arg)
 {
     struct NbdClientOpts *opts = arg;
-    NBDExportInfo info = { .request_sizes = false, .name = g_strdup("") };
+    /* TODO: Revisit this if nbd.ko ever gains support for structured reply */
+    NBDExportInfo info = { .request_sizes = false, .name = g_strdup(""),
+                           .mode = NBD_MODE_SIMPLE };
     QIOChannelSocket *sioc;
     int fd = -1;
     int ret = EXIT_FAILURE;
-- 
2.41.0




* [PATCH v6 02/17] nbd/client: Pass mode through to nbd_send_request
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
  2023-08-29 17:58 ` [PATCH v6 01/17] nbd: Replace bool structured_reply with mode enum Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-08-29 17:58 ` [PATCH v6 03/17] nbd: Add types for extended headers Eric Blake
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov, Kevin Wolf, Hanna Reitz

Once the 64-bit headers extension is enabled, the data layout we send
over the wire for a client request depends on the mode negotiated with
the server.  Rather than adding a parameter to nbd_send_request, we
can add a member to struct NBDRequest, since it already does not
reflect the on-wire format.  Some callers initialize it directly; many
others rely on a common initialization point during
nbd_co_send_request().  At this point, there is no semantic change.
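
A minimal sketch of why carrying the mode inside the request is enough
for the sender to pick a wire layout later (nbd_request_wire_size is a
hypothetical helper, not part of this series, and NBD_MODE_EXTENDED
only appears in a later patch; the 28- and 32-byte values are the sizes
of the NBD protocol's compact and extended request headers):

    /* Hypothetical: choose the on-wire request size from the mode
     * recorded in the request itself, with no extra parameter. */
    static size_t nbd_request_wire_size(const NBDRequest *request)
    {
        return request->mode >= NBD_MODE_EXTENDED ? 32 : 28;
    }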

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---

v5: R-b added
v4: new patch, based on ideas in v3 4/14, but by modifying NBDRequest
instead of adding a parameter
---
 include/block/nbd.h | 12 +++++++-----
 block/nbd.c         |  5 +++--
 nbd/client.c        |  3 ++-
 3 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index abf6030b513..c4cbe130e07 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -63,17 +63,19 @@ typedef enum NBDMode {
     /* TODO add NBD_MODE_EXTENDED */
 } NBDMode;

-/* Transmission phase structs
- *
- * Note: these are _NOT_ the same as the network representation of an NBD
- * request and reply!
+/* Transmission phase structs */
+
+/*
+ * Note: NBDRequest is _NOT_ the same as the network representation of an NBD
+ * request!
  */
 typedef struct NBDRequest {
     uint64_t cookie;
     uint64_t from;
     uint32_t len;
     uint16_t flags; /* NBD_CMD_FLAG_* */
-    uint16_t type; /* NBD_CMD_* */
+    uint16_t type;  /* NBD_CMD_* */
+    NBDMode mode;   /* Determines which network representation to use */
 } NBDRequest;

 typedef struct NBDSimpleReply {
diff --git a/block/nbd.c b/block/nbd.c
index 5f88f7a819b..ca5991f868a 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -339,7 +339,7 @@ int coroutine_fn nbd_co_do_establish_connection(BlockDriverState *bs,
          * We have connected, but must fail for other reasons.
          * Send NBD_CMD_DISC as a courtesy to the server.
          */
-        NBDRequest request = { .type = NBD_CMD_DISC };
+        NBDRequest request = { .type = NBD_CMD_DISC, .mode = s->info.mode };

         nbd_send_request(s->ioc, &request);

@@ -521,6 +521,7 @@ nbd_co_send_request(BlockDriverState *bs, NBDRequest *request,

     qemu_co_mutex_lock(&s->send_mutex);
     request->cookie = INDEX_TO_COOKIE(i);
+    request->mode = s->info.mode;

     assert(s->ioc);

@@ -1466,7 +1467,7 @@ static void nbd_yank(void *opaque)
 static void nbd_client_close(BlockDriverState *bs)
 {
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
-    NBDRequest request = { .type = NBD_CMD_DISC };
+    NBDRequest request = { .type = NBD_CMD_DISC, .mode = s->info.mode };

     if (s->ioc) {
         nbd_send_request(s->ioc, &request);
diff --git a/nbd/client.c b/nbd/client.c
index faa054c4527..40a1eb72346 100644
--- a/nbd/client.c
+++ b/nbd/client.c
@@ -1224,7 +1224,7 @@ int nbd_receive_export_list(QIOChannel *ioc, QCryptoTLSCreds *tlscreds,
         /* Send NBD_CMD_DISC as a courtesy to the server, but ignore all
          * errors now that we have the information we wanted. */
         if (nbd_drop(ioc, 124, NULL) == 0) {
-            NBDRequest request = { .type = NBD_CMD_DISC };
+            NBDRequest request = { .type = NBD_CMD_DISC, .mode = result };

             nbd_send_request(ioc, &request);
         }
@@ -1354,6 +1354,7 @@ int nbd_send_request(QIOChannel *ioc, NBDRequest *request)
 {
     uint8_t buf[NBD_REQUEST_SIZE];

+    assert(request->mode <= NBD_MODE_STRUCTURED); /* TODO handle extended */
     trace_nbd_send_request(request->from, request->len, request->cookie,
                            request->flags, request->type,
                            nbd_cmd_lookup(request->type));
-- 
2.41.0




* [PATCH v6 03/17] nbd: Add types for extended headers
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
  2023-08-29 17:58 ` [PATCH v6 01/17] nbd: Replace bool structured_reply with mode enum Eric Blake
  2023-08-29 17:58 ` [PATCH v6 02/17] nbd/client: Pass mode through to nbd_send_request Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-08-29 17:58 ` [PATCH v6 04/17] nbd: Prepare for 64-bit request effect lengths Eric Blake
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov, Kevin Wolf, Hanna Reitz

Add the constants and structs necessary for later patches to start
implementing the NBD_OPT_EXTENDED_HEADERS extension in both the client
and server, matching recent upstream nbd.git (through commit
e6f3b94a934).  This patch does not change any existing behavior, but
merely sets the stage for upcoming patches.

This patch does not change the status quo that neither the client nor
the server uses a packed-struct representation for the request header.
While most of the patch adds new types, there is also some churn from
renaming the existing NBDExtent to NBDExtent32 to contrast it with
NBDExtent64, which I thought was a nicer name than NBDExtentExt.
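
For orientation, the wire sizes implied by the new packed structs; the
QEMU_BUILD_BUG_ON lines below are illustrative only and are not part of
the patch:

    QEMU_BUILD_BUG_ON(sizeof(NBDExtent32) != 8);            /* 32-bit extent */
    QEMU_BUILD_BUG_ON(sizeof(NBDExtent64) != 16);           /* 64-bit extent */
    QEMU_BUILD_BUG_ON(sizeof(NBDExtendedMeta) != 8);        /* context_id + count */
    QEMU_BUILD_BUG_ON(sizeof(NBDExtendedReplyChunk) != 32); /* extended reply header */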

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---

v5: Add R-b

v4: Hoist earlier in series, tweak a few comments, defer docs/interop
change to when feature is actually turned on, NBDExtent rename, add
QEMU_BUG_BUILD_ON for sanity sake, hoist in block status payload bits
from v3 14/14; R-b dropped
---
 include/block/nbd.h | 124 +++++++++++++++++++++++++++++++-------------
 nbd/nbd-internal.h  |   3 +-
 block/nbd.c         |   6 +--
 nbd/common.c        |  12 ++++-
 nbd/server.c        |   6 +--
 5 files changed, 106 insertions(+), 45 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index c4cbe130e07..b2fb8ab44d5 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -60,7 +60,7 @@ typedef enum NBDMode {
     NBD_MODE_EXPORT_NAME,  /* newstyle but only OPT_EXPORT_NAME safe */
     NBD_MODE_SIMPLE,       /* newstyle but only simple replies */
     NBD_MODE_STRUCTURED,   /* newstyle, structured replies enabled */
-    /* TODO add NBD_MODE_EXTENDED */
+    NBD_MODE_EXTENDED,     /* newstyle, extended headers enabled */
 } NBDMode;

 /* Transmission phase structs */
@@ -93,20 +93,36 @@ typedef struct NBDStructuredReplyChunk {
     uint32_t length; /* length of payload */
 } QEMU_PACKED NBDStructuredReplyChunk;

+typedef struct NBDExtendedReplyChunk {
+    uint32_t magic;  /* NBD_EXTENDED_REPLY_MAGIC */
+    uint16_t flags;  /* combination of NBD_REPLY_FLAG_* */
+    uint16_t type;   /* NBD_REPLY_TYPE_* */
+    uint64_t cookie; /* request handle */
+    uint64_t offset; /* request offset */
+    uint64_t length; /* length of payload */
+} QEMU_PACKED NBDExtendedReplyChunk;
+
 typedef union NBDReply {
     NBDSimpleReply simple;
     NBDStructuredReplyChunk structured;
+    NBDExtendedReplyChunk extended;
     struct {
         /*
-         * @magic and @cookie fields have the same offset and size both in
-         * simple reply and structured reply chunk, so let them be accessible
-         * without ".simple." or ".structured." specification
+         * @magic and @cookie fields have the same offset and size in all
+         * forms of replies, so let them be accessible without ".simple.",
+         * ".structured.", or ".extended." specifications.
          */
         uint32_t magic;
         uint32_t _skip;
         uint64_t cookie;
-    } QEMU_PACKED;
+    };
 } NBDReply;
+QEMU_BUILD_BUG_ON(offsetof(NBDReply, simple.cookie) !=
+                  offsetof(NBDReply, cookie));
+QEMU_BUILD_BUG_ON(offsetof(NBDReply, structured.cookie) !=
+                  offsetof(NBDReply, cookie));
+QEMU_BUILD_BUG_ON(offsetof(NBDReply, extended.cookie) !=
+                  offsetof(NBDReply, cookie));

 /* Header of chunk for NBD_REPLY_TYPE_OFFSET_DATA */
 typedef struct NBDStructuredReadData {
@@ -133,14 +149,34 @@ typedef struct NBDStructuredError {
 typedef struct NBDStructuredMeta {
     /* header's length >= 12 (at least one extent) */
     uint32_t context_id;
-    /* extents follows */
+    /* NBDExtent32 extents[] follows, array length implied by header */
 } QEMU_PACKED NBDStructuredMeta;

-/* Extent chunk for NBD_REPLY_TYPE_BLOCK_STATUS */
-typedef struct NBDExtent {
+/* Extent array element for NBD_REPLY_TYPE_BLOCK_STATUS */
+typedef struct NBDExtent32 {
     uint32_t length;
     uint32_t flags; /* NBD_STATE_* */
-} QEMU_PACKED NBDExtent;
+} QEMU_PACKED NBDExtent32;
+
+/* Header of NBD_REPLY_TYPE_BLOCK_STATUS_EXT */
+typedef struct NBDExtendedMeta {
+    /* header's length >= 24 (at least one extent) */
+    uint32_t context_id;
+    uint32_t count; /* header length must be count * 16 + 8 */
+    /* NBDExtent64 extents[count] follows */
+} QEMU_PACKED NBDExtendedMeta;
+
+/* Extent array element for NBD_REPLY_TYPE_BLOCK_STATUS_EXT */
+typedef struct NBDExtent64 {
+    uint64_t length;
+    uint64_t flags; /* NBD_STATE_* */
+} QEMU_PACKED NBDExtent64;
+
+/* Client payload for limiting NBD_CMD_BLOCK_STATUS reply */
+typedef struct NBDBlockStatusPayload {
+    uint64_t effect_length;
+    /* uint32_t ids[] follows, array length implied by header */
+} QEMU_PACKED NBDBlockStatusPayload;

 /* Transmission (export) flags: sent from server to client during handshake,
    but describe what will happen during transmission */
@@ -158,20 +194,22 @@ enum {
     NBD_FLAG_SEND_RESIZE_BIT        =  9, /* Send resize */
     NBD_FLAG_SEND_CACHE_BIT         = 10, /* Send CACHE (prefetch) */
     NBD_FLAG_SEND_FAST_ZERO_BIT     = 11, /* FAST_ZERO flag for WRITE_ZEROES */
+    NBD_FLAG_BLOCK_STAT_PAYLOAD_BIT = 12, /* PAYLOAD flag for BLOCK_STATUS */
 };

-#define NBD_FLAG_HAS_FLAGS         (1 << NBD_FLAG_HAS_FLAGS_BIT)
-#define NBD_FLAG_READ_ONLY         (1 << NBD_FLAG_READ_ONLY_BIT)
-#define NBD_FLAG_SEND_FLUSH        (1 << NBD_FLAG_SEND_FLUSH_BIT)
-#define NBD_FLAG_SEND_FUA          (1 << NBD_FLAG_SEND_FUA_BIT)
-#define NBD_FLAG_ROTATIONAL        (1 << NBD_FLAG_ROTATIONAL_BIT)
-#define NBD_FLAG_SEND_TRIM         (1 << NBD_FLAG_SEND_TRIM_BIT)
-#define NBD_FLAG_SEND_WRITE_ZEROES (1 << NBD_FLAG_SEND_WRITE_ZEROES_BIT)
-#define NBD_FLAG_SEND_DF           (1 << NBD_FLAG_SEND_DF_BIT)
-#define NBD_FLAG_CAN_MULTI_CONN    (1 << NBD_FLAG_CAN_MULTI_CONN_BIT)
-#define NBD_FLAG_SEND_RESIZE       (1 << NBD_FLAG_SEND_RESIZE_BIT)
-#define NBD_FLAG_SEND_CACHE        (1 << NBD_FLAG_SEND_CACHE_BIT)
-#define NBD_FLAG_SEND_FAST_ZERO    (1 << NBD_FLAG_SEND_FAST_ZERO_BIT)
+#define NBD_FLAG_HAS_FLAGS          (1 << NBD_FLAG_HAS_FLAGS_BIT)
+#define NBD_FLAG_READ_ONLY          (1 << NBD_FLAG_READ_ONLY_BIT)
+#define NBD_FLAG_SEND_FLUSH         (1 << NBD_FLAG_SEND_FLUSH_BIT)
+#define NBD_FLAG_SEND_FUA           (1 << NBD_FLAG_SEND_FUA_BIT)
+#define NBD_FLAG_ROTATIONAL         (1 << NBD_FLAG_ROTATIONAL_BIT)
+#define NBD_FLAG_SEND_TRIM          (1 << NBD_FLAG_SEND_TRIM_BIT)
+#define NBD_FLAG_SEND_WRITE_ZEROES  (1 << NBD_FLAG_SEND_WRITE_ZEROES_BIT)
+#define NBD_FLAG_SEND_DF            (1 << NBD_FLAG_SEND_DF_BIT)
+#define NBD_FLAG_CAN_MULTI_CONN     (1 << NBD_FLAG_CAN_MULTI_CONN_BIT)
+#define NBD_FLAG_SEND_RESIZE        (1 << NBD_FLAG_SEND_RESIZE_BIT)
+#define NBD_FLAG_SEND_CACHE         (1 << NBD_FLAG_SEND_CACHE_BIT)
+#define NBD_FLAG_SEND_FAST_ZERO     (1 << NBD_FLAG_SEND_FAST_ZERO_BIT)
+#define NBD_FLAG_BLOCK_STAT_PAYLOAD (1 << NBD_FLAG_BLOCK_STAT_PAYLOAD_BIT)

 /* New-style handshake (global) flags, sent from server to client, and
    control what will happen during handshake phase. */
@@ -194,6 +232,7 @@ enum {
 #define NBD_OPT_STRUCTURED_REPLY  (8)
 #define NBD_OPT_LIST_META_CONTEXT (9)
 #define NBD_OPT_SET_META_CONTEXT  (10)
+#define NBD_OPT_EXTENDED_HEADERS  (11)

 /* Option reply types. */
 #define NBD_REP_ERR(value) ((UINT32_C(1) << 31) | (value))
@@ -211,6 +250,8 @@ enum {
 #define NBD_REP_ERR_UNKNOWN         NBD_REP_ERR(6)  /* Export unknown */
 #define NBD_REP_ERR_SHUTDOWN        NBD_REP_ERR(7)  /* Server shutting down */
 #define NBD_REP_ERR_BLOCK_SIZE_REQD NBD_REP_ERR(8)  /* Need INFO_BLOCK_SIZE */
+#define NBD_REP_ERR_TOO_BIG         NBD_REP_ERR(9)  /* Payload size overflow */
+#define NBD_REP_ERR_EXT_HEADER_REQD NBD_REP_ERR(10) /* Need extended headers */

 /* Info types, used during NBD_REP_INFO */
 #define NBD_INFO_EXPORT         0
@@ -219,12 +260,14 @@ enum {
 #define NBD_INFO_BLOCK_SIZE     3

 /* Request flags, sent from client to server during transmission phase */
-#define NBD_CMD_FLAG_FUA        (1 << 0) /* 'force unit access' during write */
-#define NBD_CMD_FLAG_NO_HOLE    (1 << 1) /* don't punch hole on zero run */
-#define NBD_CMD_FLAG_DF         (1 << 2) /* don't fragment structured read */
-#define NBD_CMD_FLAG_REQ_ONE    (1 << 3) /* only one extent in BLOCK_STATUS
-                                          * reply chunk */
-#define NBD_CMD_FLAG_FAST_ZERO  (1 << 4) /* fail if WRITE_ZEROES is not fast */
+#define NBD_CMD_FLAG_FUA         (1 << 0) /* 'force unit access' during write */
+#define NBD_CMD_FLAG_NO_HOLE     (1 << 1) /* don't punch hole on zero run */
+#define NBD_CMD_FLAG_DF          (1 << 2) /* don't fragment structured read */
+#define NBD_CMD_FLAG_REQ_ONE     (1 << 3) \
+    /* only one extent in BLOCK_STATUS reply chunk */
+#define NBD_CMD_FLAG_FAST_ZERO   (1 << 4) /* fail if WRITE_ZEROES is not fast */
+#define NBD_CMD_FLAG_PAYLOAD_LEN (1 << 5) \
+    /* length describes payload, not effect; only with ext header */

 /* Supported request types */
 enum {
@@ -250,22 +293,31 @@ enum {
  */
 #define NBD_MAX_STRING_SIZE 4096

-/* Two types of reply structures */
+/* Two types of request structures, a given client will only use 1 */
+#define NBD_REQUEST_MAGIC           0x25609513
+#define NBD_EXTENDED_REQUEST_MAGIC  0x21e41c71
+
+/*
+ * Three types of reply structures, but what a client expects depends
+ * on NBD_OPT_STRUCTURED_REPLY and NBD_OPT_EXTENDED_HEADERS.
+ */
 #define NBD_SIMPLE_REPLY_MAGIC      0x67446698
 #define NBD_STRUCTURED_REPLY_MAGIC  0x668e33ef
+#define NBD_EXTENDED_REPLY_MAGIC    0x6e8a278c

-/* Structured reply flags */
+/* Chunk reply flags (for structured and extended replies) */
 #define NBD_REPLY_FLAG_DONE          (1 << 0) /* This reply-chunk is last */

-/* Structured reply types */
+/* Chunk reply types */
 #define NBD_REPLY_ERR(value)         ((1 << 15) | (value))

-#define NBD_REPLY_TYPE_NONE          0
-#define NBD_REPLY_TYPE_OFFSET_DATA   1
-#define NBD_REPLY_TYPE_OFFSET_HOLE   2
-#define NBD_REPLY_TYPE_BLOCK_STATUS  5
-#define NBD_REPLY_TYPE_ERROR         NBD_REPLY_ERR(1)
-#define NBD_REPLY_TYPE_ERROR_OFFSET  NBD_REPLY_ERR(2)
+#define NBD_REPLY_TYPE_NONE              0
+#define NBD_REPLY_TYPE_OFFSET_DATA       1
+#define NBD_REPLY_TYPE_OFFSET_HOLE       2
+#define NBD_REPLY_TYPE_BLOCK_STATUS      5
+#define NBD_REPLY_TYPE_BLOCK_STATUS_EXT  6
+#define NBD_REPLY_TYPE_ERROR             NBD_REPLY_ERR(1)
+#define NBD_REPLY_TYPE_ERROR_OFFSET      NBD_REPLY_ERR(2)

 /* Extent flags for base:allocation in NBD_REPLY_TYPE_BLOCK_STATUS */
 #define NBD_STATE_HOLE (1 << 0)
diff --git a/nbd/nbd-internal.h b/nbd/nbd-internal.h
index df42fef7066..133b1d94b50 100644
--- a/nbd/nbd-internal.h
+++ b/nbd/nbd-internal.h
@@ -1,7 +1,7 @@
 /*
  * NBD Internal Declarations
  *
- * Copyright (C) 2016 Red Hat, Inc.
+ * Copyright Red Hat
  *
  * This work is licensed under the terms of the GNU GPL, version 2 or later.
  * See the COPYING file in the top-level directory.
@@ -44,7 +44,6 @@
 #define NBD_OLDSTYLE_NEGOTIATE_SIZE (8 + 8 + 8 + 4 + 124)

 #define NBD_INIT_MAGIC              0x4e42444d41474943LL /* ASCII "NBDMAGIC" */
-#define NBD_REQUEST_MAGIC           0x25609513
 #define NBD_OPTS_MAGIC              0x49484156454F5054LL /* ASCII "IHAVEOPT" */
 #define NBD_CLIENT_MAGIC            0x0000420281861253LL
 #define NBD_REP_MAGIC               0x0003e889045565a9LL
diff --git a/block/nbd.c b/block/nbd.c
index ca5991f868a..c7581794873 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -611,7 +611,7 @@ static int nbd_parse_offset_hole_payload(BDRVNBDState *s,
 static int nbd_parse_blockstatus_payload(BDRVNBDState *s,
                                          NBDStructuredReplyChunk *chunk,
                                          uint8_t *payload, uint64_t orig_length,
-                                         NBDExtent *extent, Error **errp)
+                                         NBDExtent32 *extent, Error **errp)
 {
     uint32_t context_id;

@@ -1119,7 +1119,7 @@ nbd_co_receive_cmdread_reply(BDRVNBDState *s, uint64_t cookie,

 static int coroutine_fn
 nbd_co_receive_blockstatus_reply(BDRVNBDState *s, uint64_t cookie,
-                                 uint64_t length, NBDExtent *extent,
+                                 uint64_t length, NBDExtent32 *extent,
                                  int *request_ret, Error **errp)
 {
     NBDReplyChunkIter iter;
@@ -1374,7 +1374,7 @@ static int coroutine_fn GRAPH_RDLOCK nbd_client_co_block_status(
         int64_t *pnum, int64_t *map, BlockDriverState **file)
 {
     int ret, request_ret;
-    NBDExtent extent = { 0 };
+    NBDExtent32 extent = { 0 };
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
     Error *local_err = NULL;

diff --git a/nbd/common.c b/nbd/common.c
index 989fbe54a19..3247c1d618a 100644
--- a/nbd/common.c
+++ b/nbd/common.c
@@ -79,6 +79,8 @@ const char *nbd_opt_lookup(uint32_t opt)
         return "list meta context";
     case NBD_OPT_SET_META_CONTEXT:
         return "set meta context";
+    case NBD_OPT_EXTENDED_HEADERS:
+        return "extended headers";
     default:
         return "<unknown>";
     }
@@ -112,6 +114,10 @@ const char *nbd_rep_lookup(uint32_t rep)
         return "server shutting down";
     case NBD_REP_ERR_BLOCK_SIZE_REQD:
         return "block size required";
+    case NBD_REP_ERR_TOO_BIG:
+        return "option payload too big";
+    case NBD_REP_ERR_EXT_HEADER_REQD:
+        return "extended headers required";
     default:
         return "<unknown>";
     }
@@ -170,7 +176,9 @@ const char *nbd_reply_type_lookup(uint16_t type)
     case NBD_REPLY_TYPE_OFFSET_HOLE:
         return "hole";
     case NBD_REPLY_TYPE_BLOCK_STATUS:
-        return "block status";
+        return "block status (32-bit)";
+    case NBD_REPLY_TYPE_BLOCK_STATUS_EXT:
+        return "block status (64-bit)";
     case NBD_REPLY_TYPE_ERROR:
         return "generic error";
     case NBD_REPLY_TYPE_ERROR_OFFSET:
@@ -261,6 +269,8 @@ const char *nbd_mode_lookup(NBDMode mode)
         return "simple headers";
     case NBD_MODE_STRUCTURED:
         return "structured replies";
+    case NBD_MODE_EXTENDED:
+        return "extended headers";
     default:
         return "<unknown>";
     }
diff --git a/nbd/server.c b/nbd/server.c
index f4a7a169ae3..97403db2e07 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -2084,7 +2084,7 @@ static int coroutine_fn nbd_co_send_sparse_read(NBDClient *client,
 }

 typedef struct NBDExtentArray {
-    NBDExtent *extents;
+    NBDExtent32 *extents;
     unsigned int nb_alloc;
     unsigned int count;
     uint64_t total_length;
@@ -2097,7 +2097,7 @@ static NBDExtentArray *nbd_extent_array_new(unsigned int nb_alloc)
     NBDExtentArray *ea = g_new0(NBDExtentArray, 1);

     ea->nb_alloc = nb_alloc;
-    ea->extents = g_new(NBDExtent, nb_alloc);
+    ea->extents = g_new(NBDExtent32, nb_alloc);
     ea->can_add = true;

     return ea;
@@ -2160,7 +2160,7 @@ static int nbd_extent_array_add(NBDExtentArray *ea,
     }

     ea->total_length += length;
-    ea->extents[ea->count] = (NBDExtent) {.length = length, .flags = flags};
+    ea->extents[ea->count] = (NBDExtent32) {.length = length, .flags = flags};
     ea->count++;

     return 0;
-- 
2.41.0




* [PATCH v6 04/17] nbd: Prepare for 64-bit request effect lengths
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (2 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 03/17] nbd: Add types for extended headers Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-09-04 16:15   ` Vladimir Sementsov-Ogievskiy
  2023-08-29 17:58 ` [PATCH v6 05/17] nbd/server: Refactor handling of command sanity checks Eric Blake
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov, Kevin Wolf, Hanna Reitz

Widen the length field of NBDRequest to 64 bits, although we can
assert that all current uses are still under 32 bits: either because
of NBD_MAX_BUFFER_SIZE, which is even smaller (and where size_t can
still be appropriate, even on 32-bit platforms), or because nothing
ever puts us into NBD_MODE_EXTENDED yet (and while future patches will
allow larger transactions, the lengths in play here are still capped
at 32 bits).  There are no semantic changes, other than a typo fix in a
couple of error messages.
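
One subtle point when widening, and the sign-extension hazard that the
v6 note below refers to (this sketch assumes ldl_be_p returns a signed
int, as qemu's bswap helpers do):

    uint64_t len;

    len = ldl_be_p(buf + 24);            /* risky: a 32-bit value with bit 31
                                          * set sign-extends through int */
    len = (uint32_t)ldl_be_p(buf + 24);  /* safe: zero-extends 32 to 64 bits */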

Signed-off-by: Eric Blake <eblake@redhat.com>
---

v6: fix sign extension bug

v5: tweak commit message, adjust a few more spots [Vladimir]

v4: split off enum changes to earlier patches [Vladimir]
---
 include/block/nbd.h |  4 ++--
 block/nbd.c         | 25 +++++++++++++++++++------
 nbd/client.c        |  1 +
 nbd/server.c        | 23 +++++++++++++++--------
 block/trace-events  |  2 +-
 nbd/trace-events    | 14 +++++++-------
 6 files changed, 45 insertions(+), 24 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index b2fb8ab44d5..ec4e8eda6bd 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -71,8 +71,8 @@ typedef enum NBDMode {
  */
 typedef struct NBDRequest {
     uint64_t cookie;
-    uint64_t from;
-    uint32_t len;
+    uint64_t from;  /* Offset touched by the command */
+    uint64_t len;   /* Effect length; 32 bit limit without extended headers */
     uint16_t flags; /* NBD_CMD_FLAG_* */
     uint16_t type;  /* NBD_CMD_* */
     NBDMode mode;   /* Determines which network representation to use */
diff --git a/block/nbd.c b/block/nbd.c
index c7581794873..57123c17f94 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -1306,10 +1306,11 @@ nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int64_t bytes,
     NBDRequest request = {
         .type = NBD_CMD_WRITE_ZEROES,
         .from = offset,
-        .len = bytes,  /* .len is uint32_t actually */
+        .len = bytes,
     };

-    assert(bytes <= UINT32_MAX); /* rely on max_pwrite_zeroes */
+    /* rely on max_pwrite_zeroes */
+    assert(bytes <= UINT32_MAX || s->info.mode >= NBD_MODE_EXTENDED);

     assert(!(s->info.flags & NBD_FLAG_READ_ONLY));
     if (!(s->info.flags & NBD_FLAG_SEND_WRITE_ZEROES)) {
@@ -1356,10 +1357,11 @@ nbd_client_co_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes)
     NBDRequest request = {
         .type = NBD_CMD_TRIM,
         .from = offset,
-        .len = bytes, /* len is uint32_t */
+        .len = bytes,
     };

-    assert(bytes <= UINT32_MAX); /* rely on max_pdiscard */
+    /* rely on max_pdiscard */
+    assert(bytes <= UINT32_MAX || s->info.mode >= NBD_MODE_EXTENDED);

     assert(!(s->info.flags & NBD_FLAG_READ_ONLY));
     if (!(s->info.flags & NBD_FLAG_SEND_TRIM) || !bytes) {
@@ -1381,8 +1383,7 @@ static int coroutine_fn GRAPH_RDLOCK nbd_client_co_block_status(
     NBDRequest request = {
         .type = NBD_CMD_BLOCK_STATUS,
         .from = offset,
-        .len = MIN(QEMU_ALIGN_DOWN(INT_MAX, bs->bl.request_alignment),
-                   MIN(bytes, s->info.size - offset)),
+        .len = MIN(bytes, s->info.size - offset),
         .flags = NBD_CMD_FLAG_REQ_ONE,
     };

@@ -1392,6 +1393,10 @@ static int coroutine_fn GRAPH_RDLOCK nbd_client_co_block_status(
         *file = bs;
         return BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID;
     }
+    if (s->info.mode < NBD_MODE_EXTENDED) {
+        request.len = MIN(QEMU_ALIGN_DOWN(INT_MAX, bs->bl.request_alignment),
+                          request.len);
+    }

     /*
      * Work around the fact that the block layer doesn't do
@@ -1956,6 +1961,14 @@ static void nbd_refresh_limits(BlockDriverState *bs, Error **errp)
     bs->bl.max_pwrite_zeroes = max;
     bs->bl.max_transfer = max;

+    /*
+     * Assume that if the server supports extended headers, it also
+     * supports unlimited size zero and trim commands.
+     */
+    if (s->info.mode >= NBD_MODE_EXTENDED) {
+        bs->bl.max_pdiscard = bs->bl.max_pwrite_zeroes = 0;
+    }
+
     if (s->info.opt_block &&
         s->info.opt_block > bs->bl.opt_transfer) {
         bs->bl.opt_transfer = s->info.opt_block;
diff --git a/nbd/client.c b/nbd/client.c
index 40a1eb72346..1495a9b0ab1 100644
--- a/nbd/client.c
+++ b/nbd/client.c
@@ -1355,6 +1355,7 @@ int nbd_send_request(QIOChannel *ioc, NBDRequest *request)
     uint8_t buf[NBD_REQUEST_SIZE];

     assert(request->mode <= NBD_MODE_STRUCTURED); /* TODO handle extended */
+    assert(request->len <= UINT32_MAX);
     trace_nbd_send_request(request->from, request->len, request->cookie,
                            request->flags, request->type,
                            nbd_cmd_lookup(request->type));
diff --git a/nbd/server.c b/nbd/server.c
index 97403db2e07..e38a8f700a9 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -1165,7 +1165,7 @@ static int nbd_negotiate_options(NBDClient *client, Error **errp)
         client->optlen = length;

         if (length > NBD_MAX_BUFFER_SIZE) {
-            error_setg(errp, "len (%" PRIu32" ) is larger than max len (%u)",
+            error_setg(errp, "len (%" PRIu32 ") is larger than max len (%u)",
                        length, NBD_MAX_BUFFER_SIZE);
             return -EINVAL;
         }
@@ -1441,7 +1441,7 @@ static int coroutine_fn nbd_receive_request(NBDClient *client, NBDRequest *reque
     request->type   = lduw_be_p(buf + 6);
     request->cookie = ldq_be_p(buf + 8);
     request->from   = ldq_be_p(buf + 16);
-    request->len    = ldl_be_p(buf + 24);
+    request->len    = (uint32_t)ldl_be_p(buf + 24); /* widen 32 to 64 bits */

     trace_nbd_receive_request(magic, request->flags, request->type,
                               request->from, request->len);
@@ -1899,7 +1899,7 @@ static int coroutine_fn nbd_co_send_simple_reply(NBDClient *client,
                                                  NBDRequest *request,
                                                  uint32_t error,
                                                  void *data,
-                                                 size_t len,
+                                                 uint64_t len,
                                                  Error **errp)
 {
     NBDSimpleReply reply;
@@ -1910,6 +1910,7 @@ static int coroutine_fn nbd_co_send_simple_reply(NBDClient *client,
     };

     assert(!len || !nbd_err);
+    assert(len <= NBD_MAX_BUFFER_SIZE);
     assert(client->mode < NBD_MODE_STRUCTURED ||
            (client->mode == NBD_MODE_STRUCTURED &&
             request->type != NBD_CMD_READ));
@@ -1968,7 +1969,7 @@ static int coroutine_fn nbd_co_send_chunk_read(NBDClient *client,
                                                NBDRequest *request,
                                                uint64_t offset,
                                                void *data,
-                                               size_t size,
+                                               uint64_t size,
                                                bool final,
                                                Error **errp)
 {
@@ -1980,7 +1981,7 @@ static int coroutine_fn nbd_co_send_chunk_read(NBDClient *client,
         {.iov_base = data, .iov_len = size}
     };

-    assert(size);
+    assert(size && size <= NBD_MAX_BUFFER_SIZE);
     trace_nbd_co_send_chunk_read(request->cookie, offset, data, size);
     set_be_chunk(client, iov, 3, final ? NBD_REPLY_FLAG_DONE : 0,
                  NBD_REPLY_TYPE_OFFSET_DATA, request);
@@ -2023,13 +2024,14 @@ static int coroutine_fn nbd_co_send_sparse_read(NBDClient *client,
                                                 NBDRequest *request,
                                                 uint64_t offset,
                                                 uint8_t *data,
-                                                size_t size,
+                                                uint64_t size,
                                                 Error **errp)
 {
     int ret = 0;
     NBDExport *exp = client->exp;
     size_t progress = 0;

+    assert(size <= NBD_MAX_BUFFER_SIZE);
     while (progress < size) {
         int64_t pnum;
         int status = blk_co_block_status_above(exp->common.blk, NULL,
@@ -2359,7 +2361,7 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req, NBDRequest *
         request->type == NBD_CMD_CACHE)
     {
         if (request->len > NBD_MAX_BUFFER_SIZE) {
-            error_setg(errp, "len (%" PRIu32" ) is larger than max len (%u)",
+            error_setg(errp, "len (%" PRIu64 ") is larger than max len (%u)",
                        request->len, NBD_MAX_BUFFER_SIZE);
             return -EINVAL;
         }
@@ -2375,6 +2377,7 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req, NBDRequest *
     }

     if (request->type == NBD_CMD_WRITE) {
+        assert(request->len <= NBD_MAX_BUFFER_SIZE);
         if (nbd_read(client->ioc, req->data, request->len, "CMD_WRITE data",
                      errp) < 0)
         {
@@ -2396,7 +2399,7 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req, NBDRequest *
     }
     if (request->from > client->exp->size ||
         request->len > client->exp->size - request->from) {
-        error_setg(errp, "operation past EOF; From: %" PRIu64 ", Len: %" PRIu32
+        error_setg(errp, "operation past EOF; From: %" PRIu64 ", Len: %" PRIu64
                    ", Size: %" PRIu64, request->from, request->len,
                    client->exp->size);
         return (request->type == NBD_CMD_WRITE ||
@@ -2458,6 +2461,7 @@ static coroutine_fn int nbd_do_cmd_read(NBDClient *client, NBDRequest *request,
     NBDExport *exp = client->exp;

     assert(request->type == NBD_CMD_READ);
+    assert(request->len <= NBD_MAX_BUFFER_SIZE);

     /* XXX: NBD Protocol only documents use of FUA with WRITE */
     if (request->flags & NBD_CMD_FLAG_FUA) {
@@ -2508,6 +2512,7 @@ static coroutine_fn int nbd_do_cmd_cache(NBDClient *client, NBDRequest *request,
     NBDExport *exp = client->exp;

     assert(request->type == NBD_CMD_CACHE);
+    assert(request->len <= NBD_MAX_BUFFER_SIZE);

     ret = blk_co_preadv(exp->common.blk, request->from, request->len,
                         NULL, BDRV_REQ_COPY_ON_READ | BDRV_REQ_PREFETCH);
@@ -2541,6 +2546,7 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
         if (request->flags & NBD_CMD_FLAG_FUA) {
             flags |= BDRV_REQ_FUA;
         }
+        assert(request->len <= NBD_MAX_BUFFER_SIZE);
         ret = blk_co_pwrite(exp->common.blk, request->from, request->len, data,
                             flags);
         return nbd_send_generic_reply(client, request, ret,
@@ -2584,6 +2590,7 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
             return nbd_send_generic_reply(client, request, -EINVAL,
                                           "need non-zero length", errp);
         }
+        assert(request->len <= UINT32_MAX);
         if (client->export_meta.count) {
             bool dont_fragment = request->flags & NBD_CMD_FLAG_REQ_ONE;
             int contexts_remaining = client->export_meta.count;
diff --git a/block/trace-events b/block/trace-events
index 6f121b76365..925aa554bbf 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -167,7 +167,7 @@ iscsi_xcopy(void *src_lun, uint64_t src_off, void *dst_lun, uint64_t dst_off, ui
 nbd_parse_blockstatus_compliance(const char *err) "ignoring extra data from non-compliant server: %s"
 nbd_structured_read_compliance(const char *type) "server sent non-compliant unaligned read %s chunk"
 nbd_read_reply_entry_fail(int ret, const char *err) "ret = %d, err: %s"
-nbd_co_request_fail(uint64_t from, uint32_t len, uint64_t handle, uint16_t flags, uint16_t type, const char *name, int ret, const char *err) "Request failed { .from = %" PRIu64", .len = %" PRIu32 ", .handle = %" PRIu64 ", .flags = 0x%" PRIx16 ", .type = %" PRIu16 " (%s) } ret = %d, err: %s"
+nbd_co_request_fail(uint64_t from, uint64_t len, uint64_t handle, uint16_t flags, uint16_t type, const char *name, int ret, const char *err) "Request failed { .from = %" PRIu64", .len = %" PRIu64 ", .handle = %" PRIu64 ", .flags = 0x%" PRIx16 ", .type = %" PRIu16 " (%s) } ret = %d, err: %s"
 nbd_client_handshake(const char *export_name) "export '%s'"
 nbd_client_handshake_success(const char *export_name) "export '%s'"
 nbd_reconnect_attempt(unsigned in_flight) "in_flight %u"
diff --git a/nbd/trace-events b/nbd/trace-events
index f19a4d0db39..f9dccfcfb44 100644
--- a/nbd/trace-events
+++ b/nbd/trace-events
@@ -31,7 +31,7 @@ nbd_client_loop(void) "Doing NBD loop"
 nbd_client_loop_ret(int ret, const char *error) "NBD loop returned %d: %s"
 nbd_client_clear_queue(void) "Clearing NBD queue"
 nbd_client_clear_socket(void) "Clearing NBD socket"
-nbd_send_request(uint64_t from, uint32_t len, uint64_t cookie, uint16_t flags, uint16_t type, const char *name) "Sending request to server: { .from = %" PRIu64", .len = %" PRIu32 ", .cookie = %" PRIu64 ", .flags = 0x%" PRIx16 ", .type = %" PRIu16 " (%s) }"
+nbd_send_request(uint64_t from, uint64_t len, uint64_t cookie, uint16_t flags, uint16_t type, const char *name) "Sending request to server: { .from = %" PRIu64", .len = %" PRIu64 ", .cookie = %" PRIu64 ", .flags = 0x%" PRIx16 ", .type = %" PRIu16 " (%s) }"
 nbd_receive_simple_reply(int32_t error, const char *errname, uint64_t cookie) "Got simple reply: { .error = %" PRId32 " (%s), cookie = %" PRIu64" }"
 nbd_receive_structured_reply_chunk(uint16_t flags, uint16_t type, const char *name, uint64_t cookie, uint32_t length) "Got structured reply chunk: { flags = 0x%" PRIx16 ", type = %d (%s), cookie = %" PRIu64 ", length = %" PRIu32 " }"

@@ -60,18 +60,18 @@ nbd_negotiate_options_check_option(uint32_t option, const char *name) "Checking
 nbd_negotiate_begin(void) "Beginning negotiation"
 nbd_negotiate_new_style_size_flags(uint64_t size, unsigned flags) "advertising size %" PRIu64 " and flags 0x%x"
 nbd_negotiate_success(void) "Negotiation succeeded"
-nbd_receive_request(uint32_t magic, uint16_t flags, uint16_t type, uint64_t from, uint32_t len) "Got request: { magic = 0x%" PRIx32 ", .flags = 0x%" PRIx16 ", .type = 0x%" PRIx16 ", from = %" PRIu64 ", len = %" PRIu32 " }"
+nbd_receive_request(uint32_t magic, uint16_t flags, uint16_t type, uint64_t from, uint64_t len) "Got request: { magic = 0x%" PRIx32 ", .flags = 0x%" PRIx16 ", .type = 0x%" PRIx16 ", from = %" PRIu64 ", len = %" PRIu64 " }"
 nbd_blk_aio_attached(const char *name, void *ctx) "Export %s: Attaching clients to AIO context %p"
 nbd_blk_aio_detach(const char *name, void *ctx) "Export %s: Detaching clients from AIO context %p"
-nbd_co_send_simple_reply(uint64_t cookie, uint32_t error, const char *errname, int len) "Send simple reply: cookie = %" PRIu64 ", error = %" PRIu32 " (%s), len = %d"
+nbd_co_send_simple_reply(uint64_t cookie, uint32_t error, const char *errname, uint64_t len) "Send simple reply: cookie = %" PRIu64 ", error = %" PRIu32 " (%s), len = %" PRIu64
 nbd_co_send_chunk_done(uint64_t cookie) "Send structured reply done: cookie = %" PRIu64
-nbd_co_send_chunk_read(uint64_t cookie, uint64_t offset, void *data, size_t size) "Send structured read data reply: cookie = %" PRIu64 ", offset = %" PRIu64 ", data = %p, len = %zu"
-nbd_co_send_chunk_read_hole(uint64_t cookie, uint64_t offset, size_t size) "Send structured read hole reply: cookie = %" PRIu64 ", offset = %" PRIu64 ", len = %zu"
+nbd_co_send_chunk_read(uint64_t cookie, uint64_t offset, void *data, uint64_t size) "Send structured read data reply: cookie = %" PRIu64 ", offset = %" PRIu64 ", data = %p, len = %" PRIu64
+nbd_co_send_chunk_read_hole(uint64_t cookie, uint64_t offset, uint64_t size) "Send structured read hole reply: cookie = %" PRIu64 ", offset = %" PRIu64 ", len = %" PRIu64
 nbd_co_send_extents(uint64_t cookie, unsigned int extents, uint32_t id, uint64_t length, int last) "Send block status reply: cookie = %" PRIu64 ", extents = %u, context = %d (extents cover %" PRIu64 " bytes, last chunk = %d)"
 nbd_co_send_chunk_error(uint64_t cookie, int err, const char *errname, const char *msg) "Send structured error reply: cookie = %" PRIu64 ", error = %d (%s), msg = '%s'"
 nbd_co_receive_request_decode_type(uint64_t cookie, uint16_t type, const char *name) "Decoding type: cookie = %" PRIu64 ", type = %" PRIu16 " (%s)"
-nbd_co_receive_request_payload_received(uint64_t cookie, uint32_t len) "Payload received: cookie = %" PRIu64 ", len = %" PRIu32
-nbd_co_receive_align_compliance(const char *op, uint64_t from, uint32_t len, uint32_t align) "client sent non-compliant unaligned %s request: from=0x%" PRIx64 ", len=0x%" PRIx32 ", align=0x%" PRIx32
+nbd_co_receive_request_payload_received(uint64_t cookie, uint64_t len) "Payload received: cookie = %" PRIu64 ", len = %" PRIu64
+nbd_co_receive_align_compliance(const char *op, uint64_t from, uint64_t len, uint32_t align) "client sent non-compliant unaligned %s request: from=0x%" PRIx64 ", len=0x%" PRIx64 ", align=0x%" PRIx32
 nbd_trip(void) "Reading request"

 # client-connection.c
-- 
2.41.0




* [PATCH v6 05/17] nbd/server: Refactor handling of command sanity checks
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (3 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 04/17] nbd: Prepare for 64-bit request effect lengths Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-09-04 16:53   ` Vladimir Sementsov-Ogievskiy
  2023-08-29 17:58 ` [PATCH v6 06/17] nbd/server: Support a request payload Eric Blake
                   ` (11 subsequent siblings)
  16 siblings, 1 reply; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov

Upcoming additions to support NBD 64-bit effect lengths will add a new
command flag NBD_CMD_FLAG_PAYLOAD_LEN that needs to be considered in
our sanity checks of the client's messages (that is, more than just
CMD_WRITE can carry a client payload when extended headers are in
effect).  But before we can start to support that, it is easier to
first refactor the existing set of open-coded if statements over
request->type into a single switch statement over all command types
that sets witness booleans, followed by straight-line processing based
on those witnesses.  No semantic change is intended.
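
As a rough sketch of the shape the refactor takes (illustrative only;
the real hunks follow below):

    bool check_length = false, check_rofs = false, allocate_buffer = false;

    switch (request->type) {
    case NBD_CMD_WRITE:
        check_length = allocate_buffer = check_rofs = true;
        break;
    case NBD_CMD_TRIM:
        check_rofs = true;
        break;
    /* ... one case per command, setting only witness booleans ... */
    }

    /* Straight-line processing driven by the witnesses. */
    if (check_length && request->len > NBD_MAX_BUFFER_SIZE) {
        return -EINVAL;
    }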

Signed-off-by: Eric Blake <eblake@redhat.com>
---

v5: new patch split out from v4 13/24 [Vladimir]
---
 nbd/server.c | 118 ++++++++++++++++++++++++++++++++-------------------
 1 file changed, 74 insertions(+), 44 deletions(-)

diff --git a/nbd/server.c b/nbd/server.c
index e38a8f700a9..dd3ab59224c 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -2329,11 +2329,16 @@ static int coroutine_fn nbd_co_send_bitmap(NBDClient *client,
  * to the client (although the caller may still need to disconnect after
  * reporting the error).
  */
-static int coroutine_fn nbd_co_receive_request(NBDRequestData *req, NBDRequest *request,
+static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
+                                               NBDRequest *request,
                                                Error **errp)
 {
     NBDClient *client = req->client;
-    int valid_flags;
+    bool check_length = false;
+    bool check_rofs = false;
+    bool allocate_buffer = false;
+    unsigned payload_len = 0;
+    int valid_flags = NBD_CMD_FLAG_FUA;
     int ret;

     g_assert(qemu_in_coroutine());
@@ -2345,55 +2350,88 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req, NBDRequest *

     trace_nbd_co_receive_request_decode_type(request->cookie, request->type,
                                              nbd_cmd_lookup(request->type));
-
-    if (request->type != NBD_CMD_WRITE) {
-        /* No payload, we are ready to read the next request.  */
-        req->complete = true;
-    }
-
-    if (request->type == NBD_CMD_DISC) {
+    switch (request->type) {
+    case NBD_CMD_DISC:
         /* Special case: we're going to disconnect without a reply,
          * whether or not flags, from, or len are bogus */
+        req->complete = true;
         return -EIO;
-    }

-    if (request->type == NBD_CMD_READ || request->type == NBD_CMD_WRITE ||
-        request->type == NBD_CMD_CACHE)
-    {
-        if (request->len > NBD_MAX_BUFFER_SIZE) {
-            error_setg(errp, "len (%" PRIu64 ") is larger than max len (%u)",
-                       request->len, NBD_MAX_BUFFER_SIZE);
-            return -EINVAL;
+    case NBD_CMD_READ:
+        if (client->mode >= NBD_MODE_STRUCTURED) {
+            valid_flags |= NBD_CMD_FLAG_DF;
         }
+        check_length = true;
+        allocate_buffer = true;
+        break;

-        if (request->type != NBD_CMD_CACHE) {
-            req->data = blk_try_blockalign(client->exp->common.blk,
-                                           request->len);
-            if (req->data == NULL) {
-                error_setg(errp, "No memory");
-                return -ENOMEM;
-            }
-        }
+    case NBD_CMD_WRITE:
+        payload_len = request->len;
+        check_length = true;
+        allocate_buffer = true;
+        check_rofs = true;
+        break;
+
+    case NBD_CMD_FLUSH:
+        break;
+
+    case NBD_CMD_TRIM:
+        check_rofs = true;
+        break;
+
+    case NBD_CMD_CACHE:
+        check_length = true;
+        break;
+
+    case NBD_CMD_WRITE_ZEROES:
+        valid_flags |= NBD_CMD_FLAG_NO_HOLE | NBD_CMD_FLAG_FAST_ZERO;
+        check_rofs = true;
+        break;
+
+    case NBD_CMD_BLOCK_STATUS:
+        valid_flags |= NBD_CMD_FLAG_REQ_ONE;
+        break;
+
+    default:
+        /* Unrecognized, will fail later */
+        ;
     }

-    if (request->type == NBD_CMD_WRITE) {
-        assert(request->len <= NBD_MAX_BUFFER_SIZE);
-        if (nbd_read(client->ioc, req->data, request->len, "CMD_WRITE data",
-                     errp) < 0)
-        {
+    /* Payload and buffer handling. */
+    if (!payload_len) {
+        req->complete = true;
+    }
+    if (check_length && request->len > NBD_MAX_BUFFER_SIZE) {
+        /* READ, WRITE, CACHE */
+        error_setg(errp, "len (%" PRIu64 ") is larger than max len (%u)",
+                   request->len, NBD_MAX_BUFFER_SIZE);
+        return -EINVAL;
+    }
+    if (allocate_buffer) {
+        /* READ, WRITE */
+        req->data = blk_try_blockalign(client->exp->common.blk,
+                                       request->len);
+        if (req->data == NULL) {
+            error_setg(errp, "No memory");
+            return -ENOMEM;
+        }
+    }
+    if (payload_len) {
+        /* WRITE */
+        assert(req->data);
+        ret = nbd_read(client->ioc, req->data, payload_len,
+                       "CMD_WRITE data", errp);
+        if (ret < 0) {
             return -EIO;
         }
         req->complete = true;
-
         trace_nbd_co_receive_request_payload_received(request->cookie,
-                                                      request->len);
+                                                      payload_len);
     }

     /* Sanity checks. */
-    if (client->exp->nbdflags & NBD_FLAG_READ_ONLY &&
-        (request->type == NBD_CMD_WRITE ||
-         request->type == NBD_CMD_WRITE_ZEROES ||
-         request->type == NBD_CMD_TRIM)) {
+    if (client->exp->nbdflags & NBD_FLAG_READ_ONLY && check_rofs) {
+        /* WRITE, TRIM, WRITE_ZEROES */
         error_setg(errp, "Export is read-only");
         return -EROFS;
     }
@@ -2416,14 +2454,6 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req, NBDRequest *
                                               request->len,
                                               client->check_align);
     }
-    valid_flags = NBD_CMD_FLAG_FUA;
-    if (request->type == NBD_CMD_READ && client->mode >= NBD_MODE_STRUCTURED) {
-        valid_flags |= NBD_CMD_FLAG_DF;
-    } else if (request->type == NBD_CMD_WRITE_ZEROES) {
-        valid_flags |= NBD_CMD_FLAG_NO_HOLE | NBD_CMD_FLAG_FAST_ZERO;
-    } else if (request->type == NBD_CMD_BLOCK_STATUS) {
-        valid_flags |= NBD_CMD_FLAG_REQ_ONE;
-    }
     if (request->flags & ~valid_flags) {
         error_setg(errp, "unsupported flags for command %s (got 0x%x)",
                    nbd_cmd_lookup(request->type), request->flags);
-- 
2.41.0




* [PATCH v6 06/17] nbd/server: Support a request payload
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (4 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 05/17] nbd/server: Refactor handling of command sanity checks Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-09-05 14:36   ` Vladimir Sementsov-Ogievskiy
  2023-08-29 17:58 ` [PATCH v6 07/17] nbd/server: Prepare to receive extended header requests Eric Blake
                   ` (10 subsequent siblings)
  16 siblings, 1 reply; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov

Upcoming additions to support NBD 64-bit effect lengths make it
possible to distinguish between payload length (capped at 32M) and
effect length (64 bits, although we generally assume 63 bits because
of off_t limitations).  Without that extension, only the NBD_CMD_WRITE
request has a payload; but with the extension, it makes sense to allow
at least NBD_CMD_BLOCK_STATUS to have both a payload and an effect
length in a future patch (where the payload is a limited-size struct
that in turn gives the real effect length as well as a subset of known
ids for which status is requested).  Other future NBD commands may
also have a request payload, so the 64-bit extension introduces a new
NBD_CMD_FLAG_PAYLOAD_LEN that indicates whether the header length is a
payload length or an effect length, rather than hard-coding the
decision based on the command.  Although a client should never send a
command with a payload unless the negotiation phase proved that the
extension is available, we are now able to gracefully fail unexpected
client payloads while keeping the connection alive.  Note that we do
not yet support the payload version of BLOCK_STATUS.
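
As a sketch of the core decision being added (illustrative only; the
real change is in the diff below), the header's length field is a
payload length only when the new flag is present:

    bool extended_with_payload = client->mode >= NBD_MODE_EXTENDED &&
        (request->flags & NBD_CMD_FLAG_PAYLOAD_LEN);
    /* WRITE is expected to set the flag; for any other command we
     * drain and ignore the payload instead of killing the connection. */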

Signed-off-by: Eric Blake <eblake@redhat.com>
---

v5: retitled from v4 13/24, rewrite on top of previous patch's switch
statement [Vladimir]

v4: less indentation on several 'if's [Vladimir]
---
 nbd/server.c     | 33 ++++++++++++++++++++++++++++-----
 nbd/trace-events |  1 +
 2 files changed, 29 insertions(+), 5 deletions(-)

diff --git a/nbd/server.c b/nbd/server.c
index dd3ab59224c..adcfcdeacb7 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -2334,7 +2334,8 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
                                                Error **errp)
 {
     NBDClient *client = req->client;
-    bool check_length = false;
+    bool extended_with_payload;
+    bool check_length;
     bool check_rofs = false;
     bool allocate_buffer = false;
     unsigned payload_len = 0;
@@ -2350,6 +2351,9 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,

     trace_nbd_co_receive_request_decode_type(request->cookie, request->type,
                                              nbd_cmd_lookup(request->type));
+    check_length = extended_with_payload = client->mode >= NBD_MODE_EXTENDED &&
+        request->flags & NBD_CMD_FLAG_PAYLOAD_LEN;
+
     switch (request->type) {
     case NBD_CMD_DISC:
         /* Special case: we're going to disconnect without a reply,
@@ -2366,6 +2370,14 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
         break;

     case NBD_CMD_WRITE:
+        if (client->mode >= NBD_MODE_EXTENDED) {
+            if (!extended_with_payload) {
+                /* The client is noncompliant. Trace it, but proceed. */
+                trace_nbd_co_receive_ext_payload_compliance(request->from,
+                                                            request->len);
+            }
+            valid_flags |= NBD_CMD_FLAG_PAYLOAD_LEN;
+        }
         payload_len = request->len;
         check_length = true;
         allocate_buffer = true;
@@ -2407,6 +2419,15 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
                    request->len, NBD_MAX_BUFFER_SIZE);
         return -EINVAL;
     }
+    if (extended_with_payload && !allocate_buffer) {
+        /*
+         * For now, we don't support payloads on other commands; but
+         * we can keep the connection alive by ignoring the payload.
+         */
+        assert(request->type != NBD_CMD_WRITE);
+        payload_len = request->len;
+        request->len = 0;
+    }
     if (allocate_buffer) {
         /* READ, WRITE */
         req->data = blk_try_blockalign(client->exp->common.blk,
@@ -2417,10 +2438,12 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
         }
     }
     if (payload_len) {
-        /* WRITE */
-        assert(req->data);
-        ret = nbd_read(client->ioc, req->data, payload_len,
-                       "CMD_WRITE data", errp);
+        if (req->data) {
+            ret = nbd_read(client->ioc, req->data, payload_len,
+                           "CMD_WRITE data", errp);
+        } else {
+            ret = nbd_drop(client->ioc, payload_len, errp);
+        }
         if (ret < 0) {
             return -EIO;
         }
diff --git a/nbd/trace-events b/nbd/trace-events
index f9dccfcfb44..c1a3227613f 100644
--- a/nbd/trace-events
+++ b/nbd/trace-events
@@ -71,6 +71,7 @@ nbd_co_send_extents(uint64_t cookie, unsigned int extents, uint32_t id, uint64_t
 nbd_co_send_chunk_error(uint64_t cookie, int err, const char *errname, const char *msg) "Send structured error reply: cookie = %" PRIu64 ", error = %d (%s), msg = '%s'"
 nbd_co_receive_request_decode_type(uint64_t cookie, uint16_t type, const char *name) "Decoding type: cookie = %" PRIu64 ", type = %" PRIu16 " (%s)"
 nbd_co_receive_request_payload_received(uint64_t cookie, uint64_t len) "Payload received: cookie = %" PRIu64 ", len = %" PRIu64
+nbd_co_receive_ext_payload_compliance(uint64_t from, uint64_t len) "client sent non-compliant write without payload flag: from=0x%" PRIx64 ", len=0x%" PRIx64
 nbd_co_receive_align_compliance(const char *op, uint64_t from, uint64_t len, uint32_t align) "client sent non-compliant unaligned %s request: from=0x%" PRIx64 ", len=0x%" PRIx64 ", align=0x%" PRIx32
 nbd_trip(void) "Reading request"

-- 
2.41.0




* [PATCH v6 07/17] nbd/server: Prepare to receive extended header requests
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (5 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 06/17] nbd/server: Support a request payload Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-08-29 17:58 ` [PATCH v6 08/17] nbd/server: Prepare to send extended header replies Eric Blake
                   ` (9 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov

Although extended mode is not yet enabled, once we do turn it on, we
need to accept extended requests for all messages.  Previous patches
have already taken care of supporting 64-bit lengths; now we just need
to read them off the wire.
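
For reference, the extended request header is 32 bytes on the wire
(all fields big-endian); viewed as a struct, purely for illustration
(the code below keeps parsing from a byte buffer), it is:

    struct {
        uint32_t magic;   /* NBD_EXTENDED_REQUEST_MAGIC */
        uint16_t flags;   /* NBD_CMD_FLAG_FUA, NBD_CMD_FLAG_PAYLOAD_LEN, ... */
        uint16_t type;    /* NBD_CMD_READ, ... */
        uint64_t cookie;
        uint64_t from;
        uint64_t len;     /* 64 bits, vs. 32 bits in a compact request */
    };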

Note that this implementation will block indefinitely on a buggy
client that sends a non-extended payload (that is, we try to read a
full packet before we ever check the magic number, but a client that
mistakenly sends a simple request after negotiating extended headers
doesn't send us enough bytes).  That is no different from any other
client that stops talking to us partway through a packet, and is thus
not worth coding around.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---

v6: fix sign extension bug

v5: no change

v4: new patch, split out from v3 9/14
---
 nbd/nbd-internal.h |  5 ++++-
 nbd/server.c       | 43 ++++++++++++++++++++++++++++++-------------
 2 files changed, 34 insertions(+), 14 deletions(-)

diff --git a/nbd/nbd-internal.h b/nbd/nbd-internal.h
index 133b1d94b50..dfa02f77ee4 100644
--- a/nbd/nbd-internal.h
+++ b/nbd/nbd-internal.h
@@ -34,8 +34,11 @@
  * https://github.com/yoe/nbd/blob/master/doc/proto.md
  */

-/* Size of all NBD_OPT_*, without payload */
+/* Size of all compact NBD_CMD_*, without payload */
 #define NBD_REQUEST_SIZE            (4 + 2 + 2 + 8 + 8 + 4)
+/* Size of all extended NBD_CMD_*, without payload */
+#define NBD_EXTENDED_REQUEST_SIZE   (4 + 2 + 2 + 8 + 8 + 8)
+
 /* Size of all NBD_REP_* sent in answer to most NBD_OPT_*, without payload */
 #define NBD_REPLY_SIZE              (4 + 4 + 8)
 /* Size of reply to NBD_OPT_EXPORT_NAME */
diff --git a/nbd/server.c b/nbd/server.c
index adcfcdeacb7..b8e8694f3f4 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -1415,11 +1415,13 @@ nbd_read_eof(NBDClient *client, void *buffer, size_t size, Error **errp)
 static int coroutine_fn nbd_receive_request(NBDClient *client, NBDRequest *request,
                                             Error **errp)
 {
-    uint8_t buf[NBD_REQUEST_SIZE];
-    uint32_t magic;
+    uint8_t buf[NBD_EXTENDED_REQUEST_SIZE];
+    uint32_t magic, expect;
     int ret;
+    size_t size = client->mode >= NBD_MODE_EXTENDED ?
+        NBD_EXTENDED_REQUEST_SIZE : NBD_REQUEST_SIZE;

-    ret = nbd_read_eof(client, buf, sizeof(buf), errp);
+    ret = nbd_read_eof(client, buf, size, errp);
     if (ret < 0) {
         return ret;
     }
@@ -1427,13 +1429,21 @@ static int coroutine_fn nbd_receive_request(NBDClient *client, NBDRequest *reque
         return -EIO;
     }

-    /* Request
-       [ 0 ..  3]   magic   (NBD_REQUEST_MAGIC)
-       [ 4 ..  5]   flags   (NBD_CMD_FLAG_FUA, ...)
-       [ 6 ..  7]   type    (NBD_CMD_READ, ...)
-       [ 8 .. 15]   cookie
-       [16 .. 23]   from
-       [24 .. 27]   len
+    /*
+     * Compact request
+     *  [ 0 ..  3]   magic   (NBD_REQUEST_MAGIC)
+     *  [ 4 ..  5]   flags   (NBD_CMD_FLAG_FUA, ...)
+     *  [ 6 ..  7]   type    (NBD_CMD_READ, ...)
+     *  [ 8 .. 15]   cookie
+     *  [16 .. 23]   from
+     *  [24 .. 27]   len
+     * Extended request
+     *  [ 0 ..  3]   magic   (NBD_EXTENDED_REQUEST_MAGIC)
+     *  [ 4 ..  5]   flags   (NBD_CMD_FLAG_FUA, NBD_CMD_FLAG_PAYLOAD_LEN, ...)
+     *  [ 6 ..  7]   type    (NBD_CMD_READ, ...)
+     *  [ 8 .. 15]   cookie
+     *  [16 .. 23]   from
+     *  [24 .. 31]   len
      */

     magic = ldl_be_p(buf);
@@ -1441,13 +1451,20 @@ static int coroutine_fn nbd_receive_request(NBDClient *client, NBDRequest *reque
     request->type   = lduw_be_p(buf + 6);
     request->cookie = ldq_be_p(buf + 8);
     request->from   = ldq_be_p(buf + 16);
-    request->len    = (uint32_t)ldl_be_p(buf + 24); /* widen 32 to 64 bits */
+    if (client->mode >= NBD_MODE_EXTENDED) {
+        request->len = ldq_be_p(buf + 24);
+        expect = NBD_EXTENDED_REQUEST_MAGIC;
+    } else {
+        request->len = (uint32_t)ldl_be_p(buf + 24); /* widen 32 to 64 bits */
+        expect = NBD_REQUEST_MAGIC;
+    }

     trace_nbd_receive_request(magic, request->flags, request->type,
                               request->from, request->len);

-    if (magic != NBD_REQUEST_MAGIC) {
-        error_setg(errp, "invalid magic (got 0x%" PRIx32 ")", magic);
+    if (magic != expect) {
+        error_setg(errp, "invalid magic (got 0x%" PRIx32 ", expected 0x%"
+                   PRIx32 ")", magic, expect);
         return -EINVAL;
     }
     return 0;
-- 
2.41.0




* [PATCH v6 08/17] nbd/server: Prepare to send extended header replies
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (6 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 07/17] nbd/server: Prepare to receive extended header requests Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-08-29 17:58 ` [PATCH v6 09/17] nbd/server: Support 64-bit block status Eric Blake
                   ` (8 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov

Although extended mode is not yet enabled, once we do turn it on, we
need to reply with extended headers to all messages.  Update the
necessary low-level entry points so that all other callers
automatically get the right header based on the current mode.
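
For reference, the two chunk headers that set_be_chunk() now has to
produce differ only at the tail (all fields big-endian on the wire):

    structured: magic(4) flags(2) type(2) cookie(8) length(4)           = 20 bytes
    extended:   magic(4) flags(2) type(2) cookie(8) offset(8) length(8) = 32 bytes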

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---

v5: s/iov->iov_len/iov[0].iov_len/ [Vladimir], add R-b

v4: new patch, split out from v3 9/14
---
 nbd/server.c | 30 ++++++++++++++++++++++--------
 1 file changed, 22 insertions(+), 8 deletions(-)

diff --git a/nbd/server.c b/nbd/server.c
index b8e8694f3f4..efeb2e12bd2 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -1950,8 +1950,6 @@ static inline void set_be_chunk(NBDClient *client, struct iovec *iov,
                                 size_t niov, uint16_t flags, uint16_t type,
                                 NBDRequest *request)
 {
-    /* TODO - handle structured vs. extended replies */
-    NBDStructuredReplyChunk *chunk = iov->iov_base;
     size_t i, length = 0;

     for (i = 1; i < niov; i++) {
@@ -1959,12 +1957,26 @@ static inline void set_be_chunk(NBDClient *client, struct iovec *iov,
     }
     assert(length <= NBD_MAX_BUFFER_SIZE + sizeof(NBDStructuredReadData));

-    iov[0].iov_len = sizeof(*chunk);
-    stl_be_p(&chunk->magic, NBD_STRUCTURED_REPLY_MAGIC);
-    stw_be_p(&chunk->flags, flags);
-    stw_be_p(&chunk->type, type);
-    stq_be_p(&chunk->cookie, request->cookie);
-    stl_be_p(&chunk->length, length);
+    if (client->mode >= NBD_MODE_EXTENDED) {
+        NBDExtendedReplyChunk *chunk = iov->iov_base;
+
+        iov[0].iov_len = sizeof(*chunk);
+        stl_be_p(&chunk->magic, NBD_EXTENDED_REPLY_MAGIC);
+        stw_be_p(&chunk->flags, flags);
+        stw_be_p(&chunk->type, type);
+        stq_be_p(&chunk->cookie, request->cookie);
+        stq_be_p(&chunk->offset, request->from);
+        stq_be_p(&chunk->length, length);
+    } else {
+        NBDStructuredReplyChunk *chunk = iov->iov_base;
+
+        iov[0].iov_len = sizeof(*chunk);
+        stl_be_p(&chunk->magic, NBD_STRUCTURED_REPLY_MAGIC);
+        stw_be_p(&chunk->flags, flags);
+        stw_be_p(&chunk->type, type);
+        stq_be_p(&chunk->cookie, request->cookie);
+        stl_be_p(&chunk->length, length);
+    }
 }

 static int coroutine_fn nbd_co_send_chunk_done(NBDClient *client,
@@ -2515,6 +2527,8 @@ static coroutine_fn int nbd_send_generic_reply(NBDClient *client,
 {
     if (client->mode >= NBD_MODE_STRUCTURED && ret < 0) {
         return nbd_co_send_chunk_error(client, request, -ret, error_msg, errp);
+    } else if (client->mode >= NBD_MODE_EXTENDED) {
+        return nbd_co_send_chunk_done(client, request, errp);
     } else {
         return nbd_co_send_simple_reply(client, request, ret < 0 ? -ret : 0,
                                         NULL, 0, errp);
-- 
2.41.0




* [PATCH v6 09/17] nbd/server: Support 64-bit block status
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (7 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 08/17] nbd/server: Prepare to send extended header replies Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-08-29 17:58 ` [PATCH v6 10/17] nbd/server: Enable initial support for extended headers Eric Blake
                   ` (7 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov

The NBD spec states that if the client negotiates extended headers,
the server must avoid NBD_REPLY_TYPE_BLOCK_STATUS and instead use
NBD_REPLY_TYPE_BLOCK_STATUS_EXT which supports 64-bit lengths, even if
the reply does not need more than 32 bits.  As of this patch,
client->mode is still never NBD_MODE_EXTENDED, so the code added here
does not take effect until the next patch enables negotiation.

For now, none of the metacontexts that we know how to export populates
more than 32 bits of information, so we don't have to worry about
NBD_REP_ERR_EXT_HEADER_REQD or filtering during handshake, and we
always send all zeroes for the upper 32 bits of status during
NBD_CMD_BLOCK_STATUS.

Note that we previously had some interesting size-juggling on call
chains, such as:

nbd_co_send_block_status(uint32_t length)
-> blockstatus_to_extents(uint32_t bytes)
  -> bdrv_block_status_above(bytes, &uint64_t num)
  -> nbd_extent_array_add(uint64_t num)
    -> store num in 32-bit length

But we were lucky that it never overflowed: bdrv_block_status_above
never sets num larger than bytes, and we had previously been capping
'bytes' at 32 bits (since the protocol does not allow sending a larger
request without extended headers).  This patch adds some assertions
that ensure we continue to avoid overflowing 32 bits for a narrow
client, while fully utilizing 64 bits all the way through when the
client understands that.  Even in 64-bit math, overflow is not an
issue, because all lengths are coming from the block layer, and we
know that the block layer does not support images larger than off_t
(if lengths were coming from the network, the story would be
different).

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---

v5: stronger justification on assertion [Vladimir], add R-b

v4: split conversion to big-endian across two helper functions rather
than in-place union [Vladimir]
---
 nbd/server.c | 108 ++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 82 insertions(+), 26 deletions(-)

diff --git a/nbd/server.c b/nbd/server.c
index efeb2e12bd2..f5e60b5ab88 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -2115,20 +2115,24 @@ static int coroutine_fn nbd_co_send_sparse_read(NBDClient *client,
 }

 typedef struct NBDExtentArray {
-    NBDExtent32 *extents;
+    NBDExtent64 *extents;
     unsigned int nb_alloc;
     unsigned int count;
     uint64_t total_length;
+    bool extended;
     bool can_add;
     bool converted_to_be;
 } NBDExtentArray;

-static NBDExtentArray *nbd_extent_array_new(unsigned int nb_alloc)
+static NBDExtentArray *nbd_extent_array_new(unsigned int nb_alloc,
+                                            NBDMode mode)
 {
     NBDExtentArray *ea = g_new0(NBDExtentArray, 1);

+    assert(mode >= NBD_MODE_STRUCTURED);
     ea->nb_alloc = nb_alloc;
-    ea->extents = g_new(NBDExtent32, nb_alloc);
+    ea->extents = g_new(NBDExtent64, nb_alloc);
+    ea->extended = mode >= NBD_MODE_EXTENDED;
     ea->can_add = true;

     return ea;
@@ -2147,15 +2151,36 @@ static void nbd_extent_array_convert_to_be(NBDExtentArray *ea)
     int i;

     assert(!ea->converted_to_be);
+    assert(ea->extended);
     ea->can_add = false;
     ea->converted_to_be = true;

     for (i = 0; i < ea->count; i++) {
-        ea->extents[i].flags = cpu_to_be32(ea->extents[i].flags);
-        ea->extents[i].length = cpu_to_be32(ea->extents[i].length);
+        ea->extents[i].length = cpu_to_be64(ea->extents[i].length);
+        ea->extents[i].flags = cpu_to_be64(ea->extents[i].flags);
     }
 }

+/* Further modifications of the array after conversion are abandoned */
+static NBDExtent32 *nbd_extent_array_convert_to_narrow(NBDExtentArray *ea)
+{
+    int i;
+    NBDExtent32 *extents = g_new(NBDExtent32, ea->count);
+
+    assert(!ea->converted_to_be);
+    assert(!ea->extended);
+    ea->can_add = false;
+    ea->converted_to_be = true;
+
+    for (i = 0; i < ea->count; i++) {
+        assert((ea->extents[i].length | ea->extents[i].flags) <= UINT32_MAX);
+        extents[i].length = cpu_to_be32(ea->extents[i].length);
+        extents[i].flags = cpu_to_be32(ea->extents[i].flags);
+    }
+
+    return extents;
+}
+
 /*
  * Add extent to NBDExtentArray. If extent can't be added (no available space),
  * return -1.
@@ -2166,19 +2191,27 @@ static void nbd_extent_array_convert_to_be(NBDExtentArray *ea)
  * would result in an incorrect range reported to the client)
  */
 static int nbd_extent_array_add(NBDExtentArray *ea,
-                                uint32_t length, uint32_t flags)
+                                uint64_t length, uint32_t flags)
 {
     assert(ea->can_add);

     if (!length) {
         return 0;
     }
+    if (!ea->extended) {
+        assert(length <= UINT32_MAX);
+    }

     /* Extend previous extent if flags are the same */
     if (ea->count > 0 && flags == ea->extents[ea->count - 1].flags) {
-        uint64_t sum = (uint64_t)length + ea->extents[ea->count - 1].length;
+        uint64_t sum = length + ea->extents[ea->count - 1].length;

-        if (sum <= UINT32_MAX) {
+        /*
+         * sum cannot overflow: the block layer bounds image size at
+         * 2^63, and ea->extents[].length comes from the block layer.
+         */
+        assert(sum >= length);
+        if (sum <= UINT32_MAX || ea->extended) {
             ea->extents[ea->count - 1].length = sum;
             ea->total_length += length;
             return 0;
@@ -2191,7 +2224,7 @@ static int nbd_extent_array_add(NBDExtentArray *ea,
     }

     ea->total_length += length;
-    ea->extents[ea->count] = (NBDExtent32) {.length = length, .flags = flags};
+    ea->extents[ea->count] = (NBDExtent64) {.length = length, .flags = flags};
     ea->count++;

     return 0;
@@ -2260,20 +2293,39 @@ nbd_co_send_extents(NBDClient *client, NBDRequest *request, NBDExtentArray *ea,
                     bool last, uint32_t context_id, Error **errp)
 {
     NBDReply hdr;
-    NBDStructuredMeta chunk;
-    struct iovec iov[] = {
-        {.iov_base = &hdr},
-        {.iov_base = &chunk, .iov_len = sizeof(chunk)},
-        {.iov_base = ea->extents, .iov_len = ea->count * sizeof(ea->extents[0])}
-    };
-
-    nbd_extent_array_convert_to_be(ea);
+    NBDStructuredMeta meta;
+    NBDExtendedMeta meta_ext;
+    g_autofree NBDExtent32 *extents = NULL;
+    uint16_t type;
+    struct iovec iov[] = { {.iov_base = &hdr}, {0}, {0} };
+
+    if (client->mode >= NBD_MODE_EXTENDED) {
+        type = NBD_REPLY_TYPE_BLOCK_STATUS_EXT;
+
+        iov[1].iov_base = &meta_ext;
+        iov[1].iov_len = sizeof(meta_ext);
+        stl_be_p(&meta_ext.context_id, context_id);
+        stl_be_p(&meta_ext.count, ea->count);
+
+        nbd_extent_array_convert_to_be(ea);
+        iov[2].iov_base = ea->extents;
+        iov[2].iov_len = ea->count * sizeof(ea->extents[0]);
+    } else {
+        type = NBD_REPLY_TYPE_BLOCK_STATUS;
+
+        iov[1].iov_base = &meta;
+        iov[1].iov_len = sizeof(meta);
+        stl_be_p(&meta.context_id, context_id);
+
+        extents = nbd_extent_array_convert_to_narrow(ea);
+        iov[2].iov_base = extents;
+        iov[2].iov_len = ea->count * sizeof(extents[0]);
+    }

     trace_nbd_co_send_extents(request->cookie, ea->count, context_id,
                               ea->total_length, last);
-    set_be_chunk(client, iov, 3, last ? NBD_REPLY_FLAG_DONE : 0,
-                 NBD_REPLY_TYPE_BLOCK_STATUS, request);
-    stl_be_p(&chunk.context_id, context_id);
+    set_be_chunk(client, iov, 3, last ? NBD_REPLY_FLAG_DONE : 0, type,
+                 request);

     return nbd_co_send_iov(client, iov, 3, errp);
 }
@@ -2282,13 +2334,14 @@ nbd_co_send_extents(NBDClient *client, NBDRequest *request, NBDExtentArray *ea,
 static int
 coroutine_fn nbd_co_send_block_status(NBDClient *client, NBDRequest *request,
                                       BlockBackend *blk, uint64_t offset,
-                                      uint32_t length, bool dont_fragment,
+                                      uint64_t length, bool dont_fragment,
                                       bool last, uint32_t context_id,
                                       Error **errp)
 {
     int ret;
     unsigned int nb_extents = dont_fragment ? 1 : NBD_MAX_BLOCK_STATUS_EXTENTS;
-    g_autoptr(NBDExtentArray) ea = nbd_extent_array_new(nb_extents);
+    g_autoptr(NBDExtentArray) ea =
+        nbd_extent_array_new(nb_extents, client->mode);

     if (context_id == NBD_META_ID_BASE_ALLOCATION) {
         ret = blockstatus_to_extents(blk, offset, length, ea);
@@ -2311,11 +2364,12 @@ static void bitmap_to_extents(BdrvDirtyBitmap *bitmap,
     int64_t start, dirty_start, dirty_count;
     int64_t end = offset + length;
     bool full = false;
+    int64_t bound = es->extended ? INT64_MAX : INT32_MAX;

     bdrv_dirty_bitmap_lock(bitmap);

     for (start = offset;
-         bdrv_dirty_bitmap_next_dirty_area(bitmap, start, end, INT32_MAX,
+         bdrv_dirty_bitmap_next_dirty_area(bitmap, start, end, bound,
                                            &dirty_start, &dirty_count);
          start = dirty_start + dirty_count)
     {
@@ -2339,12 +2393,13 @@ static int coroutine_fn nbd_co_send_bitmap(NBDClient *client,
                                            NBDRequest *request,
                                            BdrvDirtyBitmap *bitmap,
                                            uint64_t offset,
-                                           uint32_t length, bool dont_fragment,
+                                           uint64_t length, bool dont_fragment,
                                            bool last, uint32_t context_id,
                                            Error **errp)
 {
     unsigned int nb_extents = dont_fragment ? 1 : NBD_MAX_BLOCK_STATUS_EXTENTS;
-    g_autoptr(NBDExtentArray) ea = nbd_extent_array_new(nb_extents);
+    g_autoptr(NBDExtentArray) ea =
+        nbd_extent_array_new(nb_extents, client->mode);

     bitmap_to_extents(bitmap, offset, length, ea);

@@ -2674,7 +2729,8 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
             return nbd_send_generic_reply(client, request, -EINVAL,
                                           "need non-zero length", errp);
         }
-        assert(request->len <= UINT32_MAX);
+        assert(client->mode >= NBD_MODE_EXTENDED ||
+               request->len <= UINT32_MAX);
         if (client->export_meta.count) {
             bool dont_fragment = request->flags & NBD_CMD_FLAG_REQ_ONE;
             int contexts_remaining = client->export_meta.count;
-- 
2.41.0




* [PATCH v6 10/17] nbd/server: Enable initial support for extended headers
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (8 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 09/17] nbd/server: Support 64-bit block status Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-08-29 17:58 ` [PATCH v6 11/17] nbd/client: Plumb errp through nbd_receive_replies Eric Blake
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov

Time to start supporting clients that request extended headers.  Now
we can finally reach the code added across several previous patches.

Even though the NBD spec has been altered to allow us to accept
NBD_CMD_READ larger than the max payload size (provided our response
is a hole or broken up over more than one data chunk), we are not
planning to take advantage of that, and continue to cap NBD_CMD_READ
to 32M regardless of header size.

For NBD_CMD_WRITE_ZEROES and NBD_CMD_TRIM, the block layer already
supports 64-bit operations without any effort on our part.  For
NBD_CMD_BLOCK_STATUS, the client's length is a hint, and the previous
patch took care of implementing the required
NBD_REPLY_TYPE_BLOCK_STATUS_EXT.

We do not yet support clients that want to do request payload
filtering of NBD_CMD_BLOCK_STATUS; that will be added in later
patches, but is not essential for qemu as a client since qemu only
requests the single context base:allocation.
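
In terms of the option exchange this enables (a rough outline of the
handshake as implemented below):

    C: NBD_OPT_EXTENDED_HEADERS (no payload)
    S: NBD_REP_ACK                  -> both sides switch to extended mode
       (NBD_REP_ERR_INVALID instead, if extended headers were already
        negotiated; and a later NBD_OPT_STRUCTURED_REPLY now gets
        NBD_REP_ERR_EXT_HEADER_REQD)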

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---

v5: add R-b, s/8.1/8.2/

v4: split out parts into earlier patches, rebase to earlier changes,
simplify handling of generic replies, retitle (compare to v3 9/14)
---
 docs/interop/nbd.txt |  1 +
 nbd/server.c         | 21 +++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/docs/interop/nbd.txt b/docs/interop/nbd.txt
index f5ca25174a6..9aae5e1f294 100644
--- a/docs/interop/nbd.txt
+++ b/docs/interop/nbd.txt
@@ -69,3 +69,4 @@ NBD_CMD_BLOCK_STATUS for "qemu:dirty-bitmap:", NBD_CMD_CACHE
 NBD_CMD_FLAG_FAST_ZERO
 * 5.2: NBD_CMD_BLOCK_STATUS for "qemu:allocation-depth"
 * 7.1: NBD_FLAG_CAN_MULTI_CONN for shareable writable exports
+* 8.2: NBD_OPT_EXTENDED_HEADERS
diff --git a/nbd/server.c b/nbd/server.c
index f5e60b5ab88..c0a5b00b0f0 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -482,6 +482,10 @@ static int nbd_negotiate_handle_export_name(NBDClient *client, bool no_zeroes,
         [10 .. 133]   reserved     (0) [unless no_zeroes]
      */
     trace_nbd_negotiate_handle_export_name();
+    if (client->mode >= NBD_MODE_EXTENDED) {
+        error_setg(errp, "Extended headers already negotiated");
+        return -EINVAL;
+    }
     if (client->optlen > NBD_MAX_STRING_SIZE) {
         error_setg(errp, "Bad length received");
         return -EINVAL;
@@ -1264,6 +1268,10 @@ static int nbd_negotiate_options(NBDClient *client, Error **errp)
             case NBD_OPT_STRUCTURED_REPLY:
                 if (length) {
                     ret = nbd_reject_length(client, false, errp);
+                } else if (client->mode >= NBD_MODE_EXTENDED) {
+                    ret = nbd_negotiate_send_rep_err(
+                        client, NBD_REP_ERR_EXT_HEADER_REQD, errp,
+                        "extended headers already negotiated");
                 } else if (client->mode >= NBD_MODE_STRUCTURED) {
                     ret = nbd_negotiate_send_rep_err(
                         client, NBD_REP_ERR_INVALID, errp,
@@ -1280,6 +1288,19 @@ static int nbd_negotiate_options(NBDClient *client, Error **errp)
                                                  errp);
                 break;

+            case NBD_OPT_EXTENDED_HEADERS:
+                if (length) {
+                    ret = nbd_reject_length(client, false, errp);
+                } else if (client->mode >= NBD_MODE_EXTENDED) {
+                    ret = nbd_negotiate_send_rep_err(
+                        client, NBD_REP_ERR_INVALID, errp,
+                        "extended headers already negotiated");
+                } else {
+                    ret = nbd_negotiate_send_rep(client, NBD_REP_ACK, errp);
+                    client->mode = NBD_MODE_EXTENDED;
+                }
+                break;
+
             default:
                 ret = nbd_opt_drop(client, NBD_REP_ERR_UNSUP, errp,
                                    "Unsupported option %" PRIu32 " (%s)",
-- 
2.41.0




* [PATCH v6 11/17] nbd/client: Plumb errp through nbd_receive_replies
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (9 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 10/17] nbd/server: Enable initial support for extended headers Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-08-29 17:58 ` [PATCH v6 12/17] nbd/client: Initial support for extended headers Eric Blake
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov, Kevin Wolf, Hanna Reitz

Instead of ignoring the low-level error only to refabricate our own
message to pass to the caller, we can plumb the caller's errp down to
the low level.
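
As a small illustration of the effect (messages taken from the diff
below), the caller now only prepends context instead of inventing a
replacement message:

    error_setg(errp, "unexpected cookie value");   /* low level */
    error_prepend(errp, "Connection closed: ");    /* caller */
    /* user sees: "Connection closed: unexpected cookie value" */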

Signed-off-by: Eric Blake <eblake@redhat.com>
---

v5: set errp on more failure cases [Vladimir], typo fix

v4: new patch [Vladimir]
---
 block/nbd.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 57123c17f94..4b60b832b70 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -417,7 +417,8 @@ static void coroutine_fn GRAPH_RDLOCK nbd_reconnect_attempt(BDRVNBDState *s)
     reconnect_delay_timer_del(s);
 }

-static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t cookie)
+static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t cookie,
+                                            Error **errp)
 {
     int ret;
     uint64_t ind = COOKIE_TO_INDEX(cookie), ind2;
@@ -458,20 +459,25 @@ static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t cookie)

         /* We are under mutex and cookie is 0. We have to do the dirty work. */
         assert(s->reply.cookie == 0);
-        ret = nbd_receive_reply(s->bs, s->ioc, &s->reply, NULL);
-        if (ret <= 0) {
-            ret = ret ? ret : -EIO;
+        ret = nbd_receive_reply(s->bs, s->ioc, &s->reply, errp);
+        if (ret == 0) {
+            ret = -EIO;
+            error_setg(errp, "server dropped connection");
+        }
+        if (ret < 0) {
             nbd_channel_error(s, ret);
             return ret;
         }
         if (nbd_reply_is_structured(&s->reply) &&
             s->info.mode < NBD_MODE_STRUCTURED) {
             nbd_channel_error(s, -EINVAL);
+            error_setg(errp, "unexpected structured reply");
             return -EINVAL;
         }
         ind2 = COOKIE_TO_INDEX(s->reply.cookie);
         if (ind2 >= MAX_NBD_REQUESTS || !s->requests[ind2].coroutine) {
             nbd_channel_error(s, -EINVAL);
+            error_setg(errp, "unexpected cookie value");
             return -EINVAL;
         }
         if (s->reply.cookie == cookie) {
@@ -843,9 +849,9 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
     }
     *request_ret = 0;

-    ret = nbd_receive_replies(s, cookie);
+    ret = nbd_receive_replies(s, cookie, errp);
     if (ret < 0) {
-        error_setg(errp, "Connection closed");
+        error_prepend(errp, "Connection closed: ");
         return -EIO;
     }
     assert(s->ioc);
-- 
2.41.0




* [PATCH v6 12/17] nbd/client: Initial support for extended headers
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (10 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 11/17] nbd/client: Plumb errp through nbd_receive_replies Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-08-29 17:58 ` [PATCH v6 13/17] nbd/client: Accept 64-bit block status chunks Eric Blake
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov, Kevin Wolf, Hanna Reitz

Update the client code to be able to send an extended request, and
parse an extended header from the server.  Note that since we reject
any structured reply with a too-large payload, we can always normalize
a valid header back into the compact form, so that the caller need not
deal with two branches of a union.  Still, until a later patch lets
the client negotiate extended headers, the code added here should not
be reached.  Because of the different magic numbers, it is just as
easy to trace and then tolerate a non-compliant server sending the
wrong header reply as it would be to insist that the server is
compliant.

Signed-off-by: Eric Blake <eblake@redhat.com>
---

v5: fix logic bug on error reporting [Vladimir]

v4: split off errp handling to separate patch [Vladimir], better
function naming [Vladimir]
---
 include/block/nbd.h |   3 +-
 block/nbd.c         |   2 +-
 nbd/client.c        | 104 +++++++++++++++++++++++++++++---------------
 nbd/trace-events    |   3 +-
 4 files changed, 74 insertions(+), 38 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index ec4e8eda6bd..4e9ce679e37 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -390,7 +390,8 @@ int nbd_init(int fd, QIOChannelSocket *sioc, NBDExportInfo *info,
              Error **errp);
 int nbd_send_request(QIOChannel *ioc, NBDRequest *request);
 int coroutine_fn nbd_receive_reply(BlockDriverState *bs, QIOChannel *ioc,
-                                   NBDReply *reply, Error **errp);
+                                   NBDReply *reply, NBDMode mode,
+                                   Error **errp);
 int nbd_client(int fd);
 int nbd_disconnect(int fd);
 int nbd_errno_to_system_errno(int err);
diff --git a/block/nbd.c b/block/nbd.c
index 4b60b832b70..d60782b25c7 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -459,7 +459,7 @@ static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t cookie,

         /* We are under mutex and cookie is 0. We have to do the dirty work. */
         assert(s->reply.cookie == 0);
-        ret = nbd_receive_reply(s->bs, s->ioc, &s->reply, errp);
+        ret = nbd_receive_reply(s->bs, s->ioc, &s->reply, s->info.mode, errp);
         if (ret == 0) {
             ret = -EIO;
             error_setg(errp, "server dropped connection");
diff --git a/nbd/client.c b/nbd/client.c
index 1495a9b0ab1..e78d6c00f18 100644
--- a/nbd/client.c
+++ b/nbd/client.c
@@ -1352,22 +1352,29 @@ int nbd_disconnect(int fd)

 int nbd_send_request(QIOChannel *ioc, NBDRequest *request)
 {
-    uint8_t buf[NBD_REQUEST_SIZE];
+    uint8_t buf[NBD_EXTENDED_REQUEST_SIZE];
+    size_t len;

-    assert(request->mode <= NBD_MODE_STRUCTURED); /* TODO handle extended */
-    assert(request->len <= UINT32_MAX);
     trace_nbd_send_request(request->from, request->len, request->cookie,
                            request->flags, request->type,
                            nbd_cmd_lookup(request->type));

-    stl_be_p(buf, NBD_REQUEST_MAGIC);
     stw_be_p(buf + 4, request->flags);
     stw_be_p(buf + 6, request->type);
     stq_be_p(buf + 8, request->cookie);
     stq_be_p(buf + 16, request->from);
-    stl_be_p(buf + 24, request->len);
+    if (request->mode >= NBD_MODE_EXTENDED) {
+        stl_be_p(buf, NBD_EXTENDED_REQUEST_MAGIC);
+        stq_be_p(buf + 24, request->len);
+        len = NBD_EXTENDED_REQUEST_SIZE;
+    } else {
+        assert(request->len <= UINT32_MAX);
+        stl_be_p(buf, NBD_REQUEST_MAGIC);
+        stl_be_p(buf + 24, request->len);
+        len = NBD_REQUEST_SIZE;
+    }

-    return nbd_write(ioc, buf, sizeof(buf), NULL);
+    return nbd_write(ioc, buf, len, NULL);
 }

 /* nbd_receive_simple_reply
@@ -1394,30 +1401,36 @@ static int nbd_receive_simple_reply(QIOChannel *ioc, NBDSimpleReply *reply,
     return 0;
 }

-/* nbd_receive_structured_reply_chunk
+/* nbd_receive_reply_chunk_header
  * Read structured reply chunk except magic field (which should be already
- * read).
+ * read).  Normalize into the compact form.
  * Payload is not read.
  */
-static int nbd_receive_structured_reply_chunk(QIOChannel *ioc,
-                                              NBDStructuredReplyChunk *chunk,
-                                              Error **errp)
+static int nbd_receive_reply_chunk_header(QIOChannel *ioc, NBDReply *chunk,
+                                          Error **errp)
 {
     int ret;
+    size_t len;
+    uint64_t payload_len;

-    assert(chunk->magic == NBD_STRUCTURED_REPLY_MAGIC);
+    if (chunk->magic == NBD_STRUCTURED_REPLY_MAGIC) {
+        len = sizeof(chunk->structured);
+    } else {
+        assert(chunk->magic == NBD_EXTENDED_REPLY_MAGIC);
+        len = sizeof(chunk->extended);
+    }

     ret = nbd_read(ioc, (uint8_t *)chunk + sizeof(chunk->magic),
-                   sizeof(*chunk) - sizeof(chunk->magic), "structured chunk",
+                   len - sizeof(chunk->magic), "structured chunk",
                    errp);
     if (ret < 0) {
         return ret;
     }

-    chunk->flags = be16_to_cpu(chunk->flags);
-    chunk->type = be16_to_cpu(chunk->type);
-    chunk->cookie = be64_to_cpu(chunk->cookie);
-    chunk->length = be32_to_cpu(chunk->length);
+    /* flags, type, and cookie occupy same space between forms */
+    chunk->structured.flags = be16_to_cpu(chunk->structured.flags);
+    chunk->structured.type = be16_to_cpu(chunk->structured.type);
+    chunk->structured.cookie = be64_to_cpu(chunk->structured.cookie);

     /*
      * Because we use BLOCK_STATUS with REQ_ONE, and cap READ requests
@@ -1425,11 +1438,20 @@ static int nbd_receive_structured_reply_chunk(QIOChannel *ioc,
      * this.  Even if we stopped using REQ_ONE, sane servers will cap
      * the number of extents they return for block status.
      */
-    if (chunk->length > NBD_MAX_BUFFER_SIZE + sizeof(NBDStructuredReadData)) {
+    if (chunk->magic == NBD_STRUCTURED_REPLY_MAGIC) {
+        payload_len = be32_to_cpu(chunk->structured.length);
+    } else {
+        /* For now, we are ignoring the extended header offset. */
+        payload_len = be64_to_cpu(chunk->extended.length);
+        chunk->magic = NBD_STRUCTURED_REPLY_MAGIC;
+    }
+    if (payload_len > NBD_MAX_BUFFER_SIZE + sizeof(NBDStructuredReadData)) {
         error_setg(errp, "server chunk %" PRIu32 " (%s) payload is too long",
-                   chunk->type, nbd_rep_lookup(chunk->type));
+                   chunk->structured.type,
+                   nbd_rep_lookup(chunk->structured.type));
         return -EINVAL;
     }
+    chunk->structured.length = payload_len;

     return 0;
 }
@@ -1476,19 +1498,21 @@ nbd_read_eof(BlockDriverState *bs, QIOChannel *ioc, void *buffer, size_t size,

 /* nbd_receive_reply
  *
- * Decreases bs->in_flight while waiting for a new reply. This yield is where
- * we wait indefinitely and the coroutine must be able to be safely reentered
- * for nbd_client_attach_aio_context().
+ * Wait for a new reply. If this yields, the coroutine must be able to be
+ * safely reentered for nbd_client_attach_aio_context().  @mode determines
+ * which reply magic we are expecting, although this normalizes the result
+ * so that the caller only has to work with compact headers.
  *
  * Returns 1 on success
- *         0 on eof, when no data was read (errp is not set)
- *         negative errno on failure (errp is set)
+ *         0 on eof, when no data was read
+ *         negative errno on failure
  */
 int coroutine_fn nbd_receive_reply(BlockDriverState *bs, QIOChannel *ioc,
-                                   NBDReply *reply, Error **errp)
+                                   NBDReply *reply, NBDMode mode, Error **errp)
 {
     int ret;
     const char *type;
+    uint32_t expected;

     ret = nbd_read_eof(bs, ioc, &reply->magic, sizeof(reply->magic), errp);
     if (ret <= 0) {
@@ -1497,34 +1521,44 @@ int coroutine_fn nbd_receive_reply(BlockDriverState *bs, QIOChannel *ioc,

     reply->magic = be32_to_cpu(reply->magic);

+    /* Diagnose but accept wrong-width header */
     switch (reply->magic) {
     case NBD_SIMPLE_REPLY_MAGIC:
+        if (mode >= NBD_MODE_EXTENDED) {
+            trace_nbd_receive_wrong_header(reply->magic,
+                                           nbd_mode_lookup(mode));
+        }
         ret = nbd_receive_simple_reply(ioc, &reply->simple, errp);
         if (ret < 0) {
-            break;
+            return ret;
         }
         trace_nbd_receive_simple_reply(reply->simple.error,
                                        nbd_err_lookup(reply->simple.error),
                                        reply->cookie);
         break;
     case NBD_STRUCTURED_REPLY_MAGIC:
-        ret = nbd_receive_structured_reply_chunk(ioc, &reply->structured, errp);
+    case NBD_EXTENDED_REPLY_MAGIC:
+        expected = mode >= NBD_MODE_EXTENDED ? NBD_EXTENDED_REPLY_MAGIC
+            : NBD_STRUCTURED_REPLY_MAGIC;
+        if (reply->magic != expected) {
+            trace_nbd_receive_wrong_header(reply->magic,
+                                           nbd_mode_lookup(mode));
+        }
+        ret = nbd_receive_reply_chunk_header(ioc, reply, errp);
         if (ret < 0) {
-            break;
+            return ret;
         }
         type = nbd_reply_type_lookup(reply->structured.type);
-        trace_nbd_receive_structured_reply_chunk(reply->structured.flags,
-                                                 reply->structured.type, type,
-                                                 reply->structured.cookie,
-                                                 reply->structured.length);
+        trace_nbd_receive_reply_chunk_header(reply->structured.flags,
+                                             reply->structured.type, type,
+                                             reply->structured.cookie,
+                                             reply->structured.length);
         break;
     default:
+        trace_nbd_receive_wrong_header(reply->magic, nbd_mode_lookup(mode));
         error_setg(errp, "invalid magic (got 0x%" PRIx32 ")", reply->magic);
         return -EINVAL;
     }
-    if (ret < 0) {
-        return ret;
-    }

     return 1;
 }
diff --git a/nbd/trace-events b/nbd/trace-events
index c1a3227613f..8f4e20ee9f2 100644
--- a/nbd/trace-events
+++ b/nbd/trace-events
@@ -33,7 +33,8 @@ nbd_client_clear_queue(void) "Clearing NBD queue"
 nbd_client_clear_socket(void) "Clearing NBD socket"
 nbd_send_request(uint64_t from, uint64_t len, uint64_t cookie, uint16_t flags, uint16_t type, const char *name) "Sending request to server: { .from = %" PRIu64", .len = %" PRIu64 ", .cookie = %" PRIu64 ", .flags = 0x%" PRIx16 ", .type = %" PRIu16 " (%s) }"
 nbd_receive_simple_reply(int32_t error, const char *errname, uint64_t cookie) "Got simple reply: { .error = %" PRId32 " (%s), cookie = %" PRIu64" }"
-nbd_receive_structured_reply_chunk(uint16_t flags, uint16_t type, const char *name, uint64_t cookie, uint32_t length) "Got structured reply chunk: { flags = 0x%" PRIx16 ", type = %d (%s), cookie = %" PRIu64 ", length = %" PRIu32 " }"
+nbd_receive_reply_chunk_header(uint16_t flags, uint16_t type, const char *name, uint64_t cookie, uint32_t length) "Got reply chunk header: { flags = 0x%" PRIx16 ", type = %d (%s), cookie = %" PRIu64 ", length = %" PRIu32 " }"
+nbd_receive_wrong_header(uint32_t magic, const char *mode) "Server sent unexpected magic 0x%" PRIx32 " for negotiated mode %s"

 # common.c
 nbd_unknown_error(int err) "Squashing unexpected error %d to EINVAL"
-- 
2.41.0




* [PATCH v6 13/17] nbd/client: Accept 64-bit block status chunks
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (11 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 12/17] nbd/client: Initial support for extended headers Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-08-29 17:58 ` [PATCH v6 14/17] nbd/client: Request extended headers during negotiation Eric Blake
                   ` (3 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov, Kevin Wolf, Hanna Reitz

Once extended mode is enabled, we need to accept 64-bit status replies
(even for replies that don't exceed a 32-bit length).  It is easier to
normalize narrow replies into wide format so that the rest of our code
only has to handle one width.  Although a server is non-compliant if
it sends a 64-bit reply in compact mode, or a 32-bit reply in extended
mode, it is still easy enough to tolerate these mismatches.
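
For reference, the two payload layouts being normalized are (all
fields big-endian; the wide form is NBD_REPLY_TYPE_BLOCK_STATUS_EXT):

    narrow: context_id(4) { length(4) flags(4) }...
    wide:   context_id(4) count(4) { length(8) flags(8) }...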

In normal execution, we are only requesting "base:allocation", which
never exceeds 32 bits for flag values.  But during testing with
x-dirty-bitmap, we can force qemu to connect to some other context
that might have 64-bit status bits; however, we ignore those upper
bits (other than mapping qemu:allocation-depth into something that
'qemu-img map --output=json' can expose), and since that only affects
testing, we really don't bother checking whether more than the two
least-significant bits are set.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---

v5: factor out duplicate length calculation [Vladimir], add R-b

v4: tweak comments and error message about count mismatch, fix setting
of wide in loop [Vladimir]
---
 block/nbd.c        | 49 ++++++++++++++++++++++++++++++++--------------
 block/trace-events |  1 +
 2 files changed, 35 insertions(+), 15 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index d60782b25c7..d37f5425a0f 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -616,13 +616,17 @@ static int nbd_parse_offset_hole_payload(BDRVNBDState *s,
  */
 static int nbd_parse_blockstatus_payload(BDRVNBDState *s,
                                          NBDStructuredReplyChunk *chunk,
-                                         uint8_t *payload, uint64_t orig_length,
-                                         NBDExtent32 *extent, Error **errp)
+                                         uint8_t *payload, bool wide,
+                                         uint64_t orig_length,
+                                         NBDExtent64 *extent, Error **errp)
 {
     uint32_t context_id;
+    uint32_t count;
+    size_t ext_len = wide ? sizeof(*extent) : sizeof(NBDExtent32);
+    size_t pay_len = sizeof(context_id) + wide * sizeof(count) + ext_len;

     /* The server succeeded, so it must have sent [at least] one extent */
-    if (chunk->length < sizeof(context_id) + sizeof(*extent)) {
+    if (chunk->length < pay_len) {
         error_setg(errp, "Protocol error: invalid payload for "
                          "NBD_REPLY_TYPE_BLOCK_STATUS");
         return -EINVAL;
@@ -637,8 +641,15 @@ static int nbd_parse_blockstatus_payload(BDRVNBDState *s,
         return -EINVAL;
     }

-    extent->length = payload_advance32(&payload);
-    extent->flags = payload_advance32(&payload);
+    if (wide) {
+        count = payload_advance32(&payload);
+        extent->length = payload_advance64(&payload);
+        extent->flags = payload_advance64(&payload);
+    } else {
+        count = 0;
+        extent->length = payload_advance32(&payload);
+        extent->flags = payload_advance32(&payload);
+    }

     if (extent->length == 0) {
         error_setg(errp, "Protocol error: server sent status chunk with "
@@ -659,7 +670,7 @@ static int nbd_parse_blockstatus_payload(BDRVNBDState *s,
      * (always a safe status, even if it loses information).
      */
     if (s->info.min_block && !QEMU_IS_ALIGNED(extent->length,
-                                                   s->info.min_block)) {
+                                              s->info.min_block)) {
         trace_nbd_parse_blockstatus_compliance("extent length is unaligned");
         if (extent->length > s->info.min_block) {
             extent->length = QEMU_ALIGN_DOWN(extent->length,
@@ -673,13 +684,15 @@ static int nbd_parse_blockstatus_payload(BDRVNBDState *s,
     /*
      * We used NBD_CMD_FLAG_REQ_ONE, so the server should not have
      * sent us any more than one extent, nor should it have included
-     * status beyond our request in that extent. However, it's easy
-     * enough to ignore the server's noncompliance without killing the
+     * status beyond our request in that extent. Furthermore, a wide
+     * server should have replied with an accurate count (we left
+     * count at 0 for a narrow server).  However, it's easy enough to
+     * ignore the server's noncompliance without killing the
      * connection; just ignore trailing extents, and clamp things to
      * the length of our request.
      */
-    if (chunk->length > sizeof(context_id) + sizeof(*extent)) {
-        trace_nbd_parse_blockstatus_compliance("more than one extent");
+    if (count != wide || chunk->length > pay_len) {
+        trace_nbd_parse_blockstatus_compliance("unexpected extent count");
     }
     if (extent->length > orig_length) {
         extent->length = orig_length;
@@ -1125,7 +1138,7 @@ nbd_co_receive_cmdread_reply(BDRVNBDState *s, uint64_t cookie,

 static int coroutine_fn
 nbd_co_receive_blockstatus_reply(BDRVNBDState *s, uint64_t cookie,
-                                 uint64_t length, NBDExtent32 *extent,
+                                 uint64_t length, NBDExtent64 *extent,
                                  int *request_ret, Error **errp)
 {
     NBDReplyChunkIter iter;
@@ -1138,11 +1151,17 @@ nbd_co_receive_blockstatus_reply(BDRVNBDState *s, uint64_t cookie,
     NBD_FOREACH_REPLY_CHUNK(s, iter, cookie, false, NULL, &reply, &payload) {
         int ret;
         NBDStructuredReplyChunk *chunk = &reply.structured;
+        bool wide;

         assert(nbd_reply_is_structured(&reply));

         switch (chunk->type) {
+        case NBD_REPLY_TYPE_BLOCK_STATUS_EXT:
         case NBD_REPLY_TYPE_BLOCK_STATUS:
+            wide = chunk->type == NBD_REPLY_TYPE_BLOCK_STATUS_EXT;
+            if ((s->info.mode >= NBD_MODE_EXTENDED) != wide) {
+                trace_nbd_extended_headers_compliance("block_status");
+            }
             if (received) {
                 nbd_channel_error(s, -EINVAL);
                 error_setg(&local_err, "Several BLOCK_STATUS chunks in reply");
@@ -1150,9 +1169,9 @@ nbd_co_receive_blockstatus_reply(BDRVNBDState *s, uint64_t cookie,
             }
             received = true;

-            ret = nbd_parse_blockstatus_payload(s, &reply.structured,
-                                                payload, length, extent,
-                                                &local_err);
+            ret = nbd_parse_blockstatus_payload(
+                s, &reply.structured, payload, wide,
+                length, extent, &local_err);
             if (ret < 0) {
                 nbd_channel_error(s, ret);
                 nbd_iter_channel_error(&iter, ret, &local_err);
@@ -1382,7 +1401,7 @@ static int coroutine_fn GRAPH_RDLOCK nbd_client_co_block_status(
         int64_t *pnum, int64_t *map, BlockDriverState **file)
 {
     int ret, request_ret;
-    NBDExtent32 extent = { 0 };
+    NBDExtent64 extent = { 0 };
     BDRVNBDState *s = (BDRVNBDState *)bs->opaque;
     Error *local_err = NULL;

diff --git a/block/trace-events b/block/trace-events
index 925aa554bbf..8e789e1f12f 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -166,6 +166,7 @@ iscsi_xcopy(void *src_lun, uint64_t src_off, void *dst_lun, uint64_t dst_off, ui
 # nbd.c
 nbd_parse_blockstatus_compliance(const char *err) "ignoring extra data from non-compliant server: %s"
 nbd_structured_read_compliance(const char *type) "server sent non-compliant unaligned read %s chunk"
+nbd_extended_headers_compliance(const char *type) "server sent non-compliant %s chunk not matching choice of extended headers"
 nbd_read_reply_entry_fail(int ret, const char *err) "ret = %d, err: %s"
 nbd_co_request_fail(uint64_t from, uint64_t len, uint64_t handle, uint16_t flags, uint16_t type, const char *name, int ret, const char *err) "Request failed { .from = %" PRIu64", .len = %" PRIu64 ", .handle = %" PRIu64 ", .flags = 0x%" PRIx16 ", .type = %" PRIu16 " (%s) } ret = %d, err: %s"
 nbd_client_handshake(const char *export_name) "export '%s'"
-- 
2.41.0



^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v6 14/17] nbd/client: Request extended headers during negotiation
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (12 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 13/17] nbd/client: Accept 64-bit block status chunks Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-08-29 17:58 ` [PATCH v6 15/17] nbd/server: Refactor list of negotiated meta contexts Eric Blake
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov, Kevin Wolf, Hanna Reitz

All the pieces are in place for a client to finally request extended
headers.  Note that we must not request extended headers when qemu-nbd
is used to connect to the kernel module (as nbd.ko does not expect
them, but expects us to do the negotiation in userspace before handing
the socket over to the kernel), but there is no harm in all other
clients requesting them.

Extended headers are not essential to the information collected during
'qemu-nbd --list', but probing for them gives us one more piece of
information in that output.  Update the iotests affected by the new
line of output.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---

v5: add R-b

v4: rebase to earlier changes, tweak commit message for why qemu-nbd
connection to /dev/nbd cannot use extended mode [Vladimir]
---
 nbd/client-connection.c                       |  2 +-
 nbd/client.c                                  | 20 ++++++++++++++-----
 qemu-nbd.c                                    |  3 +++
 tests/qemu-iotests/223.out                    |  6 ++++++
 tests/qemu-iotests/233.out                    |  4 ++++
 tests/qemu-iotests/241.out                    |  3 +++
 tests/qemu-iotests/307.out                    |  5 +++++
 .../tests/nbd-qemu-allocation.out             |  1 +
 8 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/nbd/client-connection.c b/nbd/client-connection.c
index 13e4cb6684b..d9d946da006 100644
--- a/nbd/client-connection.c
+++ b/nbd/client-connection.c
@@ -93,7 +93,7 @@ NBDClientConnection *nbd_client_connection_new(const SocketAddress *saddr,
         .do_negotiation = do_negotiation,

         .initial_info.request_sizes = true,
-        .initial_info.mode = NBD_MODE_STRUCTURED,
+        .initial_info.mode = NBD_MODE_EXTENDED,
         .initial_info.base_allocation = true,
         .initial_info.x_dirty_bitmap = g_strdup(x_dirty_bitmap),
         .initial_info.name = g_strdup(export_name ?: "")
diff --git a/nbd/client.c b/nbd/client.c
index e78d6c00f18..4520c08049e 100644
--- a/nbd/client.c
+++ b/nbd/client.c
@@ -958,15 +958,23 @@ static int nbd_start_negotiate(AioContext *aio_context, QIOChannel *ioc,
         if (fixedNewStyle) {
             int result = 0;

+            if (max_mode >= NBD_MODE_EXTENDED) {
+                result = nbd_request_simple_option(ioc,
+                                                   NBD_OPT_EXTENDED_HEADERS,
+                                                   false, errp);
+                if (result) {
+                    return result < 0 ? -EINVAL : NBD_MODE_EXTENDED;
+                }
+            }
             if (max_mode >= NBD_MODE_STRUCTURED) {
                 result = nbd_request_simple_option(ioc,
                                                    NBD_OPT_STRUCTURED_REPLY,
                                                    false, errp);
-                if (result < 0) {
-                    return -EINVAL;
+                if (result) {
+                    return result < 0 ? -EINVAL : NBD_MODE_STRUCTURED;
                 }
             }
-            return result ? NBD_MODE_STRUCTURED : NBD_MODE_SIMPLE;
+            return NBD_MODE_SIMPLE;
         } else {
             return NBD_MODE_EXPORT_NAME;
         }
@@ -1040,6 +1048,7 @@ int nbd_receive_negotiate(AioContext *aio_context, QIOChannel *ioc,
     }

     switch (info->mode) {
+    case NBD_MODE_EXTENDED:
     case NBD_MODE_STRUCTURED:
         if (base_allocation) {
             result = nbd_negotiate_simple_meta_context(ioc, info, errp);
@@ -1150,7 +1159,7 @@ int nbd_receive_export_list(QIOChannel *ioc, QCryptoTLSCreds *tlscreds,

     *info = NULL;
     result = nbd_start_negotiate(NULL, ioc, tlscreds, hostname, &sioc,
-                                 NBD_MODE_STRUCTURED, NULL, errp);
+                                 NBD_MODE_EXTENDED, NULL, errp);
     if (tlscreds && sioc) {
         ioc = sioc;
     }
@@ -1161,6 +1170,7 @@ int nbd_receive_export_list(QIOChannel *ioc, QCryptoTLSCreds *tlscreds,
     switch ((NBDMode)result) {
     case NBD_MODE_SIMPLE:
     case NBD_MODE_STRUCTURED:
+    case NBD_MODE_EXTENDED:
         /* newstyle - use NBD_OPT_LIST to populate array, then try
          * NBD_OPT_INFO on each array member. If structured replies
          * are enabled, also try NBD_OPT_LIST_META_CONTEXT. */
@@ -1197,7 +1207,7 @@ int nbd_receive_export_list(QIOChannel *ioc, QCryptoTLSCreds *tlscreds,
                 break;
             }

-            if (result == NBD_MODE_STRUCTURED &&
+            if (result >= NBD_MODE_STRUCTURED &&
                 nbd_list_meta_contexts(ioc, &array[i], errp) < 0) {
                 goto out;
             }
diff --git a/qemu-nbd.c b/qemu-nbd.c
index 32c5a349e06..ca846f7d96d 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -237,6 +237,9 @@ static int qemu_nbd_client_list(SocketAddress *saddr, QCryptoTLSCreds *tls,
             printf("  opt block: %u\n", list[i].opt_block);
             printf("  max block: %u\n", list[i].max_block);
         }
+        printf("  transaction size: %s\n",
+               list[i].mode >= NBD_MODE_EXTENDED ?
+               "64-bit" : "32-bit");
         if (list[i].n_contexts) {
             printf("  available meta contexts: %d\n", list[i].n_contexts);
             for (j = 0; j < list[i].n_contexts; j++) {
diff --git a/tests/qemu-iotests/223.out b/tests/qemu-iotests/223.out
index 26fb347c5da..b98582c38ea 100644
--- a/tests/qemu-iotests/223.out
+++ b/tests/qemu-iotests/223.out
@@ -87,6 +87,7 @@ exports available: 3
   min block: 1
   opt block: 4096
   max block: 33554432
+  transaction size: 64-bit
   available meta contexts: 2
    base:allocation
    qemu:dirty-bitmap:b
@@ -97,6 +98,7 @@ exports available: 3
   min block: 1
   opt block: 4096
   max block: 33554432
+  transaction size: 64-bit
   available meta contexts: 2
    base:allocation
    qemu:dirty-bitmap:b2
@@ -106,6 +108,7 @@ exports available: 3
   min block: 1
   opt block: 4096
   max block: 33554432
+  transaction size: 64-bit
   available meta contexts: 2
    base:allocation
    qemu:dirty-bitmap:b3
@@ -206,6 +209,7 @@ exports available: 3
   min block: 1
   opt block: 4096
   max block: 33554432
+  transaction size: 64-bit
   available meta contexts: 2
    base:allocation
    qemu:dirty-bitmap:b
@@ -216,6 +220,7 @@ exports available: 3
   min block: 1
   opt block: 4096
   max block: 33554432
+  transaction size: 64-bit
   available meta contexts: 2
    base:allocation
    qemu:dirty-bitmap:b2
@@ -225,6 +230,7 @@ exports available: 3
   min block: 1
   opt block: 4096
   max block: 33554432
+  transaction size: 64-bit
   available meta contexts: 2
    base:allocation
    qemu:dirty-bitmap:b3
diff --git a/tests/qemu-iotests/233.out b/tests/qemu-iotests/233.out
index 237c82767ea..1910f7df20f 100644
--- a/tests/qemu-iotests/233.out
+++ b/tests/qemu-iotests/233.out
@@ -39,6 +39,7 @@ exports available: 1
  export: ''
   size:  67108864
   min block: 1
+  transaction size: 64-bit

 == check TLS fail over TCP with mismatched hostname ==
 qemu-img: Could not open 'driver=nbd,host=localhost,port=PORT,tls-creds=tls0': Certificate does not match the hostname localhost
@@ -53,6 +54,7 @@ exports available: 1
  export: ''
   size:  67108864
   min block: 1
+  transaction size: 64-bit

 == check TLS with different CA fails ==
 qemu-img: Could not open 'driver=nbd,host=127.0.0.1,port=PORT,tls-creds=tls0': The certificate hasn't got a known issuer
@@ -83,6 +85,7 @@ exports available: 1
  export: ''
   size:  67108864
   min block: 1
+  transaction size: 64-bit

 == check TLS works over UNIX with PSK ==
 image: nbd+unix://?socket=SOCK_DIR/qemu-nbd.sock
@@ -93,6 +96,7 @@ exports available: 1
  export: ''
   size:  67108864
   min block: 1
+  transaction size: 64-bit

 == check TLS fails over UNIX with mismatch PSK ==
 qemu-img: Could not open 'driver=nbd,path=SOCK_DIR/qemu-nbd.sock,tls-creds=tls0': TLS handshake failed: The TLS connection was non-properly terminated.
diff --git a/tests/qemu-iotests/241.out b/tests/qemu-iotests/241.out
index 88e8cfcd7e2..a9efb876521 100644
--- a/tests/qemu-iotests/241.out
+++ b/tests/qemu-iotests/241.out
@@ -6,6 +6,7 @@ exports available: 1
  export: ''
   size:  1024
   min block: 1
+  transaction size: 64-bit
 [{ "start": 0, "length": 1000, "depth": 0, "present": true, "zero": false, "data": true, "offset": OFFSET},
 { "start": 1000, "length": 24, "depth": 0, "present": true, "zero": true, "data": false, "offset": OFFSET}]
 1 KiB (0x400) bytes     allocated at offset 0 bytes (0x0)
@@ -16,6 +17,7 @@ exports available: 1
  export: ''
   size:  1024
   min block: 512
+  transaction size: 64-bit
 [{ "start": 0, "length": 1024, "depth": 0, "present": true, "zero": false, "data": true, "offset": OFFSET}]
 1 KiB (0x400) bytes     allocated at offset 0 bytes (0x0)
 WARNING: Image format was not specified for 'TEST_DIR/t.raw' and probing guessed raw.
@@ -28,6 +30,7 @@ exports available: 1
  export: ''
   size:  1024
   min block: 1
+  transaction size: 64-bit
 [{ "start": 0, "length": 1000, "depth": 0, "present": true, "zero": false, "data": true, "offset": OFFSET},
 { "start": 1000, "length": 24, "depth": 0, "present": true, "zero": true, "data": false, "offset": OFFSET}]
 1 KiB (0x400) bytes     allocated at offset 0 bytes (0x0)
diff --git a/tests/qemu-iotests/307.out b/tests/qemu-iotests/307.out
index 390f05d1b78..2b9a6a67a1a 100644
--- a/tests/qemu-iotests/307.out
+++ b/tests/qemu-iotests/307.out
@@ -19,6 +19,7 @@ exports available: 1
   min block: XXX
   opt block: XXX
   max block: XXX
+  transaction size: 64-bit
   available meta contexts: 1
    base:allocation

@@ -47,6 +48,7 @@ exports available: 1
   min block: XXX
   opt block: XXX
   max block: XXX
+  transaction size: 64-bit
   available meta contexts: 1
    base:allocation

@@ -78,6 +80,7 @@ exports available: 2
   min block: XXX
   opt block: XXX
   max block: XXX
+  transaction size: 64-bit
   available meta contexts: 1
    base:allocation
  export: 'export1'
@@ -87,6 +90,7 @@ exports available: 2
   min block: XXX
   opt block: XXX
   max block: XXX
+  transaction size: 64-bit
   available meta contexts: 1
    base:allocation

@@ -113,6 +117,7 @@ exports available: 1
   min block: XXX
   opt block: XXX
   max block: XXX
+  transaction size: 64-bit
   available meta contexts: 1
    base:allocation

diff --git a/tests/qemu-iotests/tests/nbd-qemu-allocation.out b/tests/qemu-iotests/tests/nbd-qemu-allocation.out
index 9d938db24e6..659276032b0 100644
--- a/tests/qemu-iotests/tests/nbd-qemu-allocation.out
+++ b/tests/qemu-iotests/tests/nbd-qemu-allocation.out
@@ -21,6 +21,7 @@ exports available: 1
   min block: 1
   opt block: 4096
   max block: 33554432
+  transaction size: 64-bit
   available meta contexts: 2
    base:allocation
    qemu:allocation-depth
-- 
2.41.0



^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v6 15/17] nbd/server: Refactor list of negotiated meta contexts
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (13 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 14/17] nbd/client: Request extended headers during negotiation Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-08-29 17:58 ` [PATCH v6 16/17] nbd/server: Prepare for per-request filtering of BLOCK_STATUS Eric Blake
  2023-08-29 17:58 ` [PATCH v6 17/17] nbd/server: Add FLAG_PAYLOAD support to CMD_BLOCK_STATUS Eric Blake
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov, Kevin Wolf, Hanna Reitz

Perform several minor refactorings of how the list of negotiated meta
contexts is managed, to make upcoming patches easier: Promote the
internal type NBDExportMetaContexts to the public opaque type
NBDMetaContexts, and mark exp const.  Use a shorter member name in
NBDClient.  Hoist calls to nbd_check_meta_export() earlier in their
callers, as the number of negotiated contexts may impact the flags
exposed for an export, which in turn requires a new
parameter.  Drop a redundant parameter to nbd_negotiate_meta_queries.
No semantic change intended on the success path; on the failure path,
dropping context in nbd_check_meta_export even when reporting an error
is safer.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---

v5: rebase to master, tweak commit message [Vladimir], R-b added

v4: new patch split out from v3 13/14, with smaller impact (quit
trying to separate exp outside of NBDMetaContexts)
---
 include/block/nbd.h |  1 +
 nbd/server.c        | 55 ++++++++++++++++++++++++---------------------
 2 files changed, 31 insertions(+), 25 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 4e9ce679e37..7643c321f36 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -29,6 +29,7 @@
 typedef struct NBDExport NBDExport;
 typedef struct NBDClient NBDClient;
 typedef struct NBDClientConnection NBDClientConnection;
+typedef struct NBDMetaContexts NBDMetaContexts;

 extern const BlockExportDriver blk_exp_nbd;

diff --git a/nbd/server.c b/nbd/server.c
index c0a5b00b0f0..72db982c9ca 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -105,11 +105,13 @@ struct NBDExport {

 static QTAILQ_HEAD(, NBDExport) exports = QTAILQ_HEAD_INITIALIZER(exports);

-/* NBDExportMetaContexts represents a list of contexts to be exported,
+/*
+ * NBDMetaContexts represents a list of meta contexts in use,
  * as selected by NBD_OPT_SET_META_CONTEXT. Also used for
- * NBD_OPT_LIST_META_CONTEXT. */
-typedef struct NBDExportMetaContexts {
-    NBDExport *exp;
+ * NBD_OPT_LIST_META_CONTEXT.
+ */
+struct NBDMetaContexts {
+    const NBDExport *exp; /* associated export */
     size_t count; /* number of negotiated contexts */
     bool base_allocation; /* export base:allocation context (block status) */
     bool allocation_depth; /* export qemu:allocation-depth */
@@ -117,7 +119,7 @@ typedef struct NBDExportMetaContexts {
                     * export qemu:dirty-bitmap:<export bitmap name>,
                     * sized by exp->nr_export_bitmaps
                     */
-} NBDExportMetaContexts;
+};

 struct NBDClient {
     int refcount;
@@ -144,7 +146,7 @@ struct NBDClient {
     uint32_t check_align; /* If non-zero, check for aligned client requests */

     NBDMode mode;
-    NBDExportMetaContexts export_meta;
+    NBDMetaContexts contexts; /* Negotiated meta contexts */

     uint32_t opt; /* Current option being negotiated */
     uint32_t optlen; /* remaining length of data in ioc for the option being
@@ -455,10 +457,10 @@ static int nbd_negotiate_handle_list(NBDClient *client, Error **errp)
     return nbd_negotiate_send_rep(client, NBD_REP_ACK, errp);
 }

-static void nbd_check_meta_export(NBDClient *client)
+static void nbd_check_meta_export(NBDClient *client, NBDExport *exp)
 {
-    if (client->exp != client->export_meta.exp) {
-        client->export_meta.count = 0;
+    if (exp != client->contexts.exp) {
+        client->contexts.count = 0;
     }
 }

@@ -504,6 +506,7 @@ static int nbd_negotiate_handle_export_name(NBDClient *client, bool no_zeroes,
         error_setg(errp, "export not found");
         return -EINVAL;
     }
+    nbd_check_meta_export(client, client->exp);

     myflags = client->exp->nbdflags;
     if (client->mode >= NBD_MODE_STRUCTURED) {
@@ -521,7 +524,6 @@ static int nbd_negotiate_handle_export_name(NBDClient *client, bool no_zeroes,

     QTAILQ_INSERT_TAIL(&client->exp->clients, client, next);
     blk_exp_ref(&client->exp->common);
-    nbd_check_meta_export(client);

     return 0;
 }
@@ -641,6 +643,9 @@ static int nbd_negotiate_handle_info(NBDClient *client, Error **errp)
                                           errp, "export '%s' not present",
                                           sane_name);
     }
+    if (client->opt == NBD_OPT_GO) {
+        nbd_check_meta_export(client, exp);
+    }

     /* Don't bother sending NBD_INFO_NAME unless client requested it */
     if (sendname) {
@@ -729,7 +734,6 @@ static int nbd_negotiate_handle_info(NBDClient *client, Error **errp)
         client->check_align = check_align;
         QTAILQ_INSERT_TAIL(&client->exp->clients, client, next);
         blk_exp_ref(&client->exp->common);
-        nbd_check_meta_export(client);
         rc = 1;
     }
     return rc;
@@ -852,7 +856,7 @@ static bool nbd_strshift(const char **str, const char *prefix)
  * Handle queries to 'base' namespace. For now, only the base:allocation
  * context is available.  Return true if @query has been handled.
  */
-static bool nbd_meta_base_query(NBDClient *client, NBDExportMetaContexts *meta,
+static bool nbd_meta_base_query(NBDClient *client, NBDMetaContexts *meta,
                                 const char *query)
 {
     if (!nbd_strshift(&query, "base:")) {
@@ -872,7 +876,7 @@ static bool nbd_meta_base_query(NBDClient *client, NBDExportMetaContexts *meta,
  * and qemu:allocation-depth contexts are available.  Return true if @query
  * has been handled.
  */
-static bool nbd_meta_qemu_query(NBDClient *client, NBDExportMetaContexts *meta,
+static bool nbd_meta_qemu_query(NBDClient *client, NBDMetaContexts *meta,
                                 const char *query)
 {
     size_t i;
@@ -938,7 +942,7 @@ static bool nbd_meta_qemu_query(NBDClient *client, NBDExportMetaContexts *meta,
  * Return -errno on I/O error, 0 if option was completely handled by
  * sending a reply about inconsistent lengths, or 1 on success. */
 static int nbd_negotiate_meta_query(NBDClient *client,
-                                    NBDExportMetaContexts *meta, Error **errp)
+                                    NBDMetaContexts *meta, Error **errp)
 {
     int ret;
     g_autofree char *query = NULL;
@@ -977,14 +981,14 @@ static int nbd_negotiate_meta_query(NBDClient *client,
  * Handle NBD_OPT_LIST_META_CONTEXT and NBD_OPT_SET_META_CONTEXT
  *
  * Return -errno on I/O error, or 0 if option was completely handled. */
-static int nbd_negotiate_meta_queries(NBDClient *client,
-                                      NBDExportMetaContexts *meta, Error **errp)
+static int nbd_negotiate_meta_queries(NBDClient *client, Error **errp)
 {
     int ret;
     g_autofree char *export_name = NULL;
     /* Mark unused to work around https://bugs.llvm.org/show_bug.cgi?id=3888 */
     g_autofree G_GNUC_UNUSED bool *bitmaps = NULL;
-    NBDExportMetaContexts local_meta = {0};
+    NBDMetaContexts local_meta = {0};
+    NBDMetaContexts *meta;
     uint32_t nb_queries;
     size_t i;
     size_t count = 0;
@@ -1000,6 +1004,8 @@ static int nbd_negotiate_meta_queries(NBDClient *client,
     if (client->opt == NBD_OPT_LIST_META_CONTEXT) {
         /* Only change the caller's meta on SET. */
         meta = &local_meta;
+    } else {
+        meta = &client->contexts;
     }

     g_free(meta->bitmaps);
@@ -1284,8 +1290,7 @@ static int nbd_negotiate_options(NBDClient *client, Error **errp)

             case NBD_OPT_LIST_META_CONTEXT:
             case NBD_OPT_SET_META_CONTEXT:
-                ret = nbd_negotiate_meta_queries(client, &client->export_meta,
-                                                 errp);
+                ret = nbd_negotiate_meta_queries(client, errp);
                 break;

             case NBD_OPT_EXTENDED_HEADERS:
@@ -1517,7 +1522,7 @@ void nbd_client_put(NBDClient *client)
             QTAILQ_REMOVE(&client->exp->clients, client, next);
             blk_exp_unref(&client->exp->common);
         }
-        g_free(client->export_meta.bitmaps);
+        g_free(client->contexts.bitmaps);
         g_free(client);
     }
 }
@@ -2752,11 +2757,11 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
         }
         assert(client->mode >= NBD_MODE_EXTENDED ||
                request->len <= UINT32_MAX);
-        if (client->export_meta.count) {
+        if (client->contexts.count) {
             bool dont_fragment = request->flags & NBD_CMD_FLAG_REQ_ONE;
-            int contexts_remaining = client->export_meta.count;
+            int contexts_remaining = client->contexts.count;

-            if (client->export_meta.base_allocation) {
+            if (client->contexts.base_allocation) {
                 ret = nbd_co_send_block_status(client, request,
                                                exp->common.blk,
                                                request->from,
@@ -2769,7 +2774,7 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
                 }
             }

-            if (client->export_meta.allocation_depth) {
+            if (client->contexts.allocation_depth) {
                 ret = nbd_co_send_block_status(client, request,
                                                exp->common.blk,
                                                request->from, request->len,
@@ -2783,7 +2788,7 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
             }

             for (i = 0; i < client->exp->nr_export_bitmaps; i++) {
-                if (!client->export_meta.bitmaps[i]) {
+                if (!client->contexts.bitmaps[i]) {
                     continue;
                 }
                 ret = nbd_co_send_bitmap(client, request,
-- 
2.41.0



^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v6 16/17] nbd/server: Prepare for per-request filtering of BLOCK_STATUS
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (14 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 15/17] nbd/server: Refactor list of negotiated meta contexts Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  2023-08-29 17:58 ` [PATCH v6 17/17] nbd/server: Add FLAG_PAYLOAD support to CMD_BLOCK_STATUS Eric Blake
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov, Kevin Wolf, Hanna Reitz

The next commit will add support for the optional extension
NBD_CMD_FLAG_PAYLOAD during NBD_CMD_BLOCK_STATUS, where the client can
request that the server only return a subset of negotiated contexts,
rather than all contexts.  To make that task easier, this patch
populates the list of contexts to return on a per-command basis (for
now, identical to the full set of negotiated contexts).
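
A hedged sketch of the resulting shape, for orientation only (the diff
below is authoritative):

    /* The negotiated set stays on the client... */
    NBDMetaContexts *negotiated = &client->contexts;

    /* ...while each NBD_CMD_BLOCK_STATUS request now records which
     * contexts to report; for now it simply aliases the full set. */
    request->contexts = negotiated;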

Signed-off-by: Eric Blake <eblake@redhat.com>
---

v5: fix null dereference on early error [Vladimir], hoist in assertion
from v4 24/24

v4: split out NBDMetaContexts refactoring to its own patch, track
NBDRequests.contexts as a pointer rather than inline
---
 include/block/nbd.h |  1 +
 nbd/server.c        | 22 +++++++++++++++++-----
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 7643c321f36..9285aa85826 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -77,6 +77,7 @@ typedef struct NBDRequest {
     uint16_t flags; /* NBD_CMD_FLAG_* */
     uint16_t type;  /* NBD_CMD_* */
     NBDMode mode;   /* Determines which network representation to use */
+    NBDMetaContexts *contexts; /* Used by NBD_CMD_BLOCK_STATUS */
 } NBDRequest;

 typedef struct NBDSimpleReply {
diff --git a/nbd/server.c b/nbd/server.c
index 72db982c9ca..f1805b39318 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -2511,6 +2511,7 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
         break;

     case NBD_CMD_BLOCK_STATUS:
+        request->contexts = &client->contexts;
         valid_flags |= NBD_CMD_FLAG_REQ_ONE;
         break;

@@ -2751,17 +2752,18 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
                                       "discard failed", errp);

     case NBD_CMD_BLOCK_STATUS:
+        assert(request->contexts);
         if (!request->len) {
             return nbd_send_generic_reply(client, request, -EINVAL,
                                           "need non-zero length", errp);
         }
         assert(client->mode >= NBD_MODE_EXTENDED ||
                request->len <= UINT32_MAX);
-        if (client->contexts.count) {
+        if (request->contexts->count) {
             bool dont_fragment = request->flags & NBD_CMD_FLAG_REQ_ONE;
-            int contexts_remaining = client->contexts.count;
+            int contexts_remaining = request->contexts->count;

-            if (client->contexts.base_allocation) {
+            if (request->contexts->base_allocation) {
                 ret = nbd_co_send_block_status(client, request,
                                                exp->common.blk,
                                                request->from,
@@ -2774,7 +2776,7 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
                 }
             }

-            if (client->contexts.allocation_depth) {
+            if (request->contexts->allocation_depth) {
                 ret = nbd_co_send_block_status(client, request,
                                                exp->common.blk,
                                                request->from, request->len,
@@ -2787,8 +2789,9 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
                 }
             }

+            assert(request->contexts->exp == client->exp);
             for (i = 0; i < client->exp->nr_export_bitmaps; i++) {
-                if (!client->contexts.bitmaps[i]) {
+                if (!request->contexts->bitmaps[i]) {
                     continue;
                 }
                 ret = nbd_co_send_bitmap(client, request,
@@ -2804,6 +2807,10 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
             assert(!contexts_remaining);

             return 0;
+        } else if (client->contexts.count) {
+            return nbd_send_generic_reply(client, request, -EINVAL,
+                                          "CMD_BLOCK_STATUS payload not valid",
+                                          errp);
         } else {
             return nbd_send_generic_reply(client, request, -EINVAL,
                                           "CMD_BLOCK_STATUS not negotiated",
@@ -2882,6 +2889,11 @@ static coroutine_fn void nbd_trip(void *opaque)
     } else {
         ret = nbd_handle_request(client, &request, req->data, &local_err);
     }
+    if (request.contexts && request.contexts != &client->contexts) {
+        assert(request.type == NBD_CMD_BLOCK_STATUS);
+        g_free(request.contexts->bitmaps);
+        g_free(request.contexts);
+    }
     if (ret < 0) {
         error_prepend(&local_err, "Failed to send reply: ");
         goto disconnect;
-- 
2.41.0



^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v6 17/17] nbd/server: Add FLAG_PAYLOAD support to CMD_BLOCK_STATUS
  2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
                   ` (15 preceding siblings ...)
  2023-08-29 17:58 ` [PATCH v6 16/17] nbd/server: Prepare for per-request filtering of BLOCK_STATUS Eric Blake
@ 2023-08-29 17:58 ` Eric Blake
  16 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-08-29 17:58 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, vsementsov, Kevin Wolf, Hanna Reitz

Allow a client to request a subset of negotiated meta contexts.  For
example, a client may ask to use a single connection to learn about
both block status and dirty bitmaps, but where the dirty bitmap
queries only need to be performed on a subset of the disk; forcing the
server to compute that information on block status queries in the rest
of the disk is wasted effort (both at the server, and on the amount of
traffic sent over the wire to be parsed and ignored by the client).
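
For illustration, a hedged sketch of what such a request payload looks
like on the wire, assuming the NBDBlockStatusPayload layout used by this
series (an 8-byte big-endian effect length followed by 4-byte big-endian
context IDs); the variable names below are hypothetical:

    /* Hypothetical client-side construction; effect_length, id_alloc and
     * id_bitmap stand in for values obtained during negotiation. */
    uint8_t payload[8 + 2 * sizeof(uint32_t)];

    stq_be_p(payload, effect_length);   /* 64-bit effect length of the query */
    stl_be_p(payload + 8, id_alloc);    /* one negotiated meta context id */
    stl_be_p(payload + 12, id_bitmap);  /* another negotiated meta context id */
    /* sent as NBD_CMD_BLOCK_STATUS with NBD_CMD_FLAG_PAYLOAD_LEN set and
     * the header's length field equal to sizeof(payload) */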

Qemu as an NBD client never requests to use more than one meta
context, so it has no need to use block status payloads.  Testing this
instead requires support from libnbd, which CAN access multiple meta
contexts in parallel from a single NBD connection; an interop test
submitted to the libnbd project at the same time as this patch
demonstrates the feature working, as well as testing some corner cases
(for example, when the payload length is longer than the export
length), although other corner cases (like passing the same id
duplicated) require a protocol fuzzer because libnbd is not wired up
to break the protocol that badly.

This also includes tweaks to 'qemu-nbd --list' to show when a server
is advertising the capability, and to the testsuite to reflect the
addition to that output.

Of note: qemu will always advertise the new feature bit during
NBD_OPT_INFO if extended headers have already been negotiated
(regardless of whether any NBD_OPT_SET_META_CONTEXT negotiation has
occurred); but for NBD_OPT_GO, qemu only advertises the feature if
block status is also enabled (that is, if the client does not
negotiate any contexts, then NBD_CMD_BLOCK_STATUS cannot be used, so
the feature is not advertised).

Signed-off-by: Eric Blake <eblake@redhat.com>
---

v5: factor out 'id - NBD_MTA_ID_DIRTY_BITMAP' [Vladimir], rework logic
on zero-length requests to be clearer [Vladimir], rebase to earlier
changes
---
 docs/interop/nbd.txt                          |   2 +-
 nbd/server.c                                  | 114 ++++++++++++++++--
 qemu-nbd.c                                    |   1 +
 nbd/trace-events                              |   1 +
 tests/qemu-iotests/223.out                    |  12 +-
 tests/qemu-iotests/307.out                    |  10 +-
 .../tests/nbd-qemu-allocation.out             |   2 +-
 7 files changed, 122 insertions(+), 20 deletions(-)

diff --git a/docs/interop/nbd.txt b/docs/interop/nbd.txt
index 9aae5e1f294..18efb251de9 100644
--- a/docs/interop/nbd.txt
+++ b/docs/interop/nbd.txt
@@ -69,4 +69,4 @@ NBD_CMD_BLOCK_STATUS for "qemu:dirty-bitmap:", NBD_CMD_CACHE
 NBD_CMD_FLAG_FAST_ZERO
 * 5.2: NBD_CMD_BLOCK_STATUS for "qemu:allocation-depth"
 * 7.1: NBD_FLAG_CAN_MULTI_CONN for shareable writable exports
-* 8.2: NBD_OPT_EXTENDED_HEADERS
+* 8.2: NBD_OPT_EXTENDED_HEADERS, NBD_FLAG_BLOCK_STATUS_PAYLOAD
diff --git a/nbd/server.c b/nbd/server.c
index f1805b39318..3030ecfbc59 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -512,6 +512,9 @@ static int nbd_negotiate_handle_export_name(NBDClient *client, bool no_zeroes,
     if (client->mode >= NBD_MODE_STRUCTURED) {
         myflags |= NBD_FLAG_SEND_DF;
     }
+    if (client->mode >= NBD_MODE_EXTENDED && client->contexts.count) {
+        myflags |= NBD_FLAG_BLOCK_STAT_PAYLOAD;
+    }
     trace_nbd_negotiate_new_style_size_flags(client->exp->size, myflags);
     stq_be_p(buf, client->exp->size);
     stw_be_p(buf + 8, myflags);
@@ -699,6 +702,10 @@ static int nbd_negotiate_handle_info(NBDClient *client, Error **errp)
     if (client->mode >= NBD_MODE_STRUCTURED) {
         myflags |= NBD_FLAG_SEND_DF;
     }
+    if (client->mode >= NBD_MODE_EXTENDED &&
+        (client->contexts.count || client->opt == NBD_OPT_INFO)) {
+        myflags |= NBD_FLAG_BLOCK_STAT_PAYLOAD;
+    }
     trace_nbd_negotiate_new_style_size_flags(exp->size, myflags);
     stq_be_p(buf, exp->size);
     stw_be_p(buf + 8, myflags);
@@ -2432,6 +2439,87 @@ static int coroutine_fn nbd_co_send_bitmap(NBDClient *client,
     return nbd_co_send_extents(client, request, ea, last, context_id, errp);
 }

+/*
+ * nbd_co_block_status_payload_read
+ * Called when a client wants a subset of negotiated contexts via a
+ * BLOCK_STATUS payload.  Check the payload for valid length and
+ * contents.  On success, return 0 with request updated to effective
+ * length.  If request was invalid but all payload consumed, return 0
+ * with request->len and request->contexts->count set to 0 (which will
+ * trigger an appropriate NBD_EINVAL response later on).  Return
+ * negative errno if the payload was not fully consumed.
+ */
+static int
+nbd_co_block_status_payload_read(NBDClient *client, NBDRequest *request,
+                                 Error **errp)
+{
+    int payload_len = request->len;
+    g_autofree char *buf = NULL;
+    size_t count, i, nr_bitmaps;
+    uint32_t id;
+
+    if (payload_len > NBD_MAX_BUFFER_SIZE) {
+        error_setg(errp, "len (%" PRIu64 ") is larger than max len (%u)",
+                   request->len, NBD_MAX_BUFFER_SIZE);
+        return -EINVAL;
+    }
+
+    assert(client->contexts.exp == client->exp);
+    nr_bitmaps = client->exp->nr_export_bitmaps;
+    request->contexts = g_new0(NBDMetaContexts, 1);
+    request->contexts->exp = client->exp;
+
+    if (payload_len % sizeof(uint32_t) ||
+        payload_len < sizeof(NBDBlockStatusPayload) ||
+        payload_len > (sizeof(NBDBlockStatusPayload) +
+                       sizeof(id) * client->contexts.count)) {
+        goto skip;
+    }
+
+    buf = g_malloc(payload_len);
+    if (nbd_read(client->ioc, buf, payload_len,
+                 "CMD_BLOCK_STATUS data", errp) < 0) {
+        return -EIO;
+    }
+    trace_nbd_co_receive_request_payload_received(request->cookie,
+                                                  payload_len);
+    request->contexts->bitmaps = g_new0(bool, nr_bitmaps);
+    count = (payload_len - sizeof(NBDBlockStatusPayload)) / sizeof(id);
+    payload_len = 0;
+
+    for (i = 0; i < count; i++) {
+        id = ldl_be_p(buf + sizeof(NBDBlockStatusPayload) + sizeof(id) * i);
+        if (id == NBD_META_ID_BASE_ALLOCATION) {
+            if (request->contexts->base_allocation) {
+                goto skip;
+            }
+            request->contexts->base_allocation = true;
+        } else if (id == NBD_META_ID_ALLOCATION_DEPTH) {
+            if (request->contexts->allocation_depth) {
+                goto skip;
+            }
+            request->contexts->allocation_depth = true;
+        } else {
+            int idx = id - NBD_META_ID_DIRTY_BITMAP;
+
+            if (idx > nr_bitmaps || request->contexts->bitmaps[idx]) {
+                goto skip;
+            }
+            request->contexts->bitmaps[idx] = true;
+        }
+    }
+
+    request->len = ldq_be_p(buf);
+    request->contexts->count = count;
+    return 0;
+
+ skip:
+    trace_nbd_co_receive_block_status_payload_compliance(request->from,
+                                                         request->len);
+    request->len = request->contexts->count = 0;
+    return nbd_drop(client->ioc, payload_len, errp);
+}
+
 /* nbd_co_receive_request
  * Collect a client request. Return 0 if request looks valid, -EIO to drop
  * connection right away, -EAGAIN to indicate we were interrupted and the
@@ -2511,7 +2599,18 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
         break;

     case NBD_CMD_BLOCK_STATUS:
-        request->contexts = &client->contexts;
+        if (extended_with_payload) {
+            ret = nbd_co_block_status_payload_read(client, request, errp);
+            if (ret < 0) {
+                return ret;
+            }
+            /* payload now consumed */
+            check_length = extended_with_payload = false;
+            payload_len = 0;
+            valid_flags |= NBD_CMD_FLAG_PAYLOAD_LEN;
+        } else {
+            request->contexts = &client->contexts;
+        }
         valid_flags |= NBD_CMD_FLAG_REQ_ONE;
         break;

@@ -2753,16 +2852,16 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,

     case NBD_CMD_BLOCK_STATUS:
         assert(request->contexts);
-        if (!request->len) {
-            return nbd_send_generic_reply(client, request, -EINVAL,
-                                          "need non-zero length", errp);
-        }
         assert(client->mode >= NBD_MODE_EXTENDED ||
                request->len <= UINT32_MAX);
         if (request->contexts->count) {
             bool dont_fragment = request->flags & NBD_CMD_FLAG_REQ_ONE;
             int contexts_remaining = request->contexts->count;

+            if (!request->len) {
+                return nbd_send_generic_reply(client, request, -EINVAL,
+                                              "need non-zero length", errp);
+            }
             if (request->contexts->base_allocation) {
                 ret = nbd_co_send_block_status(client, request,
                                                exp->common.blk,
@@ -2899,8 +2998,9 @@ static coroutine_fn void nbd_trip(void *opaque)
         goto disconnect;
     }

-    /* We must disconnect after NBD_CMD_WRITE if we did not
-     * read the payload.
+    /*
+     * We must disconnect after NBD_CMD_WRITE or BLOCK_STATUS with
+     * payload if we did not read the payload.
      */
     if (!req->complete) {
         error_setg(&local_err, "Request handling failed in intermediate state");
diff --git a/qemu-nbd.c b/qemu-nbd.c
index ca846f7d96d..9c6dfe04535 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -221,6 +221,7 @@ static int qemu_nbd_client_list(SocketAddress *saddr, QCryptoTLSCreds *tls,
                 [NBD_FLAG_SEND_RESIZE_BIT]          = "resize",
                 [NBD_FLAG_SEND_CACHE_BIT]           = "cache",
                 [NBD_FLAG_SEND_FAST_ZERO_BIT]       = "fast-zero",
+                [NBD_FLAG_BLOCK_STAT_PAYLOAD_BIT]   = "block-status-payload",
             };

             printf("  size:  %" PRIu64 "\n", list[i].size);
diff --git a/nbd/trace-events b/nbd/trace-events
index 8f4e20ee9f2..ac186c19ec0 100644
--- a/nbd/trace-events
+++ b/nbd/trace-events
@@ -70,6 +70,7 @@ nbd_co_send_chunk_read(uint64_t cookie, uint64_t offset, void *data, uint64_t si
 nbd_co_send_chunk_read_hole(uint64_t cookie, uint64_t offset, uint64_t size) "Send structured read hole reply: cookie = %" PRIu64 ", offset = %" PRIu64 ", len = %" PRIu64
 nbd_co_send_extents(uint64_t cookie, unsigned int extents, uint32_t id, uint64_t length, int last) "Send block status reply: cookie = %" PRIu64 ", extents = %u, context = %d (extents cover %" PRIu64 " bytes, last chunk = %d)"
 nbd_co_send_chunk_error(uint64_t cookie, int err, const char *errname, const char *msg) "Send structured error reply: cookie = %" PRIu64 ", error = %d (%s), msg = '%s'"
+nbd_co_receive_block_status_payload_compliance(uint64_t from, int len) "client sent unusable block status payload: from=0x%" PRIx64 ", len=0x%x"
 nbd_co_receive_request_decode_type(uint64_t cookie, uint16_t type, const char *name) "Decoding type: cookie = %" PRIu64 ", type = %" PRIu16 " (%s)"
 nbd_co_receive_request_payload_received(uint64_t cookie, uint64_t len) "Payload received: cookie = %" PRIu64 ", len = %" PRIu64
 nbd_co_receive_ext_payload_compliance(uint64_t from, uint64_t len) "client sent non-compliant write without payload flag: from=0x%" PRIx64 ", len=0x%" PRIx64
diff --git a/tests/qemu-iotests/223.out b/tests/qemu-iotests/223.out
index b98582c38ea..b38f0b7963b 100644
--- a/tests/qemu-iotests/223.out
+++ b/tests/qemu-iotests/223.out
@@ -83,7 +83,7 @@ exports available: 0
 exports available: 3
  export: 'n'
   size:  4194304
-  flags: 0x58f ( readonly flush fua df multi cache )
+  flags: 0x158f ( readonly flush fua df multi cache block-status-payload )
   min block: 1
   opt block: 4096
   max block: 33554432
@@ -94,7 +94,7 @@ exports available: 3
  export: 'n2'
   description: some text
   size:  4194304
-  flags: 0xded ( flush fua trim zeroes df multi cache fast-zero )
+  flags: 0x1ded ( flush fua trim zeroes df multi cache fast-zero block-status-payload )
   min block: 1
   opt block: 4096
   max block: 33554432
@@ -104,7 +104,7 @@ exports available: 3
    qemu:dirty-bitmap:b2
  export: 'n3'
   size:  4194304
-  flags: 0x58f ( readonly flush fua df multi cache )
+  flags: 0x158f ( readonly flush fua df multi cache block-status-payload )
   min block: 1
   opt block: 4096
   max block: 33554432
@@ -205,7 +205,7 @@ exports available: 0
 exports available: 3
  export: 'n'
   size:  4194304
-  flags: 0x58f ( readonly flush fua df multi cache )
+  flags: 0x158f ( readonly flush fua df multi cache block-status-payload )
   min block: 1
   opt block: 4096
   max block: 33554432
@@ -216,7 +216,7 @@ exports available: 3
  export: 'n2'
   description: some text
   size:  4194304
-  flags: 0xded ( flush fua trim zeroes df multi cache fast-zero )
+  flags: 0x1ded ( flush fua trim zeroes df multi cache fast-zero block-status-payload )
   min block: 1
   opt block: 4096
   max block: 33554432
@@ -226,7 +226,7 @@ exports available: 3
    qemu:dirty-bitmap:b2
  export: 'n3'
   size:  4194304
-  flags: 0x58f ( readonly flush fua df multi cache )
+  flags: 0x158f ( readonly flush fua df multi cache block-status-payload )
   min block: 1
   opt block: 4096
   max block: 33554432
diff --git a/tests/qemu-iotests/307.out b/tests/qemu-iotests/307.out
index 2b9a6a67a1a..f645f3315f8 100644
--- a/tests/qemu-iotests/307.out
+++ b/tests/qemu-iotests/307.out
@@ -15,7 +15,7 @@ wrote 4096/4096 bytes at offset 0
 exports available: 1
  export: 'fmt'
   size:  67108864
-  flags: 0x58f ( readonly flush fua df multi cache )
+  flags: 0x158f ( readonly flush fua df multi cache block-status-payload )
   min block: XXX
   opt block: XXX
   max block: XXX
@@ -44,7 +44,7 @@ exports available: 1
 exports available: 1
  export: 'fmt'
   size:  67108864
-  flags: 0x58f ( readonly flush fua df multi cache )
+  flags: 0x158f ( readonly flush fua df multi cache block-status-payload )
   min block: XXX
   opt block: XXX
   max block: XXX
@@ -76,7 +76,7 @@ exports available: 1
 exports available: 2
  export: 'fmt'
   size:  67108864
-  flags: 0x58f ( readonly flush fua df multi cache )
+  flags: 0x158f ( readonly flush fua df multi cache block-status-payload )
   min block: XXX
   opt block: XXX
   max block: XXX
@@ -86,7 +86,7 @@ exports available: 2
  export: 'export1'
   description: This is the writable second export
   size:  67108864
-  flags: 0xded ( flush fua trim zeroes df multi cache fast-zero )
+  flags: 0x1ded ( flush fua trim zeroes df multi cache fast-zero block-status-payload )
   min block: XXX
   opt block: XXX
   max block: XXX
@@ -113,7 +113,7 @@ exports available: 1
  export: 'export1'
   description: This is the writable second export
   size:  67108864
-  flags: 0xded ( flush fua trim zeroes df multi cache fast-zero )
+  flags: 0x1ded ( flush fua trim zeroes df multi cache fast-zero block-status-payload )
   min block: XXX
   opt block: XXX
   max block: XXX
diff --git a/tests/qemu-iotests/tests/nbd-qemu-allocation.out b/tests/qemu-iotests/tests/nbd-qemu-allocation.out
index 659276032b0..794d1bfce62 100644
--- a/tests/qemu-iotests/tests/nbd-qemu-allocation.out
+++ b/tests/qemu-iotests/tests/nbd-qemu-allocation.out
@@ -17,7 +17,7 @@ wrote 2097152/2097152 bytes at offset 1048576
 exports available: 1
  export: ''
   size:  4194304
-  flags: 0x48f ( readonly flush fua df cache )
+  flags: 0x148f ( readonly flush fua df cache block-status-payload )
   min block: 1
   opt block: 4096
   max block: 33554432
-- 
2.41.0



^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v6 01/17] nbd: Replace bool structured_reply with mode enum
  2023-08-29 17:58 ` [PATCH v6 01/17] nbd: Replace bool structured_reply with mode enum Eric Blake
@ 2023-09-04 15:23   ` Vladimir Sementsov-Ogievskiy
  0 siblings, 0 replies; 28+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2023-09-04 15:23 UTC (permalink / raw)
  To: Eric Blake, qemu-devel; +Cc: qemu-block, Kevin Wolf, Hanna Reitz

On 29.08.23 20:58, Eric Blake wrote:
> The upcoming patches for 64-bit extensions requires various points in
> the protocol to make decisions based on what was negotiated.  While we
> could easily add a 'bool extended_headers' alongside the existing
> 'bool structured_reply', this does not scale well if more modes are
> added in the future.  Better is to expose the mode enum added in the
> recent commit bfe04d0a7d out to a wider use in the code base.
> 
> Where the code previously checked for structured_reply being set or
> clear, it now prefers checking for an inequality; this works because
> the modes are in a continuum of increasing abilities, and allows us to
> touch fewer places if we ever insert other modes in the middle of the
> enum.  There should be no semantic change in this patch.
> 
> Signed-off-by: Eric Blake<eblake@redhat.com>

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

-- 
Best regards,
Vladimir



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v6 04/17] nbd: Prepare for 64-bit request effect lengths
  2023-08-29 17:58 ` [PATCH v6 04/17] nbd: Prepare for 64-bit request effect lengths Eric Blake
@ 2023-09-04 16:15   ` Vladimir Sementsov-Ogievskiy
  2023-09-05 14:24     ` Eric Blake
  0 siblings, 1 reply; 28+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2023-09-04 16:15 UTC (permalink / raw)
  To: Eric Blake, qemu-devel; +Cc: qemu-block, Kevin Wolf, Hanna Reitz

On 29.08.23 20:58, Eric Blake wrote:
> Widen the length field of NBDRequest to 64-bits, although we can
> assert that all current uses are still under 32 bits: either because
> of NBD_MAX_BUFFER_SIZE which is even smaller (and where size_t can
> still be appropriate, even on 32-bit platforms), or because nothing
> ever puts us into NBD_MODE_EXTENDED yet (and while future patches will
> allow larger transactions, the lengths in play here are still capped
> at 32-bit).  There are no semantic changes, other than a typo fix in a
> couple of error messages.
> 
> Signed-off-by: Eric Blake <eblake@redhat.com>
> ---
> 
> v6: fix sign extension bug
> 
> v5: tweak commit message, adjust a few more spots [Vladimir]
> 
> v4: split off enum changes to earlier patches [Vladimir]

[..]

> --- a/nbd/server.c
> +++ b/nbd/server.c
> @@ -1165,7 +1165,7 @@ static int nbd_negotiate_options(NBDClient *client, Error **errp)
>           client->optlen = length;
> 
>           if (length > NBD_MAX_BUFFER_SIZE) {
> -            error_setg(errp, "len (%" PRIu32" ) is larger than max len (%u)",
> +            error_setg(errp, "len (%" PRIu32 ") is larger than max len (%u)",
>                          length, NBD_MAX_BUFFER_SIZE);
>               return -EINVAL;
>           }
> @@ -1441,7 +1441,7 @@ static int coroutine_fn nbd_receive_request(NBDClient *client, NBDRequest *reque
>       request->type   = lduw_be_p(buf + 6);
>       request->cookie = ldq_be_p(buf + 8);
>       request->from   = ldq_be_p(buf + 16);
> -    request->len    = ldl_be_p(buf + 24);
> +    request->len    = (uint32_t)ldl_be_p(buf + 24); /* widen 32 to 64 bits */

should it be "(uint64_t)" ?

> 
>       trace_nbd_receive_request(magic, request->flags, request->type,
>                                 request->from, request->len);
> @@ -1899,7 +1899,7 @@ static int coroutine_fn nbd_co_send_simple_reply(NBDClient *client,
>                                                    NBDRequest *request,
>                                                    uint32_t error,
>                                                    void *data,
> -                                                 size_t len,
> +                                                 uint64_t len,
>                                                    Error **errp)
>   {
>       NBDSimpleReply reply;
> @@ -1910,6 +1910,7 @@ static int coroutine_fn nbd_co_send_simple_reply(NBDClient *client,
>       };
> 
>       assert(!len || !nbd_err);
> +    assert(len <= NBD_MAX_STRING_SIZE);

NBD_MAX_BUFFER_SIZE ?

with that fixed:
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>



-- 
Best regards,
Vladimir



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v6 05/17] nbd/server: Refactor handling of command sanity checks
  2023-08-29 17:58 ` [PATCH v6 05/17] nbd/server: Refactor handling of command sanity checks Eric Blake
@ 2023-09-04 16:53   ` Vladimir Sementsov-Ogievskiy
  2023-09-25 19:03     ` Eric Blake
  0 siblings, 1 reply; 28+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2023-09-04 16:53 UTC (permalink / raw)
  To: Eric Blake, qemu-devel; +Cc: qemu-block

On 29.08.23 20:58, Eric Blake wrote:
> Upcoming additions to support NBD 64-bit effect lengths will add a new
> command flag NBD_CMD_FLAG_PAYLOAD_LEN that needs to be considered in
> our sanity checks of the client's messages (that is, more than just
> CMD_WRITE have the potential to carry a client payload when extended
> headers are in effect).  But before we can start to support that, it
> is easier to first refactor the existing set of various if statements
> over open-coded combinations of request->type to instead be a single
> switch statement over all command types that sets witnesses, then
> straight-line processing based on the witnesses.  No semantic change
> is intended.
> 
> Signed-off-by: Eric Blake<eblake@redhat.com>

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>


-- 
Best regards,
Vladimir



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v6 04/17] nbd: Prepare for 64-bit request effect lengths
  2023-09-04 16:15   ` Vladimir Sementsov-Ogievskiy
@ 2023-09-05 14:24     ` Eric Blake
  2023-09-05 14:41       ` Vladimir Sementsov-Ogievskiy
  0 siblings, 1 reply; 28+ messages in thread
From: Eric Blake @ 2023-09-05 14:24 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-devel, qemu-block, Kevin Wolf, Hanna Reitz, peter.maydell

On Mon, Sep 04, 2023 at 07:15:04PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> On 29.08.23 20:58, Eric Blake wrote:
> > Widen the length field of NBDRequest to 64-bits, although we can
> > assert that all current uses are still under 32 bits: either because
> > of NBD_MAX_BUFFER_SIZE which is even smaller (and where size_t can
> > still be appropriate, even on 32-bit platforms), or because nothing
> > ever puts us into NBD_MODE_EXTENDED yet (and while future patches will
> > allow larger transactions, the lengths in play here are still capped
> > at 32-bit).  There are no semantic changes, other than a typo fix in a
> > couple of error messages.
> > 
> > Signed-off-by: Eric Blake <eblake@redhat.com>
> > ---
> > 
> > v6: fix sign extension bug
> > 
> > v5: tweak commit message, adjust a few more spots [Vladimir]
> > 
> > v4: split off enum changes to earlier patches [Vladimir]
> 
> [..]
> 
> > --- a/nbd/server.c
> > +++ b/nbd/server.c
> > @@ -1165,7 +1165,7 @@ static int nbd_negotiate_options(NBDClient *client, Error **errp)
> >           client->optlen = length;
> > 
> >           if (length > NBD_MAX_BUFFER_SIZE) {
> > -            error_setg(errp, "len (%" PRIu32" ) is larger than max len (%u)",
> > +            error_setg(errp, "len (%" PRIu32 ") is larger than max len (%u)",
> >                          length, NBD_MAX_BUFFER_SIZE);
> >               return -EINVAL;
> >           }
> > @@ -1441,7 +1441,7 @@ static int coroutine_fn nbd_receive_request(NBDClient *client, NBDRequest *reque
> >       request->type   = lduw_be_p(buf + 6);
> >       request->cookie = ldq_be_p(buf + 8);
> >       request->from   = ldq_be_p(buf + 16);
> > -    request->len    = ldl_be_p(buf + 24);
> > +    request->len    = (uint32_t)ldl_be_p(buf + 24); /* widen 32 to 64 bits */
> 
> should it be "(uint64_t)" ?

No. ldl_be_p() returns an int.  Widening int to 64 bits sign-extends;
we have to first make it unsigned (by casting to uint32_t) so that we
then have an unsigned value that widens by zero-extension.
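
As a minimal standalone illustration (not qemu code; the int32_t value
below merely stands in for the int that ldl_be_p() returns), the
difference between the two widenings looks like:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t loaded = (int32_t)0xfffffffeu;     /* pretend ldl_be_p() gave us this */
        uint64_t sign_extended = loaded;           /* becomes 0xfffffffffffffffe */
        uint64_t zero_extended = (uint32_t)loaded; /* becomes 0x00000000fffffffe */

        printf("sign-extended: 0x%016" PRIx64 "\n", sign_extended);
        printf("zero-extended: 0x%016" PRIx64 "\n", zero_extended);
        return 0;
    }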

Maybe we should fix ldl_be_p() and friends to favor unsigned values.
'git grep ldul_be' has zero hits, even though Peter just touched the
docs patch claiming it exists:

https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg00645.html


> 
> > 
> >       trace_nbd_receive_request(magic, request->flags, request->type,
> >                                 request->from, request->len);
> > @@ -1899,7 +1899,7 @@ static int coroutine_fn nbd_co_send_simple_reply(NBDClient *client,
> >                                                    NBDRequest *request,
> >                                                    uint32_t error,
> >                                                    void *data,
> > -                                                 size_t len,
> > +                                                 uint64_t len,
> >                                                    Error **errp)
> >   {
> >       NBDSimpleReply reply;
> > @@ -1910,6 +1910,7 @@ static int coroutine_fn nbd_co_send_simple_reply(NBDClient *client,
> >       };
> > 
> >       assert(!len || !nbd_err);
> > +    assert(len <= NBD_MAX_STRING_SIZE);
> 
> NBD_MAX_BUFFER_SIZE ?

No. MAX_STRING_SIZE is 4k, MAX_BUFFER_SIZE is 32M.  The smaller size
is the correct bound here (an error message has to be transmitted as a
single string, and the recipient does not expect more than a 4k error
message).

> 
> with that fixed:
> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v6 06/17] nbd/server: Support a request payload
  2023-08-29 17:58 ` [PATCH v6 06/17] nbd/server: Support a request payload Eric Blake
@ 2023-09-05 14:36   ` Vladimir Sementsov-Ogievskiy
  2023-09-06 17:52     ` Eric Blake
  0 siblings, 1 reply; 28+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2023-09-05 14:36 UTC (permalink / raw)
  To: Eric Blake, qemu-devel; +Cc: qemu-block

On 29.08.23 20:58, Eric Blake wrote:
> Upcoming additions to support NBD 64-bit effect lengths allow for the
> possibility to distinguish between payload length (capped at 32M) and
> effect length (64 bits, although we generally assume 63 bits because
> of off_t limitations).  Without that extension, only the NBD_CMD_WRITE
> request has a payload; but with the extension, it makes sense to allow
> at least NBD_CMD_BLOCK_STATUS to have both a payload and effect length
> in a future patch (where the payload is a limited-size struct that in
> turn gives the real effect length as well as a subset of known ids for
> which status is requested).  Other future NBD commands may also have a
> request payload, so the 64-bit extension introduces a new
> NBD_CMD_FLAG_PAYLOAD_LEN that distinguishes between whether the header
> length is a payload length or an effect length, rather than
> hard-coding the decision based on the command; although a client
> should never send a command with a payload without the negotiation
> phase proving such extension is available, we are now able to
> gracefully fail unexpected client payloads while keeping the
> connection alive.  Note that we do not support the payload version of
> BLOCK_STATUS yet.
> 
> Signed-off-by: Eric Blake <eblake@redhat.com>
> ---
> 
> v5: retitled from v4 13/24, rewrite on top of previous patch's switch
> statement [Vladimir]
> 
> v4: less indentation on several 'if's [Vladimir]
> ---
>   nbd/server.c     | 33 ++++++++++++++++++++++++++++-----
>   nbd/trace-events |  1 +
>   2 files changed, 29 insertions(+), 5 deletions(-)
> 
> diff --git a/nbd/server.c b/nbd/server.c
> index dd3ab59224c..adcfcdeacb7 100644
> --- a/nbd/server.c
> +++ b/nbd/server.c
> @@ -2334,7 +2334,8 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
>                                                  Error **errp)
>   {
>       NBDClient *client = req->client;
> -    bool check_length = false;
> +    bool extended_with_payload;
> +    bool check_length;
>       bool check_rofs = false;
>       bool allocate_buffer = false;
>       unsigned payload_len = 0;
> @@ -2350,6 +2351,9 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
> 
>       trace_nbd_co_receive_request_decode_type(request->cookie, request->type,
>                                                nbd_cmd_lookup(request->type));
> +    check_length = extended_with_payload = client->mode >= NBD_MODE_EXTENDED &&
> +        request->flags & NBD_CMD_FLAG_PAYLOAD_LEN;
> +
>       switch (request->type) {
>       case NBD_CMD_DISC:
>           /* Special case: we're going to disconnect without a reply,
> @@ -2366,6 +2370,14 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
>           break;
> 
>       case NBD_CMD_WRITE:
> +        if (client->mode >= NBD_MODE_EXTENDED) {
> +            if (!extended_with_payload) {
> +                /* The client is noncompliant. Trace it, but proceed. */
> +                trace_nbd_co_receive_ext_payload_compliance(request->from,
> +                                                            request->len);
> +            }
> +            valid_flags |= NBD_CMD_FLAG_PAYLOAD_LEN;
> +        }
>           payload_len = request->len;
>           check_length = true;
>           allocate_buffer = true;
> @@ -2407,6 +2419,15 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,

more context:

     /* Payload and buffer handling. */
     if (!payload_len) {
         req->complete = true;
     }
     if (check_length && request->len > NBD_MAX_BUFFER_SIZE) {
         /* READ, WRITE, CACHE */
         error_setg(errp, "len (%" PRIu64 ") is larger than max len (%u)",
                    request->len, NBD_MAX_BUFFER_SIZE);
         return -EINVAL;
     }


> +    if (extended_with_payload && !allocate_buffer) {

it's correct but strange, as allocate_buffer is (READ or WRITE), and READ is totally unrelated here.

> +        /*
> +         * For now, we don't support payloads on other commands; but
> +         * we can keep the connection alive by ignoring the payload.

> Was it in the specification that we can safely ignore an unknown payload for known commands?

> +         */
> +        assert(request->type != NBD_CMD_WRITE);
> +        payload_len = request->len;

> What I don't like here is that we already set req->complete to true for this request when payload_len was zero.
> 
> Probably, for extended_with_payload requests we should initialize payload_len at the top of the function?

> +        request->len = 0;
> +    }
>       if (allocate_buffer) {
>           /* READ, WRITE */
>           req->data = blk_try_blockalign(client->exp->common.blk,
> @@ -2417,10 +2438,12 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
>           }
>       }
>       if (payload_len) {
> -        /* WRITE */
> -        assert(req->data);
> -        ret = nbd_read(client->ioc, req->data, payload_len,
> -                       "CMD_WRITE data", errp);
> +        if (req->data) {

> and req->data is actually (READ or WRITE) ( == allocate_buffer) as well.
> 
> I'd prefer "if (request->type == NBD_CMD_WRITE)" here

> +            ret = nbd_read(client->ioc, req->data, payload_len,
> +                           "CMD_WRITE data", errp);
> +        } else {
> +            ret = nbd_drop(client->ioc, payload_len, errp);
> +        }
>           if (ret < 0) {
>               return -EIO;
>           }
> diff --git a/nbd/trace-events b/nbd/trace-events
> index f9dccfcfb44..c1a3227613f 100644
> --- a/nbd/trace-events
> +++ b/nbd/trace-events
> @@ -71,6 +71,7 @@ nbd_co_send_extents(uint64_t cookie, unsigned int extents, uint32_t id, uint64_t
>   nbd_co_send_chunk_error(uint64_t cookie, int err, const char *errname, const char *msg) "Send structured error reply: cookie = %" PRIu64 ", error = %d (%s), msg = '%s'"
>   nbd_co_receive_request_decode_type(uint64_t cookie, uint16_t type, const char *name) "Decoding type: cookie = %" PRIu64 ", type = %" PRIu16 " (%s)"
>   nbd_co_receive_request_payload_received(uint64_t cookie, uint64_t len) "Payload received: cookie = %" PRIu64 ", len = %" PRIu64
> +nbd_co_receive_ext_payload_compliance(uint64_t from, uint64_t len) "client sent non-compliant write without payload flag: from=0x%" PRIx64 ", len=0x%" PRIx64
>   nbd_co_receive_align_compliance(const char *op, uint64_t from, uint64_t len, uint32_t align) "client sent non-compliant unaligned %s request: from=0x%" PRIx64 ", len=0x%" PRIx64 ", align=0x%" PRIx32
>   nbd_trip(void) "Reading request"
> 

-- 
Best regards,
Vladimir



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v6 04/17] nbd: Prepare for 64-bit request effect lengths
  2023-09-05 14:24     ` Eric Blake
@ 2023-09-05 14:41       ` Vladimir Sementsov-Ogievskiy
  2023-09-06 17:18         ` Eric Blake
  0 siblings, 1 reply; 28+ messages in thread
From: Vladimir Sementsov-Ogievskiy @ 2023-09-05 14:41 UTC (permalink / raw)
  To: Eric Blake; +Cc: qemu-devel, qemu-block, Kevin Wolf, Hanna Reitz, peter.maydell

On 05.09.23 17:24, Eric Blake wrote:
> On Mon, Sep 04, 2023 at 07:15:04PM +0300, Vladimir Sementsov-Ogievskiy wrote:
>> On 29.08.23 20:58, Eric Blake wrote:
>>> Widen the length field of NBDRequest to 64-bits, although we can
>>> assert that all current uses are still under 32 bits: either because
>>> of NBD_MAX_BUFFER_SIZE which is even smaller (and where size_t can
>>> still be appropriate, even on 32-bit platforms), or because nothing
>>> ever puts us into NBD_MODE_EXTENDED yet (and while future patches will
>>> allow larger transactions, the lengths in play here are still capped
>>> at 32-bit).  There are no semantic changes, other than a typo fix in a
>>> couple of error messages.
>>>
>>> Signed-off-by: Eric Blake <eblake@redhat.com>
>>> ---
>>>
>>> v6: fix sign extension bug
>>>
>>> v5: tweak commit message, adjust a few more spots [Vladimir]
>>>
>>> v4: split off enum changes to earlier patches [Vladimir]
>>
>> [..]
>>
>>> --- a/nbd/server.c
>>> +++ b/nbd/server.c
>>> @@ -1165,7 +1165,7 @@ static int nbd_negotiate_options(NBDClient *client, Error **errp)
>>>            client->optlen = length;
>>>
>>>            if (length > NBD_MAX_BUFFER_SIZE) {
>>> -            error_setg(errp, "len (%" PRIu32" ) is larger than max len (%u)",
>>> +            error_setg(errp, "len (%" PRIu32 ") is larger than max len (%u)",
>>>                           length, NBD_MAX_BUFFER_SIZE);
>>>                return -EINVAL;
>>>            }
>>> @@ -1441,7 +1441,7 @@ static int coroutine_fn nbd_receive_request(NBDClient *client, NBDRequest *reque
>>>        request->type   = lduw_be_p(buf + 6);
>>>        request->cookie = ldq_be_p(buf + 8);
>>>        request->from   = ldq_be_p(buf + 16);
>>> -    request->len    = ldl_be_p(buf + 24);
>>> +    request->len    = (uint32_t)ldl_be_p(buf + 24); /* widen 32 to 64 bits */
>>
>> should it be "(uint64_t)" ?
> 
> No. ldl_be_p() returns an int.  Widening int to 64 bits sign-extends;
> we have to first make it unsigned (by casting to uint32_t) so that we
> then have an unsigned value that widens by zero-extension.
> 
> Maybe we should fix ldl_be_p() and friends to favor unsigned values.
> 'git grep ldul_be' has zero hits, even though Peter just touched the
> docs patch claiming it exists:
> 
> https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg00645.html
> 
> 
>>
>>>
>>>        trace_nbd_receive_request(magic, request->flags, request->type,
>>>                                  request->from, request->len);
>>> @@ -1899,7 +1899,7 @@ static int coroutine_fn nbd_co_send_simple_reply(NBDClient *client,
>>>                                                     NBDRequest *request,
>>>                                                     uint32_t error,
>>>                                                     void *data,
>>> -                                                 size_t len,
>>> +                                                 uint64_t len,
>>>                                                     Error **errp)
>>>    {
>>>        NBDSimpleReply reply;
>>> @@ -1910,6 +1910,7 @@ static int coroutine_fn nbd_co_send_simple_reply(NBDClient *client,
>>>        };
>>>
>>>        assert(!len || !nbd_err);
>>> +    assert(len <= NBD_MAX_STRING_SIZE);
>>
>> NBD_MAX_BUFFER_SIZE ?
> 
> No. MAX_STRING_SIZE is 4k, MAX_BUFFER_SIZE is 32M.  The smaller size
> is the correct bound here (an error message has to be transmitted as a
> single string, and the recipient does not expect more than a 4k error
> message).
> 

I'm missing something... Why is it an error message? It may be a simple reply for CMD_READ, where len is request->len.


-- 
Best regards,
Vladimir



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v6 04/17] nbd: Prepare for 64-bit request effect lengths
  2023-09-05 14:41       ` Vladimir Sementsov-Ogievskiy
@ 2023-09-06 17:18         ` Eric Blake
  0 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-09-06 17:18 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy
  Cc: qemu-devel, qemu-block, Kevin Wolf, Hanna Reitz, peter.maydell

On Tue, Sep 05, 2023 at 05:41:33PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > > > @@ -1899,7 +1899,7 @@ static int coroutine_fn nbd_co_send_simple_reply(NBDClient *client,
> > > >                                                     NBDRequest *request,
> > > >                                                     uint32_t error,
> > > >                                                     void *data,
> > > > -                                                 size_t len,
> > > > +                                                 uint64_t len,
> > > >                                                     Error **errp)
> > > >    {
> > > >        NBDSimpleReply reply;
> > > > @@ -1910,6 +1910,7 @@ static int coroutine_fn nbd_co_send_simple_reply(NBDClient *client,
> > > >        };
> > > > 
> > > >        assert(!len || !nbd_err);
> > > > +    assert(len <= NBD_MAX_STRING_SIZE);
> > > 
> > > NBD_MAX_BUFFER_SIZE ?
> > 
> > No. MAX_STRING_SIZE is 4k, MAX_BUFFER_SIZE is 32M.  The smaller size
> > is the correct bound here (an error message has to be transmitted as a
> > single string, and the recipient does not expect more than a 4k error
> > message).
> > 
> 
> I'm missing something... Why is it an error message? It may be a simple reply for CMD_READ, where len is request->len.

Oh; I was confusing this with the length of an error message; but when
an error message is sent over the wire, it's done with
nbd_co_send_chunk_error().  And a quick audit only shows two callers
of nbd_co_send_simple_reply(): nbd_send_generic_reply() which only
uses it with data/len NULL/0; and nbd_do_cmd_read() does indeed use up
to MAX_BUFFER_SIZE when structured mode was not negotiated.  Our unit
tests aren't covering interoperability with clients that don't request
structured replies, so I wasn't seeing the problem with just 'make
check' or any of the iotests; but libnbd's testsuite does do interop
testing, and there I was able to trip this assertion.  I'll fix the
assertion.
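
The likely shape of the fix (a sketch only, not the final patch) is just
to relax the bound to the buffer limit that a plain CMD_READ reply can
legitimately carry:

    assert(!len || !nbd_err);
    assert(len <= NBD_MAX_BUFFER_SIZE);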

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v6 06/17] nbd/server: Support a request payload
  2023-09-05 14:36   ` Vladimir Sementsov-Ogievskiy
@ 2023-09-06 17:52     ` Eric Blake
  2023-09-08 17:43       ` Eric Blake
  0 siblings, 1 reply; 28+ messages in thread
From: Eric Blake @ 2023-09-06 17:52 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: qemu-devel, qemu-block

On Tue, Sep 05, 2023 at 05:36:15PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> On 29.08.23 20:58, Eric Blake wrote:
> > Upcoming additions to support NBD 64-bit effect lengths allow for the
> > possibility to distinguish between payload length (capped at 32M) and
> > effect length (64 bits, although we generally assume 63 bits because
> > of off_t limitations).  Without that extension, only the NBD_CMD_WRITE
> > request has a payload; but with the extension, it makes sense to allow
> > at least NBD_CMD_BLOCK_STATUS to have both a payload and effect length
> > in a future patch (where the payload is a limited-size struct that in
> > turn gives the real effect length as well as a subset of known ids for
> > which status is requested).  Other future NBD commands may also have a
> > request payload, so the 64-bit extension introduces a new
> > NBD_CMD_FLAG_PAYLOAD_LEN that distinguishes between whether the header
> > length is a payload length or an effect length, rather than
> > hard-coding the decision based on the command; although a client
> > should never send a command with a payload without the negotiation
> > phase proving such extension is available, we are now able to
> > gracefully fail unexpected client payloads while keeping the
> > connection alive.  Note that we do not support the payload version of
> > BLOCK_STATUS yet.
> > 
> > Signed-off-by: Eric Blake <eblake@redhat.com>
> > ---
> > 
> > v5: retitled from v4 13/24, rewrite on top of previous patch's switch
> > statement [Vladimir]
> > 
> > v4: less indentation on several 'if's [Vladimir]
> > ---
> >   nbd/server.c     | 33 ++++++++++++++++++++++++++++-----
> >   nbd/trace-events |  1 +
> >   2 files changed, 29 insertions(+), 5 deletions(-)
> > 
> > diff --git a/nbd/server.c b/nbd/server.c
> > index dd3ab59224c..adcfcdeacb7 100644
> > --- a/nbd/server.c
> > +++ b/nbd/server.c
> > @@ -2334,7 +2334,8 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
> >                                                  Error **errp)
> >   {
> >       NBDClient *client = req->client;
> > -    bool check_length = false;
> > +    bool extended_with_payload;
> > +    bool check_length;
> >       bool check_rofs = false;
> >       bool allocate_buffer = false;
> >       unsigned payload_len = 0;
> > @@ -2350,6 +2351,9 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
> > 
> >       trace_nbd_co_receive_request_decode_type(request->cookie, request->type,
> >                                                nbd_cmd_lookup(request->type));
> > +    check_length = extended_with_payload = client->mode >= NBD_MODE_EXTENDED &&
> > +        request->flags & NBD_CMD_FLAG_PAYLOAD_LEN;
> > +
> >       switch (request->type) {
> >       case NBD_CMD_DISC:
> >           /* Special case: we're going to disconnect without a reply,
> > @@ -2366,6 +2370,14 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
> >           break;
> > 
> >       case NBD_CMD_WRITE:
> > +        if (client->mode >= NBD_MODE_EXTENDED) {
> > +            if (!extended_with_payload) {
> > +                /* The client is noncompliant. Trace it, but proceed. */
> > +                trace_nbd_co_receive_ext_payload_compliance(request->from,
> > +                                                            request->len);
> > +            }
> > +            valid_flags |= NBD_CMD_FLAG_PAYLOAD_LEN;
> > +        }
> >           payload_len = request->len;
> >           check_length = true;
> >           allocate_buffer = true;
> > @@ -2407,6 +2419,15 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
> 
> more context:
> 
>     /* Payload and buffer handling. */
>     if (!payload_len) {
>         req->complete = true;

At this point, payload_len is equal to 0 for all but NBD_CMD_WRITE.  [1]

>     }
>     if (check_length && request->len > NBD_MAX_BUFFER_SIZE) {
>         /* READ, WRITE, CACHE */
>         error_setg(errp, "len (%" PRIu64 ") is larger than max len (%u)",
>                    request->len, NBD_MAX_BUFFER_SIZE);
>         return -EINVAL;
>     }
> 
> 
> > +    if (extended_with_payload && !allocate_buffer) {
> 
> it's correct but strange, as allocate_buffer is (READ or WRITE), and READ is totally unrelated here.

Oh, you do have a point.  If a client mistakenly passes the
extended_with_payload flag on NBD_CMD_READ, we end up skipping this
code which tries to parse off that payload, meaning we could be out of
sync for reacting to the next command; if the client is particularly
malicious, they could send payload that resembles another valid
command.  Checking specifically for !WRITE rather than for
!allocate_buffer is more accurate, although I was trying to avoid
duplicating checks that were covered in the switch statement above.

I'll have to think about this a bit.

> 
> > +        /*
> > +         * For now, we don't support payloads on other commands; but
> > +         * we can keep the connection alive by ignoring the payload.
> 
> > Was it in the specification that we can safely ignore an unknown payload for known commands?

Well, at this point, the client is already non-compliant, so anything
we do is best effort only (the client may have set the bit but not
sent any payload, in which case we block until the client does send
another command but we consume that command as payload; or it sent
more payload than the size it advertises and so we see a magic number
mismatch when trying to interpret those extra bytes as the next
command).

But yes, the current spec wording mentions consuming payload bytes and
then failing with NBD_EINVAL.  Is there a potential that we need to
change the spec wording, if we think a malicious client can abuse the
flag to force the server into a deadlock scenario which can be abused
as a denial-of-service attack against other clients, compared to the
more conservative approach of just disconnecting because the client
sent a bad flag?  Generally, I look at disconnecting in response to
client behavior as the least favorable response to take; attempting to
return an error message is nicer.  And being blocked on I/O is not
necessarily a denial of service (it isn't actively consuming CPU
cycles, and we don't stop servicing parallel clients).

> 
> > +         */
> > +        assert(request->type != NBD_CMD_WRITE);
> > +        payload_len = request->len;
> 
> > What I don't like here is that we already set req->complete to true for this request when payload_len was zero.
> > 
> > Probably, for extended_with_payload requests we should initialize payload_len at the top of the function?

Well, we DO initialize it to 0, and then assign it under NBD_CMD_WRITE.
But the code that sets req->complete should probably NOT be setting it
if we are about to attempt reading payload bytes; that is, back at
[1], the code there probably needs to be conditional on
extended_with_payload.

> 
> > +        request->len = 0;
> > +    }
> >       if (allocate_buffer) {>           /* READ, WRITE */
> >           req->data = blk_try_blockalign(client->exp->common.blk,
> > @@ -2417,10 +2438,12 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
> >           }
> >       }
> >       if (payload_len) {
> > -        /* WRITE */
> > -        assert(req->data);
> > -        ret = nbd_read(client->ioc, req->data, payload_len,
> > -                       "CMD_WRITE data", errp);
> > +        if (req->data) {
> 
> > and req->data is actually (READ or WRITE) ( == allocate_buffer) as well.
> > 
> > I'd prefer "if (request->type == NBD_CMD_WRITE)" here

Looks like I'll be respinning this patch slightly.

> 
> > +            ret = nbd_read(client->ioc, req->data, payload_len,
> > +                           "CMD_WRITE data", errp);
> > +        } else {
> > +            ret = nbd_drop(client->ioc, payload_len, errp);
> > +        }
> >           if (ret < 0) {
> >               return -EIO;
> >           }
> > diff --git a/nbd/trace-events b/nbd/trace-events
> > index f9dccfcfb44..c1a3227613f 100644
> > --- a/nbd/trace-events
> > +++ b/nbd/trace-events
> > @@ -71,6 +71,7 @@ nbd_co_send_extents(uint64_t cookie, unsigned int extents, uint32_t id, uint64_t
> >   nbd_co_send_chunk_error(uint64_t cookie, int err, const char *errname, const char *msg) "Send structured error reply: cookie = %" PRIu64 ", error = %d (%s), msg = '%s'"
> >   nbd_co_receive_request_decode_type(uint64_t cookie, uint16_t type, const char *name) "Decoding type: cookie = %" PRIu64 ", type = %" PRIu16 " (%s)"
> >   nbd_co_receive_request_payload_received(uint64_t cookie, uint64_t len) "Payload received: cookie = %" PRIu64 ", len = %" PRIu64
> > +nbd_co_receive_ext_payload_compliance(uint64_t from, uint64_t len) "client sent non-compliant write without payload flag: from=0x%" PRIx64 ", len=0x%" PRIx64
> >   nbd_co_receive_align_compliance(const char *op, uint64_t from, uint64_t len, uint32_t align) "client sent non-compliant unaligned %s request: from=0x%" PRIx64 ", len=0x%" PRIx64 ", align=0x%" PRIx32
> >   nbd_trip(void) "Reading request"
> > 
> 
> -- 
> Best regards,
> Vladimir
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v6 06/17] nbd/server: Support a request payload
  2023-09-06 17:52     ` Eric Blake
@ 2023-09-08 17:43       ` Eric Blake
  0 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-09-08 17:43 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: qemu-devel, qemu-block

On Wed, Sep 06, 2023 at 12:52:22PM -0500, Eric Blake wrote:
> On Tue, Sep 05, 2023 at 05:36:15PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> > On 29.08.23 20:58, Eric Blake wrote:
> > > Upcoming additions to support NBD 64-bit effect lengths allow for the
> > > possibility to distinguish between payload length (capped at 32M) and
> > > effect length (64 bits, although we generally assume 63 bits because
> > > of off_t limitations).  Without that extension, only the NBD_CMD_WRITE
> > > request has a payload; but with the extension, it makes sense to allow
> > > at least NBD_CMD_BLOCK_STATUS to have both a payload and effect length
> > > in a future patch (where the payload is a limited-size struct that in
> > > turn gives the real effect length as well as a subset of known ids for
> > > which status is requested).  Other future NBD commands may also have a
> > > request payload, so the 64-bit extension introduces a new
> > > NBD_CMD_FLAG_PAYLOAD_LEN that distinguishes between whether the header
> > > length is a payload length or an effect length, rather than
> > > hard-coding the decision based on the command; although a client
> > > should never send a command with a payload without the negotiation
> > > phase proving such extension is available, we are now able to
> > > gracefully fail unexpected client payloads while keeping the
> > > connection alive.  Note that we do not support the payload version of
> > > BLOCK_STATUS yet.
> > > 
> > > Signed-off-by: Eric Blake <eblake@redhat.com>
> > > ---
> > > 
> > > v5: retitled from v4 13/24, rewrite on top of previous patch's switch
> > > statement [Vladimir]
> > > 
> > > v4: less indentation on several 'if's [Vladimir]
> > > ---
> > >   nbd/server.c     | 33 ++++++++++++++++++++++++++++-----
> > >   nbd/trace-events |  1 +
> > >   2 files changed, 29 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/nbd/server.c b/nbd/server.c
> > > index dd3ab59224c..adcfcdeacb7 100644
> > > --- a/nbd/server.c
> > > +++ b/nbd/server.c
> > > @@ -2334,7 +2334,8 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
> > >                                                  Error **errp)
> > >   {
> > >       NBDClient *client = req->client;
> > > -    bool check_length = false;
> > > +    bool extended_with_payload;
> > > +    bool check_length;
> > >       bool check_rofs = false;
> > >       bool allocate_buffer = false;
> > >       unsigned payload_len = 0;
> > > @@ -2350,6 +2351,9 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
> > > 
> > >       trace_nbd_co_receive_request_decode_type(request->cookie, request->type,
> > >                                                nbd_cmd_lookup(request->type));
> > > +    check_length = extended_with_payload = client->mode >= NBD_MODE_EXTENDED &&
> > > +        request->flags & NBD_CMD_FLAG_PAYLOAD_LEN;
> > > +
> > >       switch (request->type) {
> > >       case NBD_CMD_DISC:
> > >           /* Special case: we're going to disconnect without a reply,
> > > @@ -2366,6 +2370,14 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
> > >           break;
> > > 
> > >       case NBD_CMD_WRITE:
> > > +        if (client->mode >= NBD_MODE_EXTENDED) {
> > > +            if (!extended_with_payload) {
> > > +                /* The client is noncompliant. Trace it, but proceed. */
> > > +                trace_nbd_co_receive_ext_payload_compliance(request->from,
> > > +                                                            request->len);
> > > +            }
> > > +            valid_flags |= NBD_CMD_FLAG_PAYLOAD_LEN;
> > > +        }
> > >           payload_len = request->len;
> > >           check_length = true;
> > >           allocate_buffer = true;
> > > @@ -2407,6 +2419,15 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
> > 
> > more context:
> > 
> >     /* Payload and buffer handling. */
> >     if (!payload_len) {
> >         req->complete = true;
> 
> At this point, payload_len is equal to 0 for all but NBD_CMD_WRITE.  [1]
> 
> >     }
> >     if (check_length && request->len > NBD_MAX_BUFFER_SIZE) {
> >         /* READ, WRITE, CACHE */
> >         error_setg(errp, "len (%" PRIu64 ") is larger than max len (%u)",
> >                    request->len, NBD_MAX_BUFFER_SIZE);
> >         return -EINVAL;
> >     }
> > 
> > 
> > > +    if (extended_with_payload && !allocate_buffer) {
> > 
> > it's correct but strange, as allocate_buffer is (READ or WRITE), and READ is totally unrelated here.
> 
> Oh, you do have a point.  If a client mistakenly passes the
> extended_with_payload flag on NBD_CMD_READ, we end up skipping this
> code which tries to parse off that payload, meaning we could be out of
> sync for reacting to the next command; if the client is particularly
> malicious, they could send payload that resembles another valid
> command.  Checking specifically for !WRITE rather than for
> !allocate_buffer is more accurate, although I was trying to avoid
> duplicating checks that were covered in the switch statement above.
> 
> I'll have to think about this a bit.
> 
> > 
> > > +        /*
> > > +         * For now, we don't support payloads on other commands; but
> > > +         * we can keep the connection alive by ignoring the payload.
> > 
> > Was it in the specification that we can safely ignore an unknown payload for known commands?
> 
> Well, at this point, the client is already non-compliant, so anything
> we do is best effort only (the client may have set the bit but not
> sent any payload, in which case we block until the client does send
> another command but we consume that command as payload; or it sent
> more payload than the size it advertises and so we see a magic number
> mismatch when trying to interpret those extra bytes as the next
> command).
> 
> But yes, the current spec wording mentions consuming payload bytes and
> then failing with NBD_EINVAL.  Is there a potential that we need to
> change the spec wording, if we think a malicious client can abuse the
> flag to force the server into a deadlock scenario which can be abused
> as a denial-of-service attack against other clients, compared to the
> more conservative approach of just disconnecting because the client
> sent a bad flag?  Generally, I look at disconnecting in response to
> client behavior as the least favorable response to take; attempting to
> return an error message is nicer.  And being blocked on I/O is not
> necessarily a denial of service (it isn't actively consuming CPU
> cycles, and we don't stop servicing parallel clients).
> 
> > 
> > > +         */
> > > +        assert(request->type != NBD_CMD_WRITE);
> > > +        payload_len = request->len;
> > 
> > What I don't like here is that we already set req->complete to true for this request when payload_len was zero.
> > 
> > Probably, for extended_with_payload requests we should initialize payload_len at the top of the function?
> 
> Well, we DO initialize it to 0, and then assign it under NBD_CMD_WRITE.
> But the code that sets req->complete should probably NOT be setting it
> if we are about to attempt reading payload bytes; that is, back at
> [1], the code there probably needs to be conditional on
> extended_with_payload.
> 
> > 
> > > +        request->len = 0;
> > > +    }
> > >       if (allocate_buffer) {
> > >           /* READ, WRITE */
> > >           req->data = blk_try_blockalign(client->exp->common.blk,
> > > @@ -2417,10 +2438,12 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
> > >           }
> > >       }
> > >       if (payload_len) {
> > > -        /* WRITE */
> > > -        assert(req->data);
> > > -        ret = nbd_read(client->ioc, req->data, payload_len,
> > > -                       "CMD_WRITE data", errp);
> > > +        if (req->data) {
> > 
> > and req->data is actually (READ or WRITE) ( == allocate_buffer) as well.
> > 
> > I'd prefer "if (request->type == NBD_CMD_WRITE)" here
> 
> Looks like I'll be respinning this patch slightly.

Here's what I'm planning to squash in.  Your idea of setting
payload_len up front makes sense.  I will also have to squash in some
followups to 17/17, when I start accepting payload for
NBD_CMD_BLOCK_STATUS.

diff --git i/nbd/server.c w/nbd/server.c
index d54dd0604bf..35d3f8989f2 100644
--- i/nbd/server.c
+++ w/nbd/server.c
@@ -2317,36 +2317,41 @@ static int coroutine_fn nbd_co_send_bitmap(NBDClient *client,
  * to the client (although the caller may still need to disconnect after
  * reporting the error).
  */
 static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
                                                NBDRequest *request,
                                                Error **errp)
 {
     NBDClient *client = req->client;
     bool extended_with_payload;
-    bool check_length;
+    bool check_length = false;
     bool check_rofs = false;
     bool allocate_buffer = false;
+    bool payload_okay = false;
     unsigned payload_len = 0;
     int valid_flags = NBD_CMD_FLAG_FUA;
     int ret;

     g_assert(qemu_in_coroutine());
     assert(client->recv_coroutine == qemu_coroutine_self());
     ret = nbd_receive_request(client, request, errp);
     if (ret < 0) {
         return ret;
     }

     trace_nbd_co_receive_request_decode_type(request->cookie, request->type,
                                              nbd_cmd_lookup(request->type));
-    check_length = extended_with_payload = client->mode >= NBD_MODE_EXTENDED &&
+    extended_with_payload = client->mode >= NBD_MODE_EXTENDED &&
         request->flags & NBD_CMD_FLAG_PAYLOAD_LEN;
+    if (extended_with_payload) {
+        payload_len = request->len;
+        check_length = true;
+    }

     switch (request->type) {
     case NBD_CMD_DISC:
         /* Special case: we're going to disconnect without a reply,
          * whether or not flags, from, or len are bogus */
         req->complete = true;
         return -EIO;

     case NBD_CMD_READ:
@@ -2360,18 +2365,19 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
     case NBD_CMD_WRITE:
         if (client->mode >= NBD_MODE_EXTENDED) {
             if (!extended_with_payload) {
                 /* The client is noncompliant. Trace it, but proceed. */
                 trace_nbd_co_receive_ext_payload_compliance(request->from,
                                                             request->len);
             }
             valid_flags |= NBD_CMD_FLAG_PAYLOAD_LEN;
         }
+        payload_okay = true;
         payload_len = request->len;
         check_length = true;
         allocate_buffer = true;
         check_rofs = true;
         break;

     case NBD_CMD_FLUSH:
         break;

@@ -2401,38 +2407,38 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
     if (!payload_len) {
         req->complete = true;
     }
     if (check_length && request->len > NBD_MAX_BUFFER_SIZE) {
         /* READ, WRITE, CACHE */
         error_setg(errp, "len (%" PRIu64 ") is larger than max len (%u)",
                    request->len, NBD_MAX_BUFFER_SIZE);
         return -EINVAL;
     }
-    if (extended_with_payload && !allocate_buffer) {
+    if (payload_len && !payload_okay) {
         /*
          * For now, we don't support payloads on other commands; but
          * we can keep the connection alive by ignoring the payload.
          */
         assert(request->type != NBD_CMD_WRITE);
-        payload_len = request->len;
         request->len = 0;
     }
     if (allocate_buffer) {
         /* READ, WRITE */
         req->data = blk_try_blockalign(client->exp->common.blk,
                                        request->len);
         if (req->data == NULL) {
             error_setg(errp, "No memory");
             return -ENOMEM;
         }
     }
     if (payload_len) {
-        if (req->data) {
+        if (payload_okay) {
+            assert(req->data);
             ret = nbd_read(client->ioc, req->data, payload_len,
                            "CMD_WRITE data", errp);
         } else {
             ret = nbd_drop(client->ioc, payload_len, errp);
         }
         if (ret < 0) {
             return -EIO;
         }
         req->complete = true;


-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org



^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v6 05/17] nbd/server: Refactor handling of command sanity checks
  2023-09-04 16:53   ` Vladimir Sementsov-Ogievskiy
@ 2023-09-25 19:03     ` Eric Blake
  0 siblings, 0 replies; 28+ messages in thread
From: Eric Blake @ 2023-09-25 19:03 UTC (permalink / raw)
  To: Vladimir Sementsov-Ogievskiy; +Cc: qemu-devel, qemu-block

On Mon, Sep 04, 2023 at 07:53:10PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> On 29.08.23 20:58, Eric Blake wrote:
> > Upcoming additions to support NBD 64-bit effect lengths will add a new
> > command flag NBD_CMD_FLAG_PAYLOAD_LEN that needs to be considered in
> > our sanity checks of the client's messages (that is, more than just
> > CMD_WRITE have the potential to carry a client payload when extended
> > headers are in effect).  But before we can start to support that, it
> > is easier to first refactor the existing set of various if statements
> > over open-coded combinations of request->type to instead be a single
> > switch statement over all command types that sets witnesses, then
> > straight-line processing based on the witnesses.  No semantic change
> > is intended.
> > 
> > Signed-off-by: Eric Blake <eblake@redhat.com>
> 
> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

Patches 1-5 queued through my latest NBD pull request; will respin
patches 6 and following on top of latest master and post a v7 for one
more round of reviews.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org



^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2023-09-25 19:04 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-29 17:58 [PATCH v6 00/17] qemu patches for 64-bit NBD extensions Eric Blake
2023-08-29 17:58 ` [PATCH v6 01/17] nbd: Replace bool structured_reply with mode enum Eric Blake
2023-09-04 15:23   ` Vladimir Sementsov-Ogievskiy
2023-08-29 17:58 ` [PATCH v6 02/17] nbd/client: Pass mode through to nbd_send_request Eric Blake
2023-08-29 17:58 ` [PATCH v6 03/17] nbd: Add types for extended headers Eric Blake
2023-08-29 17:58 ` [PATCH v6 04/17] nbd: Prepare for 64-bit request effect lengths Eric Blake
2023-09-04 16:15   ` Vladimir Sementsov-Ogievskiy
2023-09-05 14:24     ` Eric Blake
2023-09-05 14:41       ` Vladimir Sementsov-Ogievskiy
2023-09-06 17:18         ` Eric Blake
2023-08-29 17:58 ` [PATCH v6 05/17] nbd/server: Refactor handling of command sanity checks Eric Blake
2023-09-04 16:53   ` Vladimir Sementsov-Ogievskiy
2023-09-25 19:03     ` Eric Blake
2023-08-29 17:58 ` [PATCH v6 06/17] nbd/server: Support a request payload Eric Blake
2023-09-05 14:36   ` Vladimir Sementsov-Ogievskiy
2023-09-06 17:52     ` Eric Blake
2023-09-08 17:43       ` Eric Blake
2023-08-29 17:58 ` [PATCH v6 07/17] nbd/server: Prepare to receive extended header requests Eric Blake
2023-08-29 17:58 ` [PATCH v6 08/17] nbd/server: Prepare to send extended header replies Eric Blake
2023-08-29 17:58 ` [PATCH v6 09/17] nbd/server: Support 64-bit block status Eric Blake
2023-08-29 17:58 ` [PATCH v6 10/17] nbd/server: Enable initial support for extended headers Eric Blake
2023-08-29 17:58 ` [PATCH v6 11/17] nbd/client: Plumb errp through nbd_receive_replies Eric Blake
2023-08-29 17:58 ` [PATCH v6 12/17] nbd/client: Initial support for extended headers Eric Blake
2023-08-29 17:58 ` [PATCH v6 13/17] nbd/client: Accept 64-bit block status chunks Eric Blake
2023-08-29 17:58 ` [PATCH v6 14/17] nbd/client: Request extended headers during negotiation Eric Blake
2023-08-29 17:58 ` [PATCH v6 15/17] nbd/server: Refactor list of negotiated meta contexts Eric Blake
2023-08-29 17:58 ` [PATCH v6 16/17] nbd/server: Prepare for per-request filtering of BLOCK_STATUS Eric Blake
2023-08-29 17:58 ` [PATCH v6 17/17] nbd/server: Add FLAG_PAYLOAD support to CMD_BLOCK_STATUS Eric Blake

This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox,
as well as URLs for NNTP newsgroup(s).