* [PATCH v2 0/5] NVMe: Add SPDM over the storage transport support
@ 2025-08-28  7:25 Wilfred Mallawa
  2025-08-28  7:25 ` [PATCH v2 1/5] spdm-socket: add separate send/recv functions Wilfred Mallawa
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Wilfred Mallawa @ 2025-08-28  7:25 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Keith Busch, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
	Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
	Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
	Jonathan Cameron, Wilfred Mallawa

From: Wilfred Mallawa <wilfred.mallawa@wdc.com>

This series extends the existing SPDM support in QEMU to support the DSP0286
SPDM Storage Transport [1] for NVMe. The SPDM Storage Transport uses the NVMe
Admin Security Send/Receive commands, so support for these commands has also
been added.

With the addition of a new `spdm_trans` CLI argument for NVMe controllers,
users can specify `spdm_trans=nvme` or `spdm_trans=doe` to select the SPDM
transport. The `doe` option is the current default; `nvme` selects the SPDM
Storage Transport for the controller, where SPDM communication happens over
the NVMe Admin Security Send/Receive commands.

Support for DSP0286 already exists in `libspdm` [2], and support for the QEMU
SPDM server is being upstreamed to `spdm-utils` [3]. This series was tested
using `spdm-utils` as the QEMU SPDM server, with SPDM Storage Transport
support built with `libspdm` v3.8.0, and `spdm-utils` also as the SPDM
requester.

Changes V1 -> V2:
	- spdm_socket_rsp() now uses the new spdm_socket_send/receive()
	  functions. spdm_socket_command_valid() is added to parse the
	  command value in case some bytes were received (result = true) but
	  with an invalid command.

	- Added inline comments to describe fields of
	  StorageSpdmTransportHeader. Checkpatch generates warnings, but lots of
	  existing code does this. The QEMU_PACKED attribute now follows the
	  StorageSpdmTransportHeader struct definition.

	- Use extract32() instead of manual shifting/masking in
	  nvme_sec_prot_spdm_send/recv().

	- Use g_autofree for send/recv buffer allocation
	  in nvme_sec_prot_spdm_send/recv().
	
	- Added explicit fallthrough comment for checking `secp` in
	  nvme_security_receive()
	
	- Added enum support for the SPDM transport type, such that a
	  user-defined transport type string can be mapped to the respective
	  enum for internal use.

Wilfred Mallawa (5):
  spdm-socket: add separate send/recv functions
  spdm: add spdm storage transport virtual header
  hw/nvme: add NVMe Admin Security SPDM support
  spdm: define SPDM transport enum types
  hw/nvme: connect SPDM over NVMe Security Send/Recv

 backends/spdm-socket.c       |  79 ++++++++++--
 docs/specs/spdm.rst          |  10 +-
 hw/nvme/ctrl.c               | 242 +++++++++++++++++++++++++++++++++--
 hw/nvme/nvme.h               |   5 +
 include/block/nvme.h         |  15 +++
 include/hw/pci/pci_device.h  |   2 +
 include/system/spdm-socket.h |  66 +++++++++-
 7 files changed, 391 insertions(+), 28 deletions(-)

-- 
2.51.0




* [PATCH v2 1/5] spdm-socket: add separate send/recv functions
  2025-08-28  7:25 [PATCH v2 0/5] NVMe: Add SPDM over the storage transport support Wilfred Mallawa
@ 2025-08-28  7:25 ` Wilfred Mallawa
  2025-08-28  7:25 ` [PATCH v2 2/5] spdm: add spdm storage transport virtual header Wilfred Mallawa
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Wilfred Mallawa @ 2025-08-28  7:25 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Keith Busch, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
	Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
	Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
	Jonathan Cameron, Wilfred Mallawa

From: Wilfred Mallawa <wilfred.mallawa@wdc.com>

This is to support uni-directional transports such as SPDM over Storage,
as specified by DMTF DSP0286.

Also update spdm_socket_rsp() to use the new send()/receive() functions. For
the case of spdm_socket_receive(), this allows us to do error checking
in one place with the addition of spdm_socket_command_valid().
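
As a rough sketch (not part of this patch), a device model might drive a
storage IF_SEND exchange with the split helpers along these lines; the
function name, buffer contents and error handling are placeholders, while
the helper signatures and constants come from this patch:

#include "system/spdm-socket.h"

/* Illustrative only: send one IF_SEND message and wait for the short ACK */
static bool example_if_send(int spdm_socket, void *msg, uint32_t msg_len)
{
    uint16_t status;
    uint32_t recvd;

    /* Push the request to the SPDM server */
    if (!spdm_socket_send(spdm_socket, SPDM_SOCKET_STORAGE_CMD_IF_SEND,
                          SPDM_SOCKET_TRANSPORT_TYPE_NVME, msg, msg_len)) {
        return false;
    }

    /* IF_SEND carries no SPDM payload back; the server acks with a status */
    recvd = spdm_socket_receive(spdm_socket, SPDM_SOCKET_TRANSPORT_TYPE_NVME,
                                &status, SPDM_SOCKET_MAX_MSG_STATUS_LEN);

    return recvd == SPDM_SOCKET_MAX_MSG_STATUS_LEN;
}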

Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
---
 backends/spdm-socket.c       | 56 +++++++++++++++++++++++++++++-------
 include/system/spdm-socket.h | 35 ++++++++++++++++++++++
 2 files changed, 80 insertions(+), 11 deletions(-)

diff --git a/backends/spdm-socket.c b/backends/spdm-socket.c
index 2c709c68c8..3d264814df 100644
--- a/backends/spdm-socket.c
+++ b/backends/spdm-socket.c
@@ -184,28 +184,62 @@ int spdm_socket_connect(uint16_t port, Error **errp)
     return client_socket;
 }
 
-uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
-                         void *req, uint32_t req_len,
-                         void *rsp, uint32_t rsp_len)
+static bool spdm_socket_command_valid(uint32_t command)
+{
+    switch (command) {
+    case SPDM_SOCKET_COMMAND_NORMAL:
+    case SPDM_SOCKET_STORAGE_CMD_IF_SEND:
+    case SPDM_SOCKET_STORAGE_CMD_IF_RECV:
+    case SOCKET_SPDM_STORAGE_ACK_STATUS:
+    case SPDM_SOCKET_COMMAND_OOB_ENCAP_KEY_UPDATE:
+    case SPDM_SOCKET_COMMAND_CONTINUE:
+    case SPDM_SOCKET_COMMAND_SHUTDOWN:
+    case SPDM_SOCKET_COMMAND_UNKOWN:
+    case SPDM_SOCKET_COMMAND_TEST:
+        return true;
+    default:
+        return false;
+    }
+}
+
+uint32_t spdm_socket_receive(const int socket, uint32_t transport_type,
+                             void *rsp, uint32_t rsp_len)
 {
     uint32_t command;
     bool result;
 
-    result = send_platform_data(socket, transport_type,
-                                SPDM_SOCKET_COMMAND_NORMAL,
-                                req, req_len);
-    if (!result) {
+    result = receive_platform_data(socket, transport_type, &command,
+                                   (uint8_t *)rsp, &rsp_len);
+
+    /* we may have received some data, but check if the command is valid */
+    if (!result || !spdm_socket_command_valid(command)) {
         return 0;
     }
 
-    result = receive_platform_data(socket, transport_type, &command,
-                                   (uint8_t *)rsp, &rsp_len);
+    return rsp_len;
+}
+
+bool spdm_socket_send(const int socket, uint32_t socket_cmd,
+                      uint32_t transport_type, void *req, uint32_t req_len)
+{
+    return send_platform_data(socket, transport_type,
+                              socket_cmd, req, req_len);
+}
+
+uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
+                         void *req, uint32_t req_len,
+                         void *rsp, uint32_t rsp_len)
+{
+    bool result;
+
+    result = spdm_socket_send(socket, SPDM_SOCKET_COMMAND_NORMAL,
+                              transport_type, req, req_len);
     if (!result) {
         return 0;
     }
 
-    assert(command != 0);
-
+    rsp_len = spdm_socket_receive(socket, transport_type, (uint8_t *)rsp,
+                                  rsp_len);
     return rsp_len;
 }
 
diff --git a/include/system/spdm-socket.h b/include/system/spdm-socket.h
index 5d8bd9aa4e..2b7d03f82d 100644
--- a/include/system/spdm-socket.h
+++ b/include/system/spdm-socket.h
@@ -50,6 +50,35 @@ uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
                          void *req, uint32_t req_len,
                          void *rsp, uint32_t rsp_len);
 
+/**
+ * spdm_socket_receive: Receive a message from an SPDM server
+ * @socket: socket returned from spdm_socket_connect()
+ * @transport_type: SPDM_SOCKET_TRANSPORT_TYPE_* macro
+ * @rsp: response buffer
+ * @rsp_len: response buffer length
+ *
+ * Receives a message from the SPDM server and returns the number of bytes
+ * received or 0 on failure. This can be used to receive a message from the SPDM
+ * server without sending anything first.
+ */
+uint32_t spdm_socket_receive(const int socket, uint32_t transport_type,
+                             void *rsp, uint32_t rsp_len);
+
+/**
+ * spdm_socket_send: Send a message to an SPDM server
+ * @socket: socket returned from spdm_socket_connect()
+ * @socket_cmd: socket command type (normal/if_recv/if_send etc...)
+ * @transport_type: SPDM_SOCKET_TRANSPORT_TYPE_* macro
+ * @req: request buffer
+ * @req_len: request buffer length
+ *
+ * Sends platform data to an SPDM server on socket; returns true on success.
+ * The response from the server must then be fetched by using
+ * spdm_socket_receive().
+ */
+bool spdm_socket_send(const int socket, uint32_t socket_cmd,
+                      uint32_t transport_type, void *req, uint32_t req_len);
+
 /**
  * spdm_socket_close: send a shutdown command to the server
  * @socket: socket returned from spdm_socket_connect()
@@ -60,6 +89,9 @@ uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
 void spdm_socket_close(const int socket, uint32_t transport_type);
 
 #define SPDM_SOCKET_COMMAND_NORMAL                0x0001
+#define SPDM_SOCKET_STORAGE_CMD_IF_SEND           0x0002
+#define SPDM_SOCKET_STORAGE_CMD_IF_RECV           0x0003
+#define SOCKET_SPDM_STORAGE_ACK_STATUS            0x0004
 #define SPDM_SOCKET_COMMAND_OOB_ENCAP_KEY_UPDATE  0x8001
 #define SPDM_SOCKET_COMMAND_CONTINUE              0xFFFD
 #define SPDM_SOCKET_COMMAND_SHUTDOWN              0xFFFE
@@ -68,7 +100,10 @@ void spdm_socket_close(const int socket, uint32_t transport_type);
 
 #define SPDM_SOCKET_TRANSPORT_TYPE_MCTP           0x01
 #define SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE        0x02
+#define SPDM_SOCKET_TRANSPORT_TYPE_SCSI           0x03
+#define SPDM_SOCKET_TRANSPORT_TYPE_NVME           0x04
 
 #define SPDM_SOCKET_MAX_MESSAGE_BUFFER_SIZE       0x1200
+#define SPDM_SOCKET_MAX_MSG_STATUS_LEN            0x02
 
 #endif
-- 
2.51.0




* [PATCH v2 2/5] spdm: add spdm storage transport virtual header
  2025-08-28  7:25 [PATCH v2 0/5] NVMe: Add SPDM over the storage transport support Wilfred Mallawa
  2025-08-28  7:25 ` [PATCH v2 1/5] spdm-socket: add separate send/recv functions Wilfred Mallawa
@ 2025-08-28  7:25 ` Wilfred Mallawa
  2025-08-28  7:25 ` [PATCH v2 3/5] hw/nvme: add NVMe Admin Security SPDM support Wilfred Mallawa
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Wilfred Mallawa @ 2025-08-28  7:25 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Keith Busch, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
	Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
	Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
	Jonathan Cameron, Wilfred Mallawa

From: Wilfred Mallawa <wilfred.mallawa@wdc.com>

This header contains the transport encoding for an SPDM message that
uses the SPDM over Storage transport as defined by the DMTF DSP0286.
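
As an illustration (not part of this patch), the header is intended to be
filled in and prepended to the SPDM payload before it is handed to the SPDM
server, roughly as the NVMe patches later in this series do; the helper name
and arguments below are placeholders:

/* Sketch only: prepend the virtual transport header to an SPDM payload */
static void example_prepend_header(uint8_t *buf, uint8_t spsp0, uint8_t spsp1,
                                   uint32_t payload_len)
{
    StorageSpdmTransportHeader hdr = {
        .security_protocol = 0xE8,  /* DMTF SPDM security protocol */
        .security_protocol_specific = cpu_to_le16((spsp1 << 8) | spsp0),
        .length = cpu_to_le32(payload_len),
    };

    memcpy(buf, &hdr, sizeof(hdr));  /* payload follows at buf + sizeof(hdr) */
}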

Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
---
 include/system/spdm-socket.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/include/system/spdm-socket.h b/include/system/spdm-socket.h
index 2b7d03f82d..6c2cb7b926 100644
--- a/include/system/spdm-socket.h
+++ b/include/system/spdm-socket.h
@@ -88,6 +88,18 @@ bool spdm_socket_send(const int socket, uint32_t socket_cmd,
  */
 void spdm_socket_close(const int socket, uint32_t transport_type);
 
+/**
+ * Defines the transport encoding for SPDM, this information shall be passed
+ * down to the SPDM server, when conforming to the SPDM over Storage standard
+ * as defined by DSP0286.
+ */
+typedef struct {
+    uint8_t security_protocol;              /* Must be 0xE8 for SPDM Commands */
+    uint16_t security_protocol_specific;    /* Bit[7:2] SPDM Operation,
+                                               Bit[1:0] Connection ID */
+    uint32_t length;                        /* Length of the SPDM message */
+} QEMU_PACKED StorageSpdmTransportHeader;
+
 #define SPDM_SOCKET_COMMAND_NORMAL                0x0001
 #define SPDM_SOCKET_STORAGE_CMD_IF_SEND           0x0002
 #define SPDM_SOCKET_STORAGE_CMD_IF_RECV           0x0003
-- 
2.51.0




* [PATCH v2 3/5] hw/nvme: add NVMe Admin Security SPDM support
  2025-08-28  7:25 [PATCH v2 0/5] NVMe: Add SPDM over the storage transport support Wilfred Mallawa
  2025-08-28  7:25 ` [PATCH v2 1/5] spdm-socket: add separate send/recv functions Wilfred Mallawa
  2025-08-28  7:25 ` [PATCH v2 2/5] spdm: add spdm storage transport virtual header Wilfred Mallawa
@ 2025-08-28  7:25 ` Wilfred Mallawa
  2025-08-30  4:17   ` Keith Busch
  2025-08-28  7:25 ` [PATCH v2 4/5] spdm: define SPDM transport enum types Wilfred Mallawa
  2025-08-28  7:25 ` [PATCH v2 5/5] hw/nvme: connect SPDM over NVMe Security Send/Recv Wilfred Mallawa
  4 siblings, 1 reply; 9+ messages in thread
From: Wilfred Mallawa @ 2025-08-28  7:25 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Keith Busch, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
	Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
	Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
	Jonathan Cameron, Wilfred Mallawa

From: Wilfred Mallawa <wilfred.mallawa@wdc.com>

Add NVMe Admin Security Send/Receive command support, including support
for DMTF's SPDM. The transport binding for SPDM is defined in DMTF
DSP0286.
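
For reference, the handlers added here decode the Security Protocol (SECP)
from CDW10 bits 31:24, SPSP1 from bits 23:16, SPSP0 from bits 15:8, and the
transfer/allocation length from CDW11. A host-side sketch of how such a
command could be built follows; the helper name is illustrative and not part
of this patch:

/* Sketch only: fill an Admin Security Send command the way the new
 * handlers expect to decode it */
static void example_fill_security_send(NvmeCmd *cmd, uint8_t secp,
                                       uint8_t spsp1, uint8_t spsp0,
                                       uint32_t transfer_len)
{
    uint32_t dw10 = ((uint32_t)secp << 24) | ((uint32_t)spsp1 << 16) |
                    ((uint32_t)spsp0 << 8);

    cmd->opcode = NVME_ADM_CMD_SECURITY_SEND;
    cmd->cdw10 = cpu_to_le32(dw10);
    cmd->cdw11 = cpu_to_le32(transfer_len);  /* Transfer Length (TL) */
}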

Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
---
 hw/nvme/ctrl.c       | 197 ++++++++++++++++++++++++++++++++++++++++++-
 hw/nvme/nvme.h       |   5 ++
 include/block/nvme.h |  15 ++++
 3 files changed, 216 insertions(+), 1 deletion(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index f5ee6bf260..9358c9e540 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -282,6 +282,8 @@ static const uint32_t nvme_cse_acs_default[256] = {
     [NVME_ADM_CMD_FORMAT_NVM]       = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
     [NVME_ADM_CMD_DIRECTIVE_RECV]   = NVME_CMD_EFF_CSUPP,
     [NVME_ADM_CMD_DIRECTIVE_SEND]   = NVME_CMD_EFF_CSUPP,
+    [NVME_ADM_CMD_SECURITY_SEND]    = NVME_CMD_EFF_CSUPP,
+    [NVME_ADM_CMD_SECURITY_RECV]    = NVME_CMD_EFF_CSUPP,
 };
 
 static const uint32_t nvme_cse_iocs_nvm_default[256] = {
@@ -7282,6 +7284,194 @@ static uint16_t nvme_dbbuf_config(NvmeCtrl *n, const NvmeRequest *req)
     return NVME_SUCCESS;
 }
 
+static uint16_t nvme_sec_prot_spdm_send(NvmeCtrl *n, NvmeRequest *req)
+{
+    StorageSpdmTransportHeader hdr = {0};
+    g_autofree uint8_t *sec_buf = NULL;
+    uint32_t transfer_len = le32_to_cpu(req->cmd.cdw11);
+    uint32_t transport_transfer_len = transfer_len;
+    uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
+    uint32_t recvd;
+    uint16_t nvme_cmd_status, ret;
+    uint8_t secp = extract32(dw10, 24, 8);
+    uint8_t spsp1 = extract32(dw10, 16, 8);
+    uint8_t spsp0 = extract32(dw10, 8, 8);
+    bool spdm_res;
+
+    transport_transfer_len += sizeof(hdr);
+    if (transport_transfer_len > SPDM_SOCKET_MAX_MESSAGE_BUFFER_SIZE) {
+        return NVME_NO_COMPLETE | NVME_DNR;
+    }
+
+    /* Generate the NVMe transport header */
+    hdr.security_protocol = secp;
+    hdr.security_protocol_specific = cpu_to_le16((spsp1 << 8) | spsp0);
+    hdr.length = cpu_to_le32(transfer_len);
+
+    sec_buf = g_malloc0(transport_transfer_len);
+    if (!sec_buf) {
+        return NVME_NO_COMPLETE | NVME_DNR;
+    }
+
+    /* Attach the transport header */
+    memcpy(sec_buf, &hdr, sizeof(hdr));
+    ret = nvme_h2c(n, sec_buf + sizeof(hdr), transfer_len, req);
+    if (ret) {
+        return NVME_NO_COMPLETE | NVME_DNR;
+    }
+
+    spdm_res = spdm_socket_send(n->spdm_socket, SPDM_SOCKET_STORAGE_CMD_IF_SEND,
+                                SPDM_SOCKET_TRANSPORT_TYPE_NVME, sec_buf,
+                                transport_transfer_len);
+    if (!spdm_res) {
+        return NVME_NO_COMPLETE | NVME_DNR;
+    }
+
+    /* The responder shall ack with message status */
+    recvd = spdm_socket_receive(n->spdm_socket, SPDM_SOCKET_TRANSPORT_TYPE_NVME,
+                                (uint8_t *)&nvme_cmd_status,
+                                SPDM_SOCKET_MAX_MSG_STATUS_LEN);
+
+    nvme_cmd_status = cpu_to_be16(nvme_cmd_status);
+
+    if (recvd < SPDM_SOCKET_MAX_MSG_STATUS_LEN) {
+        return NVME_NO_COMPLETE | NVME_DNR;
+    }
+
+    return nvme_cmd_status;
+}
+
+/* From host to controller */
+static uint16_t nvme_security_send(NvmeCtrl *n, NvmeRequest *req)
+{
+    uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
+    uint8_t secp = (dw10 >> 24) & 0xff;
+
+    switch (secp) {
+    case NVME_SEC_PROT_DMTF_SPDM:
+        return nvme_sec_prot_spdm_send(n, req);
+    default:
+        /* Unsupported Security Protocol Type */
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+
+    return NVME_INVALID_FIELD | NVME_DNR;
+}
+
+static uint16_t nvme_sec_prot_spdm_receive(NvmeCtrl *n, NvmeRequest *req)
+{
+    StorageSpdmTransportHeader hdr;
+    g_autofree uint8_t *rsp_spdm_buf = NULL;
+    uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
+    uint32_t alloc_len = le32_to_cpu(req->cmd.cdw11);
+    uint32_t recvd, spdm_res;
+    uint16_t nvme_cmd_status, ret;
+    uint8_t secp = extract32(dw10, 24, 8);
+    uint8_t spsp1 = extract32(dw10, 16, 8);
+    uint8_t spsp0 = extract32(dw10, 8, 8);
+
+    if (!alloc_len) {
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+
+    /* Generate the NVMe transport header */
+    hdr = (StorageSpdmTransportHeader) {
+        .security_protocol = secp,
+        .security_protocol_specific = cpu_to_le16((spsp1 << 8) | spsp0),
+        .length = cpu_to_le32(alloc_len),
+    };
+
+    /* Forward if_recv to the SPDM Server with SPSP0 */
+    spdm_res = spdm_socket_send(n->spdm_socket, SPDM_SOCKET_STORAGE_CMD_IF_RECV,
+                                SPDM_SOCKET_TRANSPORT_TYPE_NVME,
+                                (uint8_t *)&hdr, sizeof(hdr));
+    if (!spdm_res) {
+        return NVME_NO_COMPLETE | NVME_DNR;
+    }
+
+    /* The responder shall ack with message status */
+    recvd = spdm_socket_receive(n->spdm_socket, SPDM_SOCKET_TRANSPORT_TYPE_NVME,
+                                (uint8_t *)&nvme_cmd_status,
+                                SPDM_SOCKET_MAX_MSG_STATUS_LEN);
+
+    nvme_cmd_status = cpu_to_be16(nvme_cmd_status);
+
+    if (recvd < SPDM_SOCKET_MAX_MSG_STATUS_LEN) {
+        return NVME_NO_COMPLETE | NVME_DNR;
+    }
+
+    /* An error here implies the prior if_recv from requester was spurious */
+    if (nvme_cmd_status != NVME_SUCCESS) {
+        return nvme_cmd_status;
+    }
+
+    /* Clear to start receiving data from the server */
+    rsp_spdm_buf = g_malloc0(alloc_len);
+    if (!rsp_spdm_buf) {
+        return NVME_NO_COMPLETE | NVME_DNR;
+    }
+
+    recvd = spdm_socket_receive(n->spdm_socket,
+                                SPDM_SOCKET_TRANSPORT_TYPE_NVME,
+                                rsp_spdm_buf, alloc_len);
+    if (!recvd) {
+        return NVME_NO_COMPLETE | NVME_DNR;
+    }
+
+    ret = nvme_c2h(n, rsp_spdm_buf, MIN(recvd, alloc_len), req);
+
+    if (ret) {
+        return NVME_NO_COMPLETE | NVME_DNR;
+    }
+
+    return NVME_SUCCESS;
+}
+
+static uint16_t nvme_get_sec_prot_info(NvmeCtrl *n, NvmeRequest *req)
+{
+    uint32_t alloc_len = le32_to_cpu(req->cmd.cdw11);
+    uint8_t resp[10] = {
+        /* Support Security Protocol List Length */
+        [6] = 0, /* MSB */
+        [7] = 2, /* LSB */
+        /* Support Security Protocol List */
+        [8] = SFSC_SECURITY_PROT_INFO,
+        [9] = NVME_SEC_PROT_DMTF_SPDM,
+    };
+
+    if (alloc_len < 10) {
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+
+    return nvme_c2h(n, resp, sizeof(resp), req);
+}
+
+/* From controller to host */
+static uint16_t nvme_security_receive(NvmeCtrl *n, NvmeRequest *req)
+{
+    uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
+    uint16_t spsp = extract32(dw10, 8, 16);
+    uint8_t secp = extract32(dw10, 24, 8);
+
+    switch (secp) {
+    case SFSC_SECURITY_PROT_INFO:
+        switch (spsp) {
+        case 0:
+            /* Supported security protocol list */
+            return nvme_get_sec_prot_info(n, req);
+        case 1:
+            /* Certificate data */
+            /* fallthrough */
+        default:
+            return NVME_INVALID_FIELD | NVME_DNR;
+        }
+    case NVME_SEC_PROT_DMTF_SPDM:
+        return nvme_sec_prot_spdm_receive(n, req);
+    default:
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+}
+
 static uint16_t nvme_directive_send(NvmeCtrl *n, NvmeRequest *req)
 {
     return NVME_INVALID_FIELD | NVME_DNR;
@@ -7389,6 +7579,10 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req)
         return nvme_directive_send(n, req);
     case NVME_ADM_CMD_DIRECTIVE_RECV:
         return nvme_directive_receive(n, req);
+    case NVME_ADM_CMD_SECURITY_SEND:
+        return nvme_security_send(n, req);
+    case NVME_ADM_CMD_SECURITY_RECV:
+        return nvme_security_receive(n, req);
     default:
         g_assert_not_reached();
     }
@@ -8824,7 +9018,8 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
     id->mdts = n->params.mdts;
     id->ver = cpu_to_le32(NVME_SPEC_VER);
 
-    oacs = NVME_OACS_NMS | NVME_OACS_FORMAT | NVME_OACS_DIRECTIVES;
+    oacs = NVME_OACS_NMS | NVME_OACS_FORMAT | NVME_OACS_DIRECTIVES |
+           NVME_OACS_SECURITY;
 
     if (n->params.dbcs) {
         oacs |= NVME_OACS_DBCS;
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index b5c9378ea4..67ed562e00 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -461,6 +461,8 @@ static inline const char *nvme_adm_opc_str(uint8_t opc)
     case NVME_ADM_CMD_DIRECTIVE_RECV:   return "NVME_ADM_CMD_DIRECTIVE_RECV";
     case NVME_ADM_CMD_DBBUF_CONFIG:     return "NVME_ADM_CMD_DBBUF_CONFIG";
     case NVME_ADM_CMD_FORMAT_NVM:       return "NVME_ADM_CMD_FORMAT_NVM";
+    case NVME_ADM_CMD_SECURITY_SEND:    return "NVME_ADM_CMD_SECURITY_SEND";
+    case NVME_ADM_CMD_SECURITY_RECV:    return "NVME_ADM_CMD_SECURITY_RECV";
     default:                            return "NVME_ADM_CMD_UNKNOWN";
     }
 }
@@ -648,6 +650,9 @@ typedef struct NvmeCtrl {
     } next_pri_ctrl_cap;    /* These override pri_ctrl_cap after reset */
     uint32_t    dn; /* Disable Normal */
     NvmeAtomic  atomic;
+
+    /* Socket mapping to SPDM over NVMe Security In/Out commands */
+    int spdm_socket;
 } NvmeCtrl;
 
 typedef enum NvmeResetType {
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 358e516e38..9fa2ecaf28 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -1779,6 +1779,21 @@ enum NvmeDirectiveOperations {
     NVME_DIRECTIVE_RETURN_PARAMS = 0x1,
 };
 
+typedef enum SfscSecurityProtocol {
+    SFSC_SECURITY_PROT_INFO = 0x00,
+} SfscSecurityProtocol;
+
+typedef enum NvmeSecurityProtocols {
+    NVME_SEC_PROT_DMTF_SPDM    = 0xE8,
+} NvmeSecurityProtocols;
+
+typedef enum SpdmOperationCodes {
+    SPDM_STORAGE_DISCOVERY      = 0x1, /* Mandatory */
+    SPDM_STORAGE_PENDING_INFO   = 0x2, /* Optional */
+    SPDM_STORAGE_MSG            = 0x5, /* Mandatory */
+    SPDM_STORAGE_SEC_MSG        = 0x6, /* Optional */
+} SpdmOperationCodes;
+
 typedef struct QEMU_PACKED NvmeFdpConfsHdr {
     uint16_t num_confs;
     uint8_t  version;
-- 
2.51.0




* [PATCH v2 4/5] spdm: define SPDM transport enum types
  2025-08-28  7:25 [PATCH v2 0/5] NVMe: Add SPDM over the storage transport support Wilfred Mallawa
                   ` (2 preceding siblings ...)
  2025-08-28  7:25 ` [PATCH v2 3/5] hw/nvme: add NVMe Admin Security SPDM support Wilfred Mallawa
@ 2025-08-28  7:25 ` Wilfred Mallawa
  2025-08-28  7:25 ` [PATCH v2 5/5] hw/nvme: connect SPDM over NVMe Security Send/Recv Wilfred Mallawa
  4 siblings, 0 replies; 9+ messages in thread
From: Wilfred Mallawa @ 2025-08-28  7:25 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Keith Busch, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
	Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
	Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
	Jonathan Cameron, Wilfred Mallawa

From: Wilfred Mallawa <wilfred.mallawa@wdc.com>

SPDM may be used over different transports, such as PCIe Data Object
Exchange (DoE) or Storage, amongst others. This patch defines these
transport types as an enum, together with a qdev property definition, so
that a user-supplied transport type string can be mapped directly to the
respective SPDM transport enum for internal use.
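
As a sketch (not part of this patch), a device carrying an SpdmTransportType
field can expose it with the new property macro; the property array below is
illustrative only, and the PCIDevice `spdm_trans` field is introduced by a
later patch in this series:

static const Property example_props[] = {
    /* Maps "doe"/"nvme"/"mctp"/"scsi"/"unspecified" onto the enum */
    DEFINE_PROP_SPDM_TRANS("spdm_trans", PCIDevice, spdm_trans,
                           SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE),
};

With this in place, a command-line option such as spdm_trans=nvme is
translated to SPDM_SOCKET_TRANSPORT_TYPE_NVME before the device is realized.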

Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
---
 backends/spdm-socket.c       | 23 +++++++++++++++++++++++
 include/system/spdm-socket.h | 23 ++++++++++++++++++-----
 2 files changed, 41 insertions(+), 5 deletions(-)

diff --git a/backends/spdm-socket.c b/backends/spdm-socket.c
index 3d264814df..6943246372 100644
--- a/backends/spdm-socket.c
+++ b/backends/spdm-socket.c
@@ -13,6 +13,9 @@
 #include "qemu/osdep.h"
 #include "system/spdm-socket.h"
 #include "qapi/error.h"
+#include "hw/qdev-properties.h"
+#include "hw/qdev-properties-system.h"
+#include "hw/core/qdev-prop-internal.h"
 
 static bool read_bytes(const int socket, uint8_t *buffer,
                        size_t number_of_bytes)
@@ -248,3 +251,23 @@ void spdm_socket_close(const int socket, uint32_t transport_type)
     send_platform_data(socket, transport_type,
                        SPDM_SOCKET_COMMAND_SHUTDOWN, NULL, 0);
 }
+
+const QEnumLookup SpdmTransport_lookup = {
+    .array = (const char *const[]) {
+        [SPDM_SOCKET_TRANSPORT_TYPE_UNSPEC] = "unspecified",
+        [SPDM_SOCKET_TRANSPORT_TYPE_MCTP] = "mctp",
+        [SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE] = "doe",
+        [SPDM_SOCKET_TRANSPORT_TYPE_SCSI] = "scsi",
+        [SPDM_SOCKET_TRANSPORT_TYPE_NVME] = "nvme",
+    },
+    .size = SPDM_SOCKET_TRANSPORT_TYPE_MAX
+};
+
+const PropertyInfo qdev_prop_spdm_trans = {
+    .type = "SpdmTransportType",
+    .description = "Spdm Transport, doe/nvme/mctp/scsi/unspecified",
+    .enum_table = &SpdmTransport_lookup,
+    .get = qdev_propinfo_get_enum,
+    .set = qdev_propinfo_set_enum,
+    .set_default_value = qdev_propinfo_set_default_value_enum,
+};
diff --git a/include/system/spdm-socket.h b/include/system/spdm-socket.h
index 6c2cb7b926..8fb5f7cf40 100644
--- a/include/system/spdm-socket.h
+++ b/include/system/spdm-socket.h
@@ -110,12 +110,25 @@ typedef struct {
 #define SPDM_SOCKET_COMMAND_UNKOWN                0xFFFF
 #define SPDM_SOCKET_COMMAND_TEST                  0xDEAD
 
-#define SPDM_SOCKET_TRANSPORT_TYPE_MCTP           0x01
-#define SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE        0x02
-#define SPDM_SOCKET_TRANSPORT_TYPE_SCSI           0x03
-#define SPDM_SOCKET_TRANSPORT_TYPE_NVME           0x04
-
 #define SPDM_SOCKET_MAX_MESSAGE_BUFFER_SIZE       0x1200
 #define SPDM_SOCKET_MAX_MSG_STATUS_LEN            0x02
 
+typedef enum SpdmTransportType {
+    SPDM_SOCKET_TRANSPORT_TYPE_UNSPEC = 0,
+    SPDM_SOCKET_TRANSPORT_TYPE_MCTP,
+    SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE,
+    SPDM_SOCKET_TRANSPORT_TYPE_SCSI,
+    SPDM_SOCKET_TRANSPORT_TYPE_NVME,
+    SPDM_SOCKET_TRANSPORT_TYPE_MAX,
+} SpdmTransportType;
+
+extern const PropertyInfo qdev_prop_spdm_trans;
+
+#define DEFINE_PROP_SPDM_TRANS(_name, _state, _field, _default) \
+    DEFINE_PROP_UNSIGNED(_name, _state, _field, _default, \
+                         qdev_prop_spdm_trans, SpdmTransportType)
+#define DEFINE_PROP_SPDM_TRANS_NODEFAULT(_name, _state, _field) \
+    DEFINE_PROP_SPDM_TRANS(_name, _state, _field, \
+                           SPDM_SOCKET_TRANSPORT_TYPE_UNSPEC)
+
 #endif
-- 
2.51.0




* [PATCH v2 5/5] hw/nvme: connect SPDM over NVMe Security Send/Recv
  2025-08-28  7:25 [PATCH v2 0/5] NVMe: Add SPDM over the storage transport support Wilfred Mallawa
                   ` (3 preceding siblings ...)
  2025-08-28  7:25 ` [PATCH v2 4/5] spdm: define SPDM transport enum types Wilfred Mallawa
@ 2025-08-28  7:25 ` Wilfred Mallawa
  4 siblings, 0 replies; 9+ messages in thread
From: Wilfred Mallawa @ 2025-08-28  7:25 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Keith Busch, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
	Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
	Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
	Jonathan Cameron, Wilfred Mallawa

From: Wilfred Mallawa <wilfred.mallawa@wdc.com>

This patch extends the existing NVMe SPDM support, previously limited to
DoE, to also support SPDM over the NVMe Admin Security Send/Receive
commands.

With the new `spdm_trans` argument, users can specify `spdm_trans=nvme`
or `spdm_trans=doe` to select the SPDM transport. SPDM over the NVMe
Admin Security Send/Receive commands is defined in DMTF DSP0286.

Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
---
 docs/specs/spdm.rst         | 10 +++++++--
 hw/nvme/ctrl.c              | 45 ++++++++++++++++++++++++++++---------
 include/hw/pci/pci_device.h |  2 ++
 3 files changed, 44 insertions(+), 13 deletions(-)

diff --git a/docs/specs/spdm.rst b/docs/specs/spdm.rst
index f7de080ff0..dd6cfbbd68 100644
--- a/docs/specs/spdm.rst
+++ b/docs/specs/spdm.rst
@@ -98,7 +98,7 @@ Then you can add this to your QEMU command line:
 .. code-block:: shell
 
     -drive file=blknvme,if=none,id=mynvme,format=raw \
-        -device nvme,drive=mynvme,serial=deadbeef,spdm_port=2323
+        -device nvme,drive=mynvme,serial=deadbeef,spdm_port=2323,spdm_trans=doe
 
 At which point QEMU will try to connect to the SPDM server.
 
@@ -113,7 +113,13 @@ of the default. So the entire QEMU command might look like this
         -append "root=/dev/vda console=ttyS0" \
         -net none -nographic \
         -drive file=blknvme,if=none,id=mynvme,format=raw \
-        -device nvme,drive=mynvme,serial=deadbeef,spdm_port=2323
+        -device nvme,drive=mynvme,serial=deadbeef,spdm_port=2323,spdm_trans=doe
+
+The `spdm_trans` argument defines the underlying transport type that is emulated
+by QEMU. For a PCIe NVMe controller, both "doe" and "nvme" are supported: "doe"
+transports SPDM over the PCIe Data Object Exchange (DOE) extended capability,
+while "nvme" uses the NVMe Admin Security Send/Receive commands to implement
+the SPDM transport.
 
 .. _DMTF:
    https://www.dmtf.org/standards/SPDM
diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 9358c9e540..3ddaea6cd5 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -8932,19 +8932,31 @@ static bool nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp)
 
     pcie_cap_deverr_init(pci_dev);
 
-    /* DOE Initialisation */
+    /* SPDM Initialisation */
     if (pci_dev->spdm_port) {
-        uint16_t doe_offset = n->params.sriov_max_vfs ?
-                                  PCI_CONFIG_SPACE_SIZE + PCI_ARI_SIZEOF
-                                  : PCI_CONFIG_SPACE_SIZE;
+        switch (pci_dev->spdm_trans) {
+        case SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE:
+            uint16_t doe_offset = n->params.sriov_max_vfs ?
+                                    PCI_CONFIG_SPACE_SIZE + PCI_ARI_SIZEOF
+                                    : PCI_CONFIG_SPACE_SIZE;
 
-        pcie_doe_init(pci_dev, &pci_dev->doe_spdm, doe_offset,
-                      doe_spdm_prot, true, 0);
+            pcie_doe_init(pci_dev, &pci_dev->doe_spdm, doe_offset,
+                          doe_spdm_prot, true, 0);
 
-        pci_dev->doe_spdm.spdm_socket = spdm_socket_connect(pci_dev->spdm_port,
-                                                            errp);
+            pci_dev->doe_spdm.spdm_socket =
+                spdm_socket_connect(pci_dev->spdm_port, errp);
 
-        if (pci_dev->doe_spdm.spdm_socket < 0) {
+            if (pci_dev->doe_spdm.spdm_socket < 0) {
+                return false;
+            }
+            break;
+        case SPDM_SOCKET_TRANSPORT_TYPE_NVME:
+            n->spdm_socket = spdm_socket_connect(pci_dev->spdm_port, errp);
+            if (n->spdm_socket < 0) {
+                return false;
+            }
+            break;
+        default:
             return false;
         }
     }
@@ -9235,11 +9247,17 @@ static void nvme_exit(PCIDevice *pci_dev)
         g_free(n->cmb.buf);
     }
 
+    /* Only one of the SPDM sockets below will have been set up */
     if (pci_dev->doe_spdm.spdm_socket > 0) {
         spdm_socket_close(pci_dev->doe_spdm.spdm_socket,
                           SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE);
     }
 
+    if (n->spdm_socket > 0) {
+        spdm_socket_close(n->spdm_socket,
+                          SPDM_SOCKET_TRANSPORT_TYPE_NVME);
+    }
+
     if (n->pmr.dev) {
         host_memory_backend_set_mapped(n->pmr.dev, false);
     }
@@ -9292,6 +9310,7 @@ static const Property nvme_props[] = {
                      false),
     DEFINE_PROP_UINT16("mqes", NvmeCtrl, params.mqes, 0x7ff),
     DEFINE_PROP_UINT16("spdm_port", PCIDevice, spdm_port, 0),
+    DEFINE_PROP_SPDM_TRANS_NODEFAULT("spdm_trans", PCIDevice, spdm_trans),
     DEFINE_PROP_BOOL("ctratt.mem", NvmeCtrl, params.ctratt.mem, false),
     DEFINE_PROP_BOOL("atomic.dn", NvmeCtrl, params.atomic_dn, 0),
     DEFINE_PROP_UINT16("atomic.awun", NvmeCtrl, params.atomic_awun, 0),
@@ -9367,7 +9386,9 @@ static void nvme_pci_write_config(PCIDevice *dev, uint32_t address,
 {
     uint16_t old_num_vfs = pcie_sriov_num_vfs(dev);
 
-    if (pcie_find_capability(dev, PCI_EXT_CAP_ID_DOE)) {
+    /* DOE is only initialised if SPDM over DOE is used */
+    if (pcie_find_capability(dev, PCI_EXT_CAP_ID_DOE) &&
+        dev->spdm_trans == SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE) {
         pcie_doe_write_config(&dev->doe_spdm, address, val, len);
     }
     pci_default_write_config(dev, address, val, len);
@@ -9378,7 +9399,9 @@ static void nvme_pci_write_config(PCIDevice *dev, uint32_t address,
 static uint32_t nvme_pci_read_config(PCIDevice *dev, uint32_t address, int len)
 {
     uint32_t val;
-    if (dev->spdm_port && pcie_find_capability(dev, PCI_EXT_CAP_ID_DOE)) {
+
+    if (dev->spdm_port && pcie_find_capability(dev, PCI_EXT_CAP_ID_DOE) &&
+        (dev->spdm_trans == SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE)) {
         if (pcie_doe_read_config(&dev->doe_spdm, address, len, &val)) {
             return val;
         }
diff --git a/include/hw/pci/pci_device.h b/include/hw/pci/pci_device.h
index eee0338568..88ccea5011 100644
--- a/include/hw/pci/pci_device.h
+++ b/include/hw/pci/pci_device.h
@@ -4,6 +4,7 @@
 #include "hw/pci/pci.h"
 #include "hw/pci/pcie.h"
 #include "hw/pci/pcie_doe.h"
+#include "system/spdm-socket.h"
 
 #define TYPE_PCI_DEVICE "pci-device"
 typedef struct PCIDeviceClass PCIDeviceClass;
@@ -166,6 +167,7 @@ struct PCIDevice {
 
     /* SPDM */
     uint16_t spdm_port;
+    SpdmTransportType spdm_trans;
 
     /* DOE */
     DOECap doe_spdm;
-- 
2.51.0




* Re: [PATCH v2 3/5] hw/nvme: add NVMe Admin Security SPDM support
  2025-08-28  7:25 ` [PATCH v2 3/5] hw/nvme: add NVMe Admin Security SPDM support Wilfred Mallawa
@ 2025-08-30  4:17   ` Keith Busch
  2025-09-01  3:11     ` Wilfred Mallawa
  0 siblings, 1 reply; 9+ messages in thread
From: Keith Busch @ 2025-08-30  4:17 UTC (permalink / raw)
  To: Wilfred Mallawa
  Cc: Alistair Francis, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
	Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
	Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
	Jonathan Cameron, Wilfred Mallawa

On Thu, Aug 28, 2025 at 05:25:31PM +1000, Wilfred Mallawa wrote:
> +static uint16_t nvme_sec_prot_spdm_send(NvmeCtrl *n, NvmeRequest *req)
> +{
> +    StorageSpdmTransportHeader hdr = {0};
> +    g_autofree uint8_t *sec_buf = NULL;
> +    uint32_t transfer_len = le32_to_cpu(req->cmd.cdw11);
> +    uint32_t transport_transfer_len = transfer_len;
> +    uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
> +    uint32_t recvd;
> +    uint16_t nvme_cmd_status, ret;
> +    uint8_t secp = extract32(dw10, 24, 8);
> +    uint8_t spsp1 = extract32(dw10, 16, 8);
> +    uint8_t spsp0 = extract32(dw10, 8, 8);
> +    bool spdm_res;
> +
> +    transport_transfer_len += sizeof(hdr);
> +    if (transport_transfer_len > SPDM_SOCKET_MAX_MESSAGE_BUFFER_SIZE) {
> +        return NVME_NO_COMPLETE | NVME_DNR;
> +    }
> +
> +    /* Generate the NVMe transport header */
> +    hdr.security_protocol = secp;
> +    hdr.security_protocol_specific = cpu_to_le16((spsp1 << 8) | spsp0);
> +    hdr.length = cpu_to_le32(transfer_len);
> +
> +    sec_buf = g_malloc0(transport_transfer_len);
> +    if (!sec_buf) {
> +        return NVME_NO_COMPLETE | NVME_DNR;
> +    }
> +
> +    /* Attach the transport header */
> +    memcpy(sec_buf, &hdr, sizeof(hdr));
> +    ret = nvme_h2c(n, sec_buf + sizeof(hdr), transfer_len, req);
> +    if (ret) {
> +        return NVME_NO_COMPLETE | NVME_DNR;

The "NO_COMPLETE" is a special -1 value so or'ing it with anything
doesn't make sense.

But more importantly, what's supposed to complete it? This is supposed
to be used as a return code when you have an async context to complete
it later, and there doesn't appear to be one here.



* Re: [PATCH v2 3/5] hw/nvme: add NVMe Admin Security SPDM support
  2025-08-30  4:17   ` Keith Busch
@ 2025-09-01  3:11     ` Wilfred Mallawa
  0 siblings, 0 replies; 9+ messages in thread
From: Wilfred Mallawa @ 2025-09-01  3:11 UTC (permalink / raw)
  To: kbusch@kernel.org
  Cc: its@irrelevant.dk, hreitz@redhat.com, Alistair Francis,
	philmd@linaro.org, stefanha@redhat.com, fam@euphon.net,
	qemu-devel@nongnu.org, foss@defmacro.it, kwolf@redhat.com,
	qemu-block@nongnu.org, mst@redhat.com,
	Jonathan.Cameron@huawei.com, marcel.apfelbaum@gmail.com

On Fri, 2025-08-29 at 22:17 -0600, Keith Busch wrote:
> On Thu, Aug 28, 2025 at 05:25:31PM +1000, Wilfred Mallawa wrote:
> > +static uint16_t nvme_sec_prot_spdm_send(NvmeCtrl *n, NvmeRequest
> > *req)
> > +{
> > +    StorageSpdmTransportHeader hdr = {0};
> > +    g_autofree uint8_t *sec_buf = NULL;
> > +    uint32_t transfer_len = le32_to_cpu(req->cmd.cdw11);
> > +    uint32_t transport_transfer_len = transfer_len;
> > +    uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
> > +    uint32_t recvd;
> > +    uint16_t nvme_cmd_status, ret;
> > +    uint8_t secp = extract32(dw10, 24, 8);
> > +    uint8_t spsp1 = extract32(dw10, 16, 8);
> > +    uint8_t spsp0 = extract32(dw10, 8, 8);
> > +    bool spdm_res;
> > +
> > +    transport_transfer_len += sizeof(hdr);
> > +    if (transport_transfer_len >
> > SPDM_SOCKET_MAX_MESSAGE_BUFFER_SIZE) {
> > +        return NVME_NO_COMPLETE | NVME_DNR;
> > +    }
> > +
> > +    /* Generate the NVMe transport header */
> > +    hdr.security_protocol = secp;
> > +    hdr.security_protocol_specific = cpu_to_le16((spsp1 << 8) |
> > spsp0);
> > +    hdr.length = cpu_to_le32(transfer_len);
> > +
> > +    sec_buf = g_malloc0(transport_transfer_len);
> > +    if (!sec_buf) {
> > +        return NVME_NO_COMPLETE | NVME_DNR;
> > +    }
> > +
> > +    /* Attach the transport header */
> > +    memcpy(sec_buf, &hdr, sizeof(hdr));
> > +    ret = nvme_h2c(n, sec_buf + sizeof(hdr), transfer_len, req);
> > +    if (ret) {
> > +        return NVME_NO_COMPLETE | NVME_DNR;
> 
> The "NO_COMPLETE" is a special -1 value so or'ing it with anything
> doesn't make sense.
> 
> But more importantly, what's supposed to complete it? This is
> supposed
> to be used as a return code when you have an async context to
> complete
> it later, and there doesn't appear to be one here.
Okay I totally misinterpreted its use case, and yeah there's no async
context here. I will update the error return values for V3. Thanks for
the feedback!

Regards,
Wilfred


* [PATCH v2 0/5] NVMe: Add SPDM over the storage transport support
@ 2025-09-01  3:47 Wilfred Mallawa
  0 siblings, 0 replies; 9+ messages in thread
From: Wilfred Mallawa @ 2025-09-01  3:47 UTC (permalink / raw)
  To: Alistair Francis
  Cc: Keith Busch, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
	Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
	Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
	Jonathan Cameron, Wilfred Mallawa

From: Wilfred Mallawa <wilfred.mallawa@wdc.com>

This series extends the existing SPDM support in QEMU to support the DSP0286
SPDM Storage Transport [1] for NVMe. The SPDM Storage Transport uses the NVMe
Admin Security Send/Receive commands, so support for these commands has also
been added.

With the addition of a new `spdm_trans` CLI argument for NVMe controllers,
users can specify `spdm_trans=nvme` or `spdm_trans=doe` to select the SPDM
transport. The `doe` option is the current default; `nvme` selects the SPDM
Storage Transport for the controller, where SPDM communication happens over
the NVMe Admin Security Send/Receive commands.

Support for DSP0286 already exists in `libspdm` [2], and support for the QEMU
SPDM server is being upstreamed to `spdm-utils` [3]. This series was tested
using `spdm-utils` as the QEMU SPDM server, with SPDM Storage Transport
support built with `libspdm` v3.8.0, and `spdm-utils` also as the SPDM
requester.

Changes V1 -> V2:
	- spdm_socket_rsp() now uses the new spdm_socket_send/receive()
	  functions. spdm_socket_command_valid() is added to parse the
	  command value in case some bytes were received (result = true) but
	  with an invalid command.

	- Added inline comments to describe fields of
	  StorageSpdmTransportHeader. Checkpatch generates warnings, but lots of
	  existing code does this. The QEMU_PACKED attribute now follows the
	  StorageSpdmTransportHeader struct definition.

	- Use extract32() instead of manual shifting/masking in
	  nvme_sec_prot_spdm_send/recv().

	- Use g_autofree for send/recv buffer allocation
	  in nvme_sec_prot_spdm_send/recv().
	
	- Added explicit fallthrough comment for checking `secp` in
	  nvme_security_receive()
	
	- Added enum support for the SPDM transport type, such that a
	  user-defined transport type string can be mapped to the respective
	  enum for internal use.

Changes V2 -> V3:
	- Replaced the incorrect use of `NVME_NO_COMPLETE` with more
	  appropriate NVMe error codes in Patch [3/5]. Note that DSP0286
	  does not define error codes for transport-level failures.

	- Removed the NULL check after g_malloc0(), which aborts on
	  allocation failure instead.

Wilfred Mallawa (5):
  spdm-socket: add separate send/recv functions
  spdm: add spdm storage transport virtual header
  hw/nvme: add NVMe Admin Security SPDM support
  spdm: define SPDM transport enum types
  hw/nvme: connect SPDM over NVMe Security Send/Recv

 backends/spdm-socket.c       |  79 ++++++++++--
 docs/specs/spdm.rst          |  10 +-
 hw/nvme/ctrl.c               | 233 +++++++++++++++++++++++++++++++++--
 hw/nvme/nvme.h               |   5 +
 include/block/nvme.h         |  15 +++
 include/hw/pci/pci_device.h  |   2 +
 include/system/spdm-socket.h |  66 +++++++++-
 7 files changed, 382 insertions(+), 28 deletions(-)

-- 
2.51.0



