* [PATCH 0/4] NVMe: Add SPDM over the storage transport support
@ 2025-08-26 5:46 Wilfred Mallawa
2025-08-26 5:46 ` [PATCH 1/4] spdm-socket: add separate send/recv functions Wilfred Mallawa
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Wilfred Mallawa @ 2025-08-26 5:46 UTC (permalink / raw)
To: Alistair Francis
Cc: Keith Busch, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
Wilfred Mallawa
From: Wilfred Mallawa <wilfred.mallawa@wdc.com>
This series extends the existing SPDM support in QEMU to support the DSP0286
SPDM Storage Transport [1] for NVMe. The SPDM Storage Transport uses the NVMe
Admin Security Send/Receive commands; as such, support for these commands has
also been added.
With the addition of a new `spdm_trans` CLI argument for NVMe controllers,
users can specify `spdm_trans=nvme` or `spdm_trans=doe` to select the SPDM
transport. The `doe` option is the current default; `nvme` selects the SPDM
Storage Transport for the controller, where SPDM communication happens over
the NVMe Admin Security Send/Receive commands.
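For example, following the docs/specs/spdm.rst example updated in this series,
a controller using the storage transport can be started with:

  -drive file=blknvme,if=none,id=mynvme,format=raw \
  -device nvme,drive=mynvme,serial=deadbeef,spdm_port=2323,spdm_trans=nvme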
Support for DSP0286 already exists in `libspdm` [2] and support for the QEMU
SPDM server is being upstreamed to `spdm-utils` [3]. This series was tested
using `spdm-utils` as the QEMU SPDM server, with SPDM Storage Transport
support built against `libspdm` v3.8.0, and `spdm-utils` also as the SPDM
requester.
[1] https://www.dmtf.org/sites/default/files/standards/documents/DSP0286_1.0.0.pdf
[2] https://github.com/DMTF/libspdm/pull/2827
[3] https://github.com/westerndigitalcorporation/spdm-utils/pull/139
Wilfred Mallawa (4):
spdm-socket: add separate send/recv functions
spdm: add spdm storage transport virtual header
hw/nvme: add NVMe Admin Security SPDM support
hw/nvme: connect SPDM over NVMe Security Send/Recv
backends/spdm-socket.c | 27 +++-
docs/specs/spdm.rst | 10 +-
hw/nvme/ctrl.c | 264 +++++++++++++++++++++++++++++++++--
hw/nvme/nvme.h | 5 +
include/block/nvme.h | 15 ++
include/hw/pci/pci_device.h | 1 +
include/system/spdm-socket.h | 46 ++++++
7 files changed, 351 insertions(+), 17 deletions(-)
--
2.51.0
* [PATCH 1/4] spdm-socket: add separate send/recv functions
2025-08-26 5:46 [PATCH 0/4] NVMe: Add SPDM over the storage transport support Wilfred Mallawa
@ 2025-08-26 5:46 ` Wilfred Mallawa
2025-08-26 9:34 ` Jonathan Cameron via
2025-08-26 5:46 ` [PATCH 2/4] spdm: add spdm storage transport virtual header Wilfred Mallawa
` (2 subsequent siblings)
3 siblings, 1 reply; 9+ messages in thread
From: Wilfred Mallawa @ 2025-08-26 5:46 UTC (permalink / raw)
To: Alistair Francis
Cc: Keith Busch, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
Wilfred Mallawa
From: Wilfred Mallawa <wilfred.mallawa@wdc.com>
This is to support uni-directional transports such as SPDM
over Storage, as specified by the DMTF DSP0286.
Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
---
backends/spdm-socket.c | 27 ++++++++++++++++++++++++---
include/system/spdm-socket.h | 35 +++++++++++++++++++++++++++++++++++
2 files changed, 59 insertions(+), 3 deletions(-)
diff --git a/backends/spdm-socket.c b/backends/spdm-socket.c
index 2c709c68c8..bab1b512c8 100644
--- a/backends/spdm-socket.c
+++ b/backends/spdm-socket.c
@@ -184,6 +184,29 @@ int spdm_socket_connect(uint16_t port, Error **errp)
return client_socket;
}
+uint32_t spdm_socket_receive(const int socket, uint32_t transport_type,
+ void *rsp, uint32_t rsp_len)
+{
+ uint32_t command;
+ bool result;
+
+ result = receive_platform_data(socket, transport_type, &command,
+ (uint8_t *)rsp, &rsp_len);
+
+ if (!result || command == 0) {
+ return 0;
+ }
+
+ return rsp_len;
+}
+
+bool spdm_socket_send(const int socket, uint32_t socket_cmd,
+ uint32_t transport_type, void *req, uint32_t req_len)
+{
+ return send_platform_data(socket, transport_type,
+ socket_cmd, req, req_len);
+}
+
uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
void *req, uint32_t req_len,
void *rsp, uint32_t rsp_len)
@@ -200,12 +223,10 @@ uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
result = receive_platform_data(socket, transport_type, &command,
(uint8_t *)rsp, &rsp_len);
- if (!result) {
+ if (!result || command == 0) {
return 0;
}
- assert(command != 0);
-
return rsp_len;
}
diff --git a/include/system/spdm-socket.h b/include/system/spdm-socket.h
index 5d8bd9aa4e..2b7d03f82d 100644
--- a/include/system/spdm-socket.h
+++ b/include/system/spdm-socket.h
@@ -50,6 +50,35 @@ uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
void *req, uint32_t req_len,
void *rsp, uint32_t rsp_len);
+/**
+ * spdm_socket_receive: Receive a message from an SPDM server
+ * @socket: socket returned from spdm_socket_connect()
+ * @transport_type: SPDM_SOCKET_TRANSPORT_TYPE_* macro
+ * @rsp: response buffer
+ * @rsp_len: response buffer length
+ *
+ * Receives a message from the SPDM server and returns the number of bytes
+ * received or 0 on failure. This can be used to receive a message from the SPDM
+ * server without sending anything first.
+ */
+uint32_t spdm_socket_receive(const int socket, uint32_t transport_type,
+ void *rsp, uint32_t rsp_len);
+
+/**
+ * spdm_socket_send: Send a message to an SPDM server
+ * @socket: socket returned from spdm_socket_connect()
+ * @socket_cmd: socket command type (normal/if_recv/if_send etc...)
+ * @transport_type: SPDM_SOCKET_TRANSPORT_TYPE_* macro
+ * @req: request buffer
+ * @req_len: request buffer length
+ *
+ * Sends platform data to an SPDM server on socket, returns true on success.
+ * The response from the server must then be fetched by using
+ * spdm_socket_receive().
+ */
+bool spdm_socket_send(const int socket, uint32_t socket_cmd,
+ uint32_t transport_type, void *req, uint32_t req_len);
+
/**
* spdm_socket_close: send a shutdown command to the server
* @socket: socket returned from spdm_socket_connect()
@@ -60,6 +89,9 @@ uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
void spdm_socket_close(const int socket, uint32_t transport_type);
#define SPDM_SOCKET_COMMAND_NORMAL 0x0001
+#define SPDM_SOCKET_STORAGE_CMD_IF_SEND 0x0002
+#define SPDM_SOCKET_STORAGE_CMD_IF_RECV 0x0003
+#define SOCKET_SPDM_STORAGE_ACK_STATUS 0x0004
#define SPDM_SOCKET_COMMAND_OOB_ENCAP_KEY_UPDATE 0x8001
#define SPDM_SOCKET_COMMAND_CONTINUE 0xFFFD
#define SPDM_SOCKET_COMMAND_SHUTDOWN 0xFFFE
@@ -68,7 +100,10 @@ void spdm_socket_close(const int socket, uint32_t transport_type);
#define SPDM_SOCKET_TRANSPORT_TYPE_MCTP 0x01
#define SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE 0x02
+#define SPDM_SOCKET_TRANSPORT_TYPE_SCSI 0x03
+#define SPDM_SOCKET_TRANSPORT_TYPE_NVME 0x04
#define SPDM_SOCKET_MAX_MESSAGE_BUFFER_SIZE 0x1200
+#define SPDM_SOCKET_MAX_MSG_STATUS_LEN 0x02
#endif
--
2.51.0
* [PATCH 2/4] spdm: add spdm storage transport virtual header
2025-08-26 5:46 [PATCH 0/4] NVMe: Add SPDM over the storage transport support Wilfred Mallawa
2025-08-26 5:46 ` [PATCH 1/4] spdm-socket: add separate send/recv functions Wilfred Mallawa
@ 2025-08-26 5:46 ` Wilfred Mallawa
2025-08-26 9:38 ` Jonathan Cameron via
2025-08-26 5:46 ` [PATCH 3/4] hw/nvme: add NVMe Admin Security SPDM support Wilfred Mallawa
2025-08-26 5:46 ` [PATCH 4/4] hw/nvme: connect SPDM over NVMe Security Send/Recv Wilfred Mallawa
3 siblings, 1 reply; 9+ messages in thread
From: Wilfred Mallawa @ 2025-08-26 5:46 UTC (permalink / raw)
To: Alistair Francis
Cc: Keith Busch, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
Wilfred Mallawa
From: Wilfred Mallawa <wilfred.mallawa@wdc.com>
This header contains the transport encoding for an SPDM message that
uses the SPDM over Storage transport as defined by the DMTF DSP0286.
Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
---
include/system/spdm-socket.h | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/include/system/spdm-socket.h b/include/system/spdm-socket.h
index 2b7d03f82d..2469424cce 100644
--- a/include/system/spdm-socket.h
+++ b/include/system/spdm-socket.h
@@ -88,6 +88,17 @@ bool spdm_socket_send(const int socket, uint32_t socket_cmd,
*/
void spdm_socket_close(const int socket, uint32_t transport_type);
+/*
+ * Defines the transport encoding for SPDM; this information shall be passed
+ * down to the SPDM server when conforming to the SPDM over Storage standard
+ * as defined by DSP0286.
+ */
+typedef struct QEMU_PACKED {
+ uint8_t security_protocol;
+ uint16_t security_protocol_specific;
+ uint32_t length;
+} StorageSpdmTransportHeader;
+
#define SPDM_SOCKET_COMMAND_NORMAL 0x0001
#define SPDM_SOCKET_STORAGE_CMD_IF_SEND 0x0002
#define SPDM_SOCKET_STORAGE_CMD_IF_RECV 0x0003
--
2.51.0
* [PATCH 3/4] hw/nvme: add NVMe Admin Security SPDM support
2025-08-26 5:46 [PATCH 0/4] NVMe: Add SPDM over the storage transport support Wilfred Mallawa
2025-08-26 5:46 ` [PATCH 1/4] spdm-socket: add separate send/recv functions Wilfred Mallawa
2025-08-26 5:46 ` [PATCH 2/4] spdm: add spdm storage transport virtual header Wilfred Mallawa
@ 2025-08-26 5:46 ` Wilfred Mallawa
2025-08-26 11:13 ` Jonathan Cameron via
2025-08-26 5:46 ` [PATCH 4/4] hw/nvme: connect SPDM over NVMe Security Send/Recv Wilfred Mallawa
3 siblings, 1 reply; 9+ messages in thread
From: Wilfred Mallawa @ 2025-08-26 5:46 UTC (permalink / raw)
To: Alistair Francis
Cc: Keith Busch, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
Wilfred Mallawa
From: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Adds support for the NVMe Admin Security Send/Receive commands, including
support for DMTF's SPDM. The transport binding for SPDM is defined in the
DMTF DSP0286.
Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
---
hw/nvme/ctrl.c | 202 ++++++++++++++++++++++++++++++++++++++++++-
hw/nvme/nvme.h | 5 ++
include/block/nvme.h | 15 ++++
3 files changed, 221 insertions(+), 1 deletion(-)
diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index f5ee6bf260..442144642d 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -282,6 +282,8 @@ static const uint32_t nvme_cse_acs_default[256] = {
[NVME_ADM_CMD_FORMAT_NVM] = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
[NVME_ADM_CMD_DIRECTIVE_RECV] = NVME_CMD_EFF_CSUPP,
[NVME_ADM_CMD_DIRECTIVE_SEND] = NVME_CMD_EFF_CSUPP,
+ [NVME_ADM_CMD_SECURITY_SEND] = NVME_CMD_EFF_CSUPP,
+ [NVME_ADM_CMD_SECURITY_RECV] = NVME_CMD_EFF_CSUPP,
};
static const uint32_t nvme_cse_iocs_nvm_default[256] = {
@@ -7282,6 +7284,199 @@ static uint16_t nvme_dbbuf_config(NvmeCtrl *n, const NvmeRequest *req)
return NVME_SUCCESS;
}
+static uint16_t nvme_sec_prot_spdm_send(NvmeCtrl *n, NvmeRequest *req)
+{
+ StorageSpdmTransportHeader hdr = {0};
+ uint8_t *sec_buf;
+ uint32_t transfer_len = le32_to_cpu(req->cmd.cdw11);
+ uint32_t transport_transfer_len = transfer_len;
+ uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
+ uint32_t recvd;
+ uint16_t nvme_cmd_status;
+ uint16_t ret;
+ uint8_t secp = (dw10 >> 24) & 0xFF;
+ uint8_t spsp1 = (dw10 >> 16) & 0xFF;
+ uint8_t spsp0 = (dw10 >> 8) & 0xFF;
+ bool spdm_res;
+
+ transport_transfer_len += sizeof(hdr);
+ if (transport_transfer_len > SPDM_SOCKET_MAX_MESSAGE_BUFFER_SIZE) {
+ return NVME_NO_COMPLETE | NVME_DNR;
+ }
+
+ /* Generate the NVMe transport header */
+ hdr.security_protocol = secp;
+ hdr.security_protocol_specific = cpu_to_le16((spsp1 << 8) | spsp0);
+ hdr.length = cpu_to_le32(transfer_len);
+
+ sec_buf = g_malloc0(transport_transfer_len);
+ if (!sec_buf) {
+ return NVME_NO_COMPLETE | NVME_DNR;
+ }
+
+ /* Attach the transport header */
+ memcpy(sec_buf, &hdr, sizeof(hdr));
+ ret = nvme_h2c(n, sec_buf + sizeof(hdr), transfer_len, req);
+ if (ret) {
+ g_free(sec_buf);
+ return NVME_NO_COMPLETE | NVME_DNR;
+ }
+
+ spdm_res = spdm_socket_send(n->spdm_socket, SPDM_SOCKET_STORAGE_CMD_IF_SEND,
+ SPDM_SOCKET_TRANSPORT_TYPE_NVME, sec_buf,
+ transport_transfer_len);
+ if (!spdm_res) {
+ g_free(sec_buf);
+ return NVME_NO_COMPLETE | NVME_DNR;
+ }
+
+ /* The responder shall ack with message status */
+ recvd = spdm_socket_receive(n->spdm_socket, SPDM_SOCKET_TRANSPORT_TYPE_NVME,
+ (uint8_t *)&nvme_cmd_status,
+ SPDM_SOCKET_MAX_MSG_STATUS_LEN);
+
+ nvme_cmd_status = cpu_to_be16(nvme_cmd_status);
+
+ if (recvd < SPDM_SOCKET_MAX_MSG_STATUS_LEN) {
+ g_free(sec_buf);
+ return NVME_NO_COMPLETE | NVME_DNR;
+ }
+
+ g_free(sec_buf);
+ return nvme_cmd_status;
+}
+
+/* From host to controller */
+static uint16_t nvme_security_send(NvmeCtrl *n, NvmeRequest *req)
+{
+ uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
+ uint8_t secp = (dw10 >> 24) & 0xff;
+
+ switch (secp) {
+ case NVME_SEC_PROT_DMTF_SPDM:
+ return nvme_sec_prot_spdm_send(n, req);
+ default:
+ /* Unsupported Security Protocol Type */
+ return NVME_INVALID_FIELD | NVME_DNR;
+ }
+
+ return NVME_INVALID_FIELD | NVME_DNR;
+}
+
+static uint16_t nvme_sec_prot_spdm_receive(NvmeCtrl *n, NvmeRequest *req)
+{
+ StorageSpdmTransportHeader hdr = {0};
+ uint8_t *rsp_spdm_buf;
+ uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
+ uint32_t alloc_len = le32_to_cpu(req->cmd.cdw11);
+ uint32_t recvd, spdm_res;
+ uint16_t nvme_cmd_status;
+ uint16_t ret;
+ uint8_t secp = (dw10 >> 24) & 0xFF;
+ uint8_t spsp1 = (dw10 >> 16) & 0xFF;
+ uint8_t spsp0 = (dw10 >> 8) & 0xFF;
+
+ if (!alloc_len) {
+ return NVME_INVALID_FIELD | NVME_DNR;
+ }
+
+ /* Generate the NVMe transport header */
+ hdr.security_protocol = secp;
+ hdr.security_protocol_specific = cpu_to_le16((spsp1 << 8) | spsp0);
+ hdr.length = cpu_to_le32(alloc_len);
+
+ /* Forward if_recv to the SPDM Server with SPSP0 */
+ spdm_res = spdm_socket_send(n->spdm_socket, SPDM_SOCKET_STORAGE_CMD_IF_RECV,
+ SPDM_SOCKET_TRANSPORT_TYPE_NVME,
+ (uint8_t *)&hdr, sizeof(hdr));
+ if (!spdm_res) {
+ return NVME_NO_COMPLETE | NVME_DNR;
+ }
+
+ /* The responder shall ack with message status */
+ recvd = spdm_socket_receive(n->spdm_socket, SPDM_SOCKET_TRANSPORT_TYPE_NVME,
+ (uint8_t *)&nvme_cmd_status,
+ SPDM_SOCKET_MAX_MSG_STATUS_LEN);
+
+ nvme_cmd_status = cpu_to_be16(nvme_cmd_status);
+
+
+ if (recvd < SPDM_SOCKET_MAX_MSG_STATUS_LEN) {
+ return NVME_NO_COMPLETE | NVME_DNR;
+ }
+
+ /* An error here implies the prior if_recv from requester was spurious */
+ if (nvme_cmd_status != NVME_SUCCESS) {
+ return nvme_cmd_status;
+ }
+
+ /* Clear to start receiving data from the server */
+ rsp_spdm_buf = g_malloc0(alloc_len);
+ if (!rsp_spdm_buf) {
+ return NVME_NO_COMPLETE | NVME_DNR;
+ }
+
+ recvd = spdm_socket_receive(n->spdm_socket,
+ SPDM_SOCKET_TRANSPORT_TYPE_NVME,
+ rsp_spdm_buf, alloc_len);
+ if (!recvd) {
+ g_free(rsp_spdm_buf);
+ return NVME_NO_COMPLETE | NVME_DNR;
+ }
+
+ ret = nvme_c2h(n, rsp_spdm_buf, MIN(recvd, alloc_len), req);
+ g_free(rsp_spdm_buf);
+
+ if (ret) {
+ return NVME_NO_COMPLETE | NVME_DNR;
+ }
+
+ return NVME_SUCCESS;
+}
+
+static uint16_t nvme_get_sec_prot_info(NvmeCtrl *n, NvmeRequest *req)
+{
+ uint32_t alloc_len = le32_to_cpu(req->cmd.cdw11);
+ uint8_t resp[10] = {0};
+
+ if (alloc_len < 10) {
+ return NVME_INVALID_FIELD | NVME_DNR;
+ }
+
+ /* Support Security Protocol List Length */
+ resp[6] = 0; /* MSB */
+ resp[7] = 2; /* LSB */
+ /* Support Security Protocol List */
+ resp[8] = SFSC_SECURITY_PROT_INFO;
+ resp[9] = NVME_SEC_PROT_DMTF_SPDM;
+
+ return nvme_c2h(n, resp, sizeof(resp), req);
+}
+
+/* From controller to host */
+static uint16_t nvme_security_receive(NvmeCtrl *n, NvmeRequest *req)
+{
+ uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
+ uint16_t spsp = (dw10 >> 8) & 0xFFFF;
+ uint8_t secp = (dw10 >> 24) & 0xFF;
+
+ switch (secp) {
+ case SFSC_SECURITY_PROT_INFO:
+ switch (spsp) {
+ case 0:
+ /* Supported security protocol list */
+ return nvme_get_sec_prot_info(n, req);
+ case 1:
+ /* Certificate data */
+ default:
+ return NVME_INVALID_FIELD | NVME_DNR;
+ }
+ case NVME_SEC_PROT_DMTF_SPDM:
+ return nvme_sec_prot_spdm_receive(n, req);
+ default:
+ return NVME_INVALID_FIELD | NVME_DNR;
+ }
+}
+
static uint16_t nvme_directive_send(NvmeCtrl *n, NvmeRequest *req)
{
return NVME_INVALID_FIELD | NVME_DNR;
@@ -7389,6 +7584,10 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req)
return nvme_directive_send(n, req);
case NVME_ADM_CMD_DIRECTIVE_RECV:
return nvme_directive_receive(n, req);
+ case NVME_ADM_CMD_SECURITY_SEND:
+ return nvme_security_send(n, req);
+ case NVME_ADM_CMD_SECURITY_RECV:
+ return nvme_security_receive(n, req);
default:
g_assert_not_reached();
}
@@ -8824,7 +9023,8 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
id->mdts = n->params.mdts;
id->ver = cpu_to_le32(NVME_SPEC_VER);
- oacs = NVME_OACS_NMS | NVME_OACS_FORMAT | NVME_OACS_DIRECTIVES;
+ oacs = NVME_OACS_NMS | NVME_OACS_FORMAT | NVME_OACS_DIRECTIVES |
+ NVME_OACS_SECURITY;
if (n->params.dbcs) {
oacs |= NVME_OACS_DBCS;
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index b5c9378ea4..67ed562e00 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -461,6 +461,8 @@ static inline const char *nvme_adm_opc_str(uint8_t opc)
case NVME_ADM_CMD_DIRECTIVE_RECV: return "NVME_ADM_CMD_DIRECTIVE_RECV";
case NVME_ADM_CMD_DBBUF_CONFIG: return "NVME_ADM_CMD_DBBUF_CONFIG";
case NVME_ADM_CMD_FORMAT_NVM: return "NVME_ADM_CMD_FORMAT_NVM";
+ case NVME_ADM_CMD_SECURITY_SEND: return "NVME_ADM_CMD_SECURITY_SEND";
+ case NVME_ADM_CMD_SECURITY_RECV: return "NVME_ADM_CMD_SECURITY_RECV";
default: return "NVME_ADM_CMD_UNKNOWN";
}
}
@@ -648,6 +650,9 @@ typedef struct NvmeCtrl {
} next_pri_ctrl_cap; /* These override pri_ctrl_cap after reset */
uint32_t dn; /* Disable Normal */
NvmeAtomic atomic;
+
+ /* Socket mapping to SPDM over NVMe Security In/Out commands */
+ int spdm_socket;
} NvmeCtrl;
typedef enum NvmeResetType {
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 358e516e38..9fa2ecaf28 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -1779,6 +1779,21 @@ enum NvmeDirectiveOperations {
NVME_DIRECTIVE_RETURN_PARAMS = 0x1,
};
+typedef enum SfscSecurityProtocol {
+ SFSC_SECURITY_PROT_INFO = 0x00,
+} SfscSecurityProtocol;
+
+typedef enum NvmeSecurityProtocols {
+ NVME_SEC_PROT_DMTF_SPDM = 0xE8,
+} NvmeSecurityProtocols;
+
+typedef enum SpdmOperationCodes {
+ SPDM_STORAGE_DISCOVERY = 0x1, /* Mandatory */
+ SPDM_STORAGE_PENDING_INFO = 0x2, /* Optional */
+ SPDM_STORAGE_MSG = 0x5, /* Mandatory */
+ SPDM_STORAGE_SEC_MSG = 0x6, /* Optional */
+} SpdmOperationCodes;
+
typedef struct QEMU_PACKED NvmeFdpConfsHdr {
uint16_t num_confs;
uint8_t version;
--
2.51.0
* [PATCH 4/4] hw/nvme: connect SPDM over NVMe Security Send/Recv
2025-08-26 5:46 [PATCH 0/4] NVMe: Add SPDM over the storage transport support Wilfred Mallawa
` (2 preceding siblings ...)
2025-08-26 5:46 ` [PATCH 3/4] hw/nvme: add NVMe Admin Security SPDM support Wilfred Mallawa
@ 2025-08-26 5:46 ` Wilfred Mallawa
2025-08-26 11:21 ` Jonathan Cameron via
3 siblings, 1 reply; 9+ messages in thread
From: Wilfred Mallawa @ 2025-08-26 5:46 UTC (permalink / raw)
To: Alistair Francis
Cc: Keith Busch, Klaus Jensen, Jesper Devantier, Stefan Hajnoczi,
Fam Zheng, Philippe Mathieu-Daudé, Kevin Wolf, Hanna Reitz,
Michael S . Tsirkin, Marcel Apfelbaum, qemu-devel, qemu-block,
Wilfred Mallawa
From: Wilfred Mallawa <wilfred.mallawa@wdc.com>
This patch extends the existing NVMe SPDM support, which previously covered
only DoE, to also support SPDM over the NVMe Security Send/Recv commands.
With the new definition of the `spdm_trans` argument, users can specify
`spdm_trans=nvme` or `spdm_trans=doe` to select the SPDM transport. SPDM
over the NVMe Security Send/Recv commands is defined in the DMTF DSP0286.
Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
---
docs/specs/spdm.rst | 10 ++++--
hw/nvme/ctrl.c | 62 ++++++++++++++++++++++++++++++-------
include/hw/pci/pci_device.h | 1 +
3 files changed, 60 insertions(+), 13 deletions(-)
diff --git a/docs/specs/spdm.rst b/docs/specs/spdm.rst
index f7de080ff0..dd6cfbbd68 100644
--- a/docs/specs/spdm.rst
+++ b/docs/specs/spdm.rst
@@ -98,7 +98,7 @@ Then you can add this to your QEMU command line:
.. code-block:: shell
-drive file=blknvme,if=none,id=mynvme,format=raw \
- -device nvme,drive=mynvme,serial=deadbeef,spdm_port=2323
+ -device nvme,drive=mynvme,serial=deadbeef,spdm_port=2323,spdm_trans=doe
At which point QEMU will try to connect to the SPDM server.
@@ -113,7 +113,13 @@ of the default. So the entire QEMU command might look like this
-append "root=/dev/vda console=ttyS0" \
-net none -nographic \
-drive file=blknvme,if=none,id=mynvme,format=raw \
- -device nvme,drive=mynvme,serial=deadbeef,spdm_port=2323
+ -device nvme,drive=mynvme,serial=deadbeef,spdm_port=2323,spdm_trans=doe
+
+The ``spdm_trans`` argument defines the underlying transport type that is
+emulated by QEMU. For a PCIe NVMe controller, both "doe" and "nvme" are
+supported: "doe" transports SPDM over the PCIe extended capability Data
+Object Exchange (DOE), while "nvme" uses the NVMe Admin Security Send/Receive
+commands to implement the SPDM transport.
.. _DMTF:
https://www.dmtf.org/standards/SPDM
diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 442144642d..61feb9b35a 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -8849,6 +8849,23 @@ static DOEProtocol doe_spdm_prot[] = {
{ }
};
+static inline uint32_t nvme_get_spdm_trans_type(PCIDevice *pci_dev)
+{
+ if (!pci_dev) {
+ return false;
+ }
+
+ if (!strcmp(pci_dev->spdm_trans, "nvme")) {
+ return SPDM_SOCKET_TRANSPORT_TYPE_NVME;
+ }
+
+ if (!strcmp(pci_dev->spdm_trans, "doe")) {
+ return SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE;
+ }
+
+ return 0;
+}
+
static bool nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp)
{
ERRP_GUARD();
@@ -8937,19 +8954,31 @@ static bool nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp)
pcie_cap_deverr_init(pci_dev);
- /* DOE Initialisation */
+ /* SPDM Initialisation */
if (pci_dev->spdm_port) {
- uint16_t doe_offset = n->params.sriov_max_vfs ?
- PCI_CONFIG_SPACE_SIZE + PCI_ARI_SIZEOF
- : PCI_CONFIG_SPACE_SIZE;
+ switch (nvme_get_spdm_trans_type(pci_dev)) {
+ case SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE: {
+ uint16_t doe_offset = n->params.sriov_max_vfs ?
+ PCI_CONFIG_SPACE_SIZE + PCI_ARI_SIZEOF
+ : PCI_CONFIG_SPACE_SIZE;
- pcie_doe_init(pci_dev, &pci_dev->doe_spdm, doe_offset,
- doe_spdm_prot, true, 0);
+ pcie_doe_init(pci_dev, &pci_dev->doe_spdm, doe_offset,
+ doe_spdm_prot, true, 0);
- pci_dev->doe_spdm.spdm_socket = spdm_socket_connect(pci_dev->spdm_port,
- errp);
+ pci_dev->doe_spdm.spdm_socket =
+ spdm_socket_connect(pci_dev->spdm_port, errp);
- if (pci_dev->doe_spdm.spdm_socket < 0) {
+ if (pci_dev->doe_spdm.spdm_socket < 0) {
+ return false;
+ }
+ break;
+ }
+ case SPDM_SOCKET_TRANSPORT_TYPE_NVME:
+ n->spdm_socket = spdm_socket_connect(pci_dev->spdm_port, errp);
+ if (n->spdm_socket < 0) {
+ return false;
+ }
+ break;
+ default:
return false;
}
}
@@ -9240,11 +9269,17 @@ static void nvme_exit(PCIDevice *pci_dev)
g_free(n->cmb.buf);
}
+ /* Only one of the `spdm_socket` below would have been set up */
if (pci_dev->doe_spdm.spdm_socket > 0) {
spdm_socket_close(pci_dev->doe_spdm.spdm_socket,
SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE);
}
+ if (n->spdm_socket > 0) {
+ spdm_socket_close(n->spdm_socket,
+ SPDM_SOCKET_TRANSPORT_TYPE_NVME);
+ }
+
if (n->pmr.dev) {
host_memory_backend_set_mapped(n->pmr.dev, false);
}
@@ -9297,6 +9332,7 @@ static const Property nvme_props[] = {
false),
DEFINE_PROP_UINT16("mqes", NvmeCtrl, params.mqes, 0x7ff),
DEFINE_PROP_UINT16("spdm_port", PCIDevice, spdm_port, 0),
+ DEFINE_PROP_STRING("spdm_trans", PCIDevice, spdm_trans),
DEFINE_PROP_BOOL("ctratt.mem", NvmeCtrl, params.ctratt.mem, false),
DEFINE_PROP_BOOL("atomic.dn", NvmeCtrl, params.atomic_dn, 0),
DEFINE_PROP_UINT16("atomic.awun", NvmeCtrl, params.atomic_awun, 0),
@@ -9372,7 +9408,9 @@ static void nvme_pci_write_config(PCIDevice *dev, uint32_t address,
{
uint16_t old_num_vfs = pcie_sriov_num_vfs(dev);
- if (pcie_find_capability(dev, PCI_EXT_CAP_ID_DOE)) {
+ /* DOE is only initialised if SPDM over DOE is used */
+ if (pcie_find_capability(dev, PCI_EXT_CAP_ID_DOE) &&
+ nvme_get_spdm_trans_type(dev) == SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE) {
pcie_doe_write_config(&dev->doe_spdm, address, val, len);
}
pci_default_write_config(dev, address, val, len);
@@ -9383,7 +9421,9 @@ static void nvme_pci_write_config(PCIDevice *dev, uint32_t address,
static uint32_t nvme_pci_read_config(PCIDevice *dev, uint32_t address, int len)
{
uint32_t val;
- if (dev->spdm_port && pcie_find_capability(dev, PCI_EXT_CAP_ID_DOE)) {
+
+ if (dev->spdm_port && pcie_find_capability(dev, PCI_EXT_CAP_ID_DOE) &&
+ (nvme_get_spdm_trans_type(dev) == SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE)) {
if (pcie_doe_read_config(&dev->doe_spdm, address, len, &val)) {
return val;
}
diff --git a/include/hw/pci/pci_device.h b/include/hw/pci/pci_device.h
index eee0338568..c98dc24001 100644
--- a/include/hw/pci/pci_device.h
+++ b/include/hw/pci/pci_device.h
@@ -166,6 +166,7 @@ struct PCIDevice {
/* SPDM */
uint16_t spdm_port;
+ char *spdm_trans;
/* DOE */
DOECap doe_spdm;
--
2.51.0
* Re: [PATCH 1/4] spdm-socket: add separate send/recv functions
2025-08-26 5:46 ` [PATCH 1/4] spdm-socket: add separate send/recv functions Wilfred Mallawa
@ 2025-08-26 9:34 ` Jonathan Cameron via
0 siblings, 0 replies; 9+ messages in thread
From: Jonathan Cameron via @ 2025-08-26 9:34 UTC (permalink / raw)
To: Wilfred Mallawa
Cc: Alistair Francis, Keith Busch, Klaus Jensen, Jesper Devantier,
Stefan Hajnoczi, Fam Zheng, Philippe Mathieu-Daudé,
Kevin Wolf, Hanna Reitz, Michael S . Tsirkin, Marcel Apfelbaum,
qemu-devel, qemu-block, Wilfred Mallawa
On Tue, 26 Aug 2025 15:46:27 +1000
Wilfred Mallawa <wilfred.opensource@gmail.com> wrote:
> From: Wilfred Mallawa <wilfred.mallawa@wdc.com>
>
Hi Wilfred,
Great to see this support.
> This is to support uni-directional transports such as SPDM
> over Storage, as specified by the DMTF DSP0286.
Trivial, wrap commit closer to 75 chars.
>
> Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
> ---
> backends/spdm-socket.c | 27 ++++++++++++++++++++++++---
> include/system/spdm-socket.h | 35 +++++++++++++++++++++++++++++++++++
> 2 files changed, 59 insertions(+), 3 deletions(-)
>
> diff --git a/backends/spdm-socket.c b/backends/spdm-socket.c
> index 2c709c68c8..bab1b512c8 100644
> --- a/backends/spdm-socket.c
> +++ b/backends/spdm-socket.c
> @@ -184,6 +184,29 @@ int spdm_socket_connect(uint16_t port, Error **errp)
> return client_socket;
> }
>
> +uint32_t spdm_socket_receive(const int socket, uint32_t transport_type,
> + void *rsp, uint32_t rsp_len)
> +{
> + uint32_t command;
> + bool result;
> +
> + result = receive_platform_data(socket, transport_type, &command,
> + (uint8_t *)rsp, &rsp_len);
> +
> + if (!result || command == 0) {
Comment on why command == 0 is good even if result is a fail.
In the existing code, there is an assert if result is true and command != 0;
why don't we need something similar here? Might be worth adding a comment on that
special case to the existing code as I for one can't remember what it is for :(
Ah I see you modify it below.
> + return 0;
> + }
> +
> + return rsp_len;
> +}
> +
> +bool spdm_socket_send(const int socket, uint32_t socket_cmd,
> + uint32_t transport_type, void *req, uint32_t req_len)
> +{
> + return send_platform_data(socket, transport_type,
> + socket_cmd, req, req_len);
> +}
> +
> uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
> void *req, uint32_t req_len,
> void *rsp, uint32_t rsp_len)
> @@ -200,12 +223,10 @@ uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
>
> result = receive_platform_data(socket, transport_type, &command,
> (uint8_t *)rsp, &rsp_len);
> - if (!result) {
> + if (!result || command == 0) {
Add a comment here as well on 'why'. Or can we have socket_rsp just call spdm_socket_send
+ spdm_socket_receive? That would at least put the comment in just one place.
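Something like this (untested sketch):

uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
                         void *req, uint32_t req_len,
                         void *rsp, uint32_t rsp_len)
{
    /* A normal exchange is a send immediately followed by a receive */
    if (!spdm_socket_send(socket, SPDM_SOCKET_COMMAND_NORMAL,
                          transport_type, req, req_len)) {
        return 0;
    }
    return spdm_socket_receive(socket, transport_type, rsp, rsp_len);
}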
> return 0;
> }
>
> - assert(command != 0);
> -
> return rsp_len;
> }
>
> diff --git a/include/system/spdm-socket.h b/include/system/spdm-socket.h
> index 5d8bd9aa4e..2b7d03f82d 100644
> --- a/include/system/spdm-socket.h
> +++ b/include/system/spdm-socket.h
> @@ -50,6 +50,35 @@ uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
> void *req, uint32_t req_len,
> void *rsp, uint32_t rsp_len);
>
> +/**
> + * spdm_socket_receive: Receive a message from an SPDM server
> + * @socket: socket returned from spdm_socket_connect()
> + * @transport_type: SPDM_SOCKET_TRANSPORT_TYPE_* macro
> + * @rsp: response buffer
> + * @rsp_len: response buffer length
> + *
> + * Receives a message from the SPDM server and returns the number of bytes
> + * received or 0 on failure. This can be used to receive a message from the SPDM
> + * server without sending anything first.
> + */
> +uint32_t spdm_socket_receive(const int socket, uint32_t transport_type,
> + void *rsp, uint32_t rsp_len);
> +
> +/**
> + * spdm_socket_send: Send a message to an SPDM server
> + * @socket: socket returned from spdm_socket_connect()
> + * @socket_cmd: socket command type (normal/if_recv/if_send etc...)
> + * @transport_type: SPDM_SOCKET_TRANSPORT_TYPE_* macro
> + * @req: request buffer
> + * @req_len: request buffer length
> + *
> + * Sends platform data to an SPDM server on socket, returns true on success.
> + * The response from the server must then be fetched by using
> + * spdm_socket_receive().
> + */
> +bool spdm_socket_send(const int socket, uint32_t socket_cmd,
> + uint32_t transport_type, void *req, uint32_t req_len);
> +
> /**
> * spdm_socket_close: send a shutdown command to the server
> * @socket: socket returned from spdm_socket_connect()
> @@ -60,6 +89,9 @@ uint32_t spdm_socket_rsp(const int socket, uint32_t transport_type,
> void spdm_socket_close(const int socket, uint32_t transport_type);
>
> #define SPDM_SOCKET_COMMAND_NORMAL 0x0001
> +#define SPDM_SOCKET_STORAGE_CMD_IF_SEND 0x0002
> +#define SPDM_SOCKET_STORAGE_CMD_IF_RECV 0x0003
> +#define SOCKET_SPDM_STORAGE_ACK_STATUS 0x0004
> #define SPDM_SOCKET_COMMAND_OOB_ENCAP_KEY_UPDATE 0x8001
> #define SPDM_SOCKET_COMMAND_CONTINUE 0xFFFD
> #define SPDM_SOCKET_COMMAND_SHUTDOWN 0xFFFE
> @@ -68,7 +100,10 @@ void spdm_socket_close(const int socket, uint32_t transport_type);
>
> #define SPDM_SOCKET_TRANSPORT_TYPE_MCTP 0x01
> #define SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE 0x02
> +#define SPDM_SOCKET_TRANSPORT_TYPE_SCSI 0x03
> +#define SPDM_SOCKET_TRANSPORT_TYPE_NVME 0x04
>
> #define SPDM_SOCKET_MAX_MESSAGE_BUFFER_SIZE 0x1200
> +#define SPDM_SOCKET_MAX_MSG_STATUS_LEN 0x02
>
> #endif
* Re: [PATCH 2/4] spdm: add spdm storage transport virtual header
2025-08-26 5:46 ` [PATCH 2/4] spdm: add spdm storage transport virtual header Wilfred Mallawa
@ 2025-08-26 9:38 ` Jonathan Cameron via
0 siblings, 0 replies; 9+ messages in thread
From: Jonathan Cameron via @ 2025-08-26 9:38 UTC (permalink / raw)
To: Wilfred Mallawa
Cc: Alistair Francis, Keith Busch, Klaus Jensen, Jesper Devantier,
Stefan Hajnoczi, Fam Zheng, Philippe Mathieu-Daudé,
Kevin Wolf, Hanna Reitz, Michael S . Tsirkin, Marcel Apfelbaum,
qemu-devel, qemu-block, Wilfred Mallawa
On Tue, 26 Aug 2025 15:46:28 +1000
Wilfred Mallawa <wilfred.opensource@gmail.com> wrote:
> From: Wilfred Mallawa <wilfred.mallawa@wdc.com>
>
> This header contains the transport encoding for an SPDM message that
> uses the SPDM over Storage transport as defined by the DMTF DSP0286.
>
> Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
> ---
> include/system/spdm-socket.h | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/include/system/spdm-socket.h b/include/system/spdm-socket.h
> index 2b7d03f82d..2469424cce 100644
> --- a/include/system/spdm-socket.h
> +++ b/include/system/spdm-socket.h
> @@ -88,6 +88,17 @@ bool spdm_socket_send(const int socket, uint32_t socket_cmd,
> */
> void spdm_socket_close(const int socket, uint32_t transport_type);
>
> +/*
> + * Defines the transport encoding for SPDM; this information shall be passed
> + * down to the SPDM server when conforming to the SPDM over Storage standard
> + * as defined by DSP0286.
I'd use /** kernel-doc (I guess qemu-doc) style comments and fully document the parameters.
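Something along the lines of:

/**
 * StorageSpdmTransportHeader:
 * @security_protocol: Security Protocol field (SECP) of the command
 * @security_protocol_specific: SPSP1/SPSP0 fields, little endian
 * @length: length of the SPDM message following this header
 *
 * Transport header passed down to the SPDM server for SPDM over Storage,
 * as defined by DSP0286.
 */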
> + */
> +typedef struct QEMU_PACKED {
Hmm. I've always seen the QEMU_PACKED after the struct definition.
Not sure if there are any 'rules' on this as such but it was there
for the first 5 or so random cases I looked at.
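i.e.

typedef struct {
    ...
} QEMU_PACKED StorageSpdmTransportHeader;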
> + uint8_t security_protocol;
> + uint16_t security_protocol_specific;
> + uint32_t length;
> +} StorageSpdmTransportHeader;
> +
> #define SPDM_SOCKET_COMMAND_NORMAL 0x0001
> #define SPDM_SOCKET_STORAGE_CMD_IF_SEND 0x0002
> #define SPDM_SOCKET_STORAGE_CMD_IF_RECV 0x0003
* Re: [PATCH 3/4] hw/nvme: add NVMe Admin Security SPDM support
2025-08-26 5:46 ` [PATCH 3/4] hw/nvme: add NVMe Admin Security SPDM support Wilfred Mallawa
@ 2025-08-26 11:13 ` Jonathan Cameron via
0 siblings, 0 replies; 9+ messages in thread
From: Jonathan Cameron via @ 2025-08-26 11:13 UTC (permalink / raw)
To: Wilfred Mallawa
Cc: Alistair Francis, Keith Busch, Klaus Jensen, Jesper Devantier,
Stefan Hajnoczi, Fam Zheng, Philippe Mathieu-Daudé,
Kevin Wolf, Hanna Reitz, Michael S . Tsirkin, Marcel Apfelbaum,
qemu-devel, qemu-block, Wilfred Mallawa
On Tue, 26 Aug 2025 15:46:29 +1000
Wilfred Mallawa <wilfred.opensource@gmail.com> wrote:
> From: Wilfred Mallawa <wilfred.mallawa@wdc.com>
>
> Adds support for the NVMe Admin Security Send/Receive commands, including
> support for DMTF's SPDM. The transport binding for SPDM is defined in the
> DMTF DSP0286.
>
> Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
A few suggestions inline that might slightly improve this.
Looks to be in a good state to me.
Jonathan
> ---
> hw/nvme/ctrl.c | 202 ++++++++++++++++++++++++++++++++++++++++++-
> hw/nvme/nvme.h | 5 ++
> include/block/nvme.h | 15 ++++
> 3 files changed, 221 insertions(+), 1 deletion(-)
>
> diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
> index f5ee6bf260..442144642d 100644
> --- a/hw/nvme/ctrl.c
> +++ b/hw/nvme/ctrl.c
> @@ -282,6 +282,8 @@ static const uint32_t nvme_cse_acs_default[256] = {
> [NVME_ADM_CMD_FORMAT_NVM] = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
> [NVME_ADM_CMD_DIRECTIVE_RECV] = NVME_CMD_EFF_CSUPP,
> [NVME_ADM_CMD_DIRECTIVE_SEND] = NVME_CMD_EFF_CSUPP,
> + [NVME_ADM_CMD_SECURITY_SEND] = NVME_CMD_EFF_CSUPP,
> + [NVME_ADM_CMD_SECURITY_RECV] = NVME_CMD_EFF_CSUPP,
> };
>
> static const uint32_t nvme_cse_iocs_nvm_default[256] = {
> @@ -7282,6 +7284,199 @@ static uint16_t nvme_dbbuf_config(NvmeCtrl *n, const NvmeRequest *req)
> return NVME_SUCCESS;
> }
>
> +static uint16_t nvme_sec_prot_spdm_send(NvmeCtrl *n, NvmeRequest *req)
> +{
> + StorageSpdmTransportHeader hdr = {0};
> + uint8_t *sec_buf;
> + uint32_t transfer_len = le32_to_cpu(req->cmd.cdw11);
> + uint32_t transport_transfer_len = transfer_len;
> + uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
> + uint32_t recvd;
> + uint16_t nvme_cmd_status;
> + uint16_t ret;
> + uint8_t secp = (dw10 >> 24) & 0xFF;
> + uint8_t spsp1 = (dw10 >> 16) & 0xFF;
> + uint8_t spsp0 = (dw10 >> 8) & 0xFF;
> + bool spdm_res;
> +
> + transport_transfer_len += sizeof(hdr);
> + if (transport_transfer_len > SPDM_SOCKET_MAX_MESSAGE_BUFFER_SIZE) {
> + return NVME_NO_COMPLETE | NVME_DNR;
> + }
> +
> + /* Generate the NVMe transport header */
> + hdr.security_protocol = secp;
> + hdr.security_protocol_specific = cpu_to_le16((spsp1 << 8) | spsp0);
> + hdr.length = cpu_to_le32(transfer_len);
Similar comments to below apply to this function.
> +
> + sec_buf = g_malloc0(transport_transfer_len);
> + if (!sec_buf) {
> + return NVME_NO_COMPLETE | NVME_DNR;
> + }
> +
> + /* Attach the transport header */
> + memcpy(sec_buf, &hdr, sizeof(hdr));
> + ret = nvme_h2c(n, sec_buf + sizeof(hdr), transfer_len, req);
> + if (ret) {
> + g_free(sec_buf);
> + return NVME_NO_COMPLETE | NVME_DNR;
> + }
> +
> + spdm_res = spdm_socket_send(n->spdm_socket, SPDM_SOCKET_STORAGE_CMD_IF_SEND,
> + SPDM_SOCKET_TRANSPORT_TYPE_NVME, sec_buf,
> + transport_transfer_len);
> + if (!spdm_res) {
> + g_free(sec_buf);
> + return NVME_NO_COMPLETE | NVME_DNR;
> + }
> +
> + /* The responder shall ack with message status */
> + recvd = spdm_socket_receive(n->spdm_socket, SPDM_SOCKET_TRANSPORT_TYPE_NVME,
> + (uint8_t *)&nvme_cmd_status,
> + SPDM_SOCKET_MAX_MSG_STATUS_LEN);
> +
> + nvme_cmd_status = cpu_to_be16(nvme_cmd_status);
> +
> + if (recvd < SPDM_SOCKET_MAX_MSG_STATUS_LEN) {
> + g_free(sec_buf);
> + return NVME_NO_COMPLETE | NVME_DNR;
> + }
> +
> + g_free(sec_buf);
> + return nvme_cmd_status;
> +}
> +
> +/* From host to controller */
> +static uint16_t nvme_security_send(NvmeCtrl *n, NvmeRequest *req)
> +{
> + uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
> + uint8_t secp = (dw10 >> 24) & 0xff;
> +
> + switch (secp) {
> + case NVME_SEC_PROT_DMTF_SPDM:
> + return nvme_sec_prot_spdm_send(n, req);
> + default:
> + /* Unsupported Security Protocol Type */
> + return NVME_INVALID_FIELD | NVME_DNR;
> + }
> +
> + return NVME_INVALID_FIELD | NVME_DNR;
> +}
> +
> +static uint16_t nvme_sec_prot_spdm_receive(NvmeCtrl *n, NvmeRequest *req)
> +{
> + StorageSpdmTransportHeader hdr = {0};
> + uint8_t *rsp_spdm_buf;
> + uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
> + uint32_t alloc_len = le32_to_cpu(req->cmd.cdw11);
> + uint32_t recvd, spdm_res;
> + uint16_t nvme_cmd_status;
> + uint16_t ret;
Maybe combine with previous line.
> + uint8_t secp = (dw10 >> 24) & 0xFF;
> + uint8_t spsp1 = (dw10 >> 16) & 0xFF;
> + uint8_t spsp0 = (dw10 >> 8) & 0xFF;
Perhaps extract32()
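i.e. something like:

    uint8_t secp = extract32(dw10, 24, 8);
    uint8_t spsp1 = extract32(dw10, 16, 8);
    uint8_t spsp0 = extract32(dw10, 8, 8);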
> +
> + if (!alloc_len) {
> + return NVME_INVALID_FIELD | NVME_DNR;
> + }
> +
> + /* Generate the NVMe transport header */
Could do
hdr = (StorageSpdmTransportHeader) {
.security_protocol = secp,
.security_protocol_specific = cpu_to_le...
.length = cpu_to_le32(alloc_len),
};
and skip the {0} above. Up to you, but I find this style a little more
readable, as one can see here that the other fields are all zeroed rather
than having to look above.
> + hdr.security_protocol = secp;
> + hdr.security_protocol_specific = cpu_to_le16((spsp1 << 8) | spsp0);
> + hdr.length = cpu_to_le32(alloc_len);
> +
> + /* Forward if_recv to the SPDM Server with SPSP0 */
> + spdm_res = spdm_socket_send(n->spdm_socket, SPDM_SOCKET_STORAGE_CMD_IF_RECV,
> + SPDM_SOCKET_TRANSPORT_TYPE_NVME,
> + (uint8_t *)&hdr, sizeof(hdr));
> + if (!spdm_res) {
> + return NVME_NO_COMPLETE | NVME_DNR;
> + }
> +
> + /* The responder shall ack with message status */
> + recvd = spdm_socket_receive(n->spdm_socket, SPDM_SOCKET_TRANSPORT_TYPE_NVME,
> + (uint8_t *)&nvme_cmd_status,
> + SPDM_SOCKET_MAX_MSG_STATUS_LEN);
> +
> + nvme_cmd_status = cpu_to_be16(nvme_cmd_status);
> +
Single blank line enough.
> +
> + if (recvd < SPDM_SOCKET_MAX_MSG_STATUS_LEN) {
> + return NVME_NO_COMPLETE | NVME_DNR;
> + }
> +
> + /* An error here implies the prior if_recv from requester was spurious */
> + if (nvme_cmd_status != NVME_SUCCESS) {
> + return nvme_cmd_status;
> + }
> +
> + /* Clear to start receiving data from the server */
> + rsp_spdm_buf = g_malloc0(alloc_len);
maybe
g_autofree uint8_t *rsp_spdm_buf = g_malloc0(alloc_len);
and drop explicit frees later.
> + if (!rsp_spdm_buf) {
> + return NVME_NO_COMPLETE | NVME_DNR;
> + }
> +
> + recvd = spdm_socket_receive(n->spdm_socket,
> + SPDM_SOCKET_TRANSPORT_TYPE_NVME,
> + rsp_spdm_buf, alloc_len);
> + if (!recvd) {
> + g_free(rsp_spdm_buf);
> + return NVME_NO_COMPLETE | NVME_DNR;
> + }
> +
> + ret = nvme_c2h(n, rsp_spdm_buf, MIN(recvd, alloc_len), req);
> + g_free(rsp_spdm_buf);
> +
> + if (ret) {
> + return NVME_NO_COMPLETE | NVME_DNR;
> + }
> +
> + return NVME_SUCCESS;
> +}
> +
> +static uint16_t nvme_get_sec_prot_info(NvmeCtrl *n, NvmeRequest *req)
> +{
> + uint32_t alloc_len = le32_to_cpu(req->cmd.cdw11);
> + uint8_t resp[10] = {0};
Could do
uint8_t resp[10] = {
/* Support Security Protocol List Length */
[6] = 0,
[7] = 2,
etc but not particularly important.
> +
> + if (alloc_len < 10) {
> + return NVME_INVALID_FIELD | NVME_DNR;
> + }
> +
> + /* Support Security Protocol List Length */
> + resp[6] = 0; /* MSB */
> + resp[7] = 2; /* LSB */
> + /* Support Security Protocol List */
> + resp[8] = SFSC_SECURITY_PROT_INFO;
> + resp[9] = NVME_SEC_PROT_DMTF_SPDM;
> +
> + return nvme_c2h(n, resp, sizeof(resp), req);
> +}
> +
> +/* From controller to host */
> +static uint16_t nvme_security_receive(NvmeCtrl *n, NvmeRequest *req)
> +{
> + uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
> + uint16_t spsp = (dw10 >> 8) & 0xFFFF;
> + uint8_t secp = (dw10 >> 24) & 0xFF;
Perhaps use extract32().
> +
> + switch (secp) {
> + case SFSC_SECURITY_PROT_INFO:
> + switch (spsp) {
> + case 0:
> + /* Supported security protocol list */
> + return nvme_get_sec_prot_info(n, req);
> + case 1:
> + /* Certificate data */
Even though I guess this is temporary, this probably should have
an explicit fall-through marking.
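e.g.

    case 1:
        /* Certificate data */
        /* fall through */
    default:
        return NVME_INVALID_FIELD | NVME_DNR;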
> + default:
> + return NVME_INVALID_FIELD | NVME_DNR;
> + }
> + case NVME_SEC_PROT_DMTF_SPDM:
> + return nvme_sec_prot_spdm_receive(n, req);
> + default:
> + return NVME_INVALID_FIELD | NVME_DNR;
> + }
> +}
* Re: [PATCH 4/4] hw/nvme: connect SPDM over NVMe Security Send/Recv
2025-08-26 5:46 ` [PATCH 4/4] hw/nvme: connect SPDM over NVMe Security Send/Recv Wilfred Mallawa
@ 2025-08-26 11:21 ` Jonathan Cameron via
0 siblings, 0 replies; 9+ messages in thread
From: Jonathan Cameron via @ 2025-08-26 11:21 UTC (permalink / raw)
To: Wilfred Mallawa
Cc: Alistair Francis, Keith Busch, Klaus Jensen, Jesper Devantier,
Stefan Hajnoczi, Fam Zheng, Philippe Mathieu-Daudé,
Kevin Wolf, Hanna Reitz, Michael S . Tsirkin, Marcel Apfelbaum,
qemu-devel, qemu-block, Wilfred Mallawa
On Tue, 26 Aug 2025 15:46:30 +1000
Wilfred Mallawa <wilfred.opensource@gmail.com> wrote:
> From: Wilfred Mallawa <wilfred.mallawa@wdc.com>
>
> This patch extends the existing NVMe SPDM support, which previously covered
> only DoE, to also support SPDM over the NVMe Security Send/Recv commands.
>
> With the new definition of the `spdm_trans` argument, users can specify
> `spdm_trans=nvme` or `spdm_trans=doe` to select the SPDM transport. SPDM
> over the NVMe Security Send/Recv commands is defined in the DMTF DSP0286.
>
> Signed-off-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
A few comments inline.
Jonathan
> diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
> index 442144642d..61feb9b35a 100644
> --- a/hw/nvme/ctrl.c
> +++ b/hw/nvme/ctrl.c
> @@ -8849,6 +8849,23 @@ static DOEProtocol doe_spdm_prot[] = {
> { }
> };
>
> +static inline uint32_t nvme_get_spdm_trans_type(PCIDevice *pci_dev)
> +{
> + if (!pci_dev) {
> + return false;
It's a uint32_t, false doesn't make sense as a return value.
> + }
> +
> + if (!strcmp(pci_dev->spdm_trans, "nvme")) {
> + return SPDM_SOCKET_TRANSPORT_TYPE_NVME;
> + }
> +
> + if (!strcmp(pci_dev->spdm_trans, "doe")) {
> + return SPDM_SOCKET_TRANSPORT_TYPE_PCI_DOE;
> + }
> +
> + return 0;
> +}
> +
> @@ -9297,6 +9332,7 @@ static const Property nvme_props[] = {
> false),
> DEFINE_PROP_UINT16("mqes", NvmeCtrl, params.mqes, 0x7ff),
> DEFINE_PROP_UINT16("spdm_port", PCIDevice, spdm_port, 0),
> + DEFINE_PROP_STRING("spdm_trans", PCIDevice, spdm_trans),
There is enum support (look for qdev_propinfo_get_enum() usage),
but it seems not that much used, and there are examples of strings
used for things that are enums underneath.
> DEFINE_PROP_BOOL("ctratt.mem", NvmeCtrl, params.ctratt.mem, false),