* [PATCH v3 0/4] hw/nvme: FDP and SR-IOV enhancements
@ 2024-05-08 12:31 Minwoo Im
From: Minwoo Im @ 2024-05-08 12:31 UTC (permalink / raw)
To: Klaus Jensen, Keith Busch; +Cc: qemu-devel, qemu-block, Minwoo Im, Minwoo Im
Hello,
This patchset has been rebased on the latest master, and the 3rd patch has
been replaced with one that allocates a dynamic array for the secondary
controller list based on the maximum number of VFs (sriov_max_vfs) rather
than a statically sized array, as Klaus suggested. The rest of the patchset
is the same as the previous version.
This patchset has been tested with more than 127 VFs, using the following
QEMU device configuration and a simple script:
-device nvme-subsys,id=subsys0 \
-device ioh3420,id=rp2,multifunction=on,chassis=12 \
-device nvme,serial=foo,id=nvme0,bus=rp2,subsys=subsys0,mdts=9,msix_qsize=130,max_ioqpairs=260,sriov_max_vfs=129,sriov_vq_flexible=258,sriov_vi_flexible=129 \
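(Assuming the flexible resources above are meant to be split evenly across
the 129 VFs, the arithmetic behind these numbers appears to be:
sriov_vq_flexible=258 gives 258 / 129 = 2 VQ resources per VF and
sriov_vi_flexible=129 gives 1 VI resource per VF, matching the -n 2 / -n 1
values passed to nvme virt-mgmt in the script below; max_ioqpairs=260 and
msix_qsize=130 presumably leave 2 private queue pairs and 1 interrupt vector
for the PF itself.)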
$ cat nvme-enable-vfs.sh
#!/bin/bash
nr_vfs=129
# Assign flexible resources to each (still offline) secondary controller:
#   -a 8: Secondary Controller Assign
#   -r 0: VQ resources (2 per VF), -r 1: VI resources (1 per VF)
for (( i=1; i<=$nr_vfs; i++ ))
do
    nvme virt-mgmt /dev/nvme0 -c $i -r 0 -a 8 -n 2
    nvme virt-mgmt /dev/nvme0 -c $i -r 1 -a 8 -n 1
done
bdf="0000:01:00.0"
sysfs="/sys/bus/pci/devices/$bdf"
nvme="/sys/bus/pci/drivers/nvme"
# Create the VFs without autoprobing a driver for them.
echo 0 > $sysfs/sriov_drivers_autoprobe
echo $nr_vfs > $sysfs/sriov_numvfs
# Bring each secondary controller online (-a 9) and bind the nvme driver
# to its VF.
for (( i=1; i<=$nr_vfs; i++ ))
do
    nvme virt-mgmt /dev/nvme0 -c $i -a 9
    echo "nvme" > $sysfs/virtfn$(($i-1))/driver_override
    bdf="$(basename $(readlink $sysfs/virtfn$(($i-1))))"
    echo $bdf > $nvme/bind
done
Thanks,
v3:
- Replace the [3/4] patch with one that allocates a dynamic array for the
  secondary controller list rather than a static array sized for the
  maximum number of VFs to support (suggested by Klaus).
v2:
- Added the [2/4] commit to fix a crash due to entry overflow.
Minwoo Im (4):
hw/nvme: add Identify Endurance Group List
hw/nvme: separate identify data for sec. ctrl list
hw/nvme: Allocate sec-ctrl-list as a dynamic array
hw/nvme: Expand VI/VQ resource to uint32
hw/nvme/ctrl.c | 59 +++++++++++++++++++++++++++-----------------
hw/nvme/nvme.h | 19 +++++++-------
hw/nvme/subsys.c | 10 +++++---
include/block/nvme.h | 1 +
4 files changed, 54 insertions(+), 35 deletions(-)
--
2.34.1
* [PATCH v3 1/4] hw/nvme: add Identify Endurance Group List
From: Minwoo Im @ 2024-05-08 12:31 UTC (permalink / raw)
To: Klaus Jensen, Keith Busch; +Cc: qemu-devel, qemu-block, Minwoo Im, Klaus Jensen
From: Minwoo Im <minwoo.im@samsung.com>
Commit 73064edfb864 ("hw/nvme: flexible data placement emulation")
introduced the NVMe FDP feature to nvme-subsys and nvme-ctrl with a
single endurance group, #1, supported. This means the controller should
return proper identify data to the host for Identify Endurance Group List
(CNS 19h), even if only for endurance group #1. This patch allows host
applications to query which endurance group is available and utilize FDP
through that endurance group.
Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
---
hw/nvme/ctrl.c | 22 ++++++++++++++++++++++
include/block/nvme.h | 1 +
2 files changed, 23 insertions(+)
diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 127c3d238346..18672f66193f 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -5629,6 +5629,26 @@ static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req,
return nvme_c2h(n, list, data_len, req);
}
+static uint16_t nvme_endurance_group_list(NvmeCtrl *n, NvmeRequest *req)
+{
+ uint16_t list[NVME_CONTROLLER_LIST_SIZE] = {};
+ uint16_t *nr_ids = &list[0];
+ uint16_t *ids = &list[1];
+ uint16_t endgid = le32_to_cpu(req->cmd.cdw11) & 0xffff;
+
+ /*
+ * The current nvme-subsys only supports Endurance Group #1.
+ */
+ if (!endgid) {
+ *nr_ids = 1;
+ ids[0] = 1;
+ } else {
+ *nr_ids = 0;
+ }
+
+ return nvme_c2h(n, list, sizeof(list), req);
+}
+
static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
{
NvmeNamespace *ns;
@@ -5744,6 +5764,8 @@ static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req)
return nvme_identify_nslist(n, req, false);
case NVME_ID_CNS_CS_NS_ACTIVE_LIST:
return nvme_identify_nslist_csi(n, req, true);
+ case NVME_ID_CNS_ENDURANCE_GROUP_LIST:
+ return nvme_endurance_group_list(n, req);
case NVME_ID_CNS_CS_NS_PRESENT_LIST:
return nvme_identify_nslist_csi(n, req, false);
case NVME_ID_CNS_NS_DESCR_LIST:
diff --git a/include/block/nvme.h b/include/block/nvme.h
index bb231d0b9ad0..7c77d38174a7 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -1074,6 +1074,7 @@ enum NvmeIdCns {
NVME_ID_CNS_CTRL_LIST = 0x13,
NVME_ID_CNS_PRIMARY_CTRL_CAP = 0x14,
NVME_ID_CNS_SECONDARY_CTRL_LIST = 0x15,
+ NVME_ID_CNS_ENDURANCE_GROUP_LIST = 0x19,
NVME_ID_CNS_CS_NS_PRESENT_LIST = 0x1a,
NVME_ID_CNS_CS_NS_PRESENT = 0x1b,
NVME_ID_CNS_IO_COMMAND_SET = 0x1c,
--
2.34.1
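A quick way to exercise the new CNS value from a Linux guest is a plain
Identify admin passthrough; the sketch below is hypothetical test code (not
part of this series), assuming /dev/nvme0 is the emulated controller. If
your nvme-cli has a list-endgrp subcommand, it issues the same Identify.

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/nvme_ioctl.h>

    int main(void)
    {
        uint8_t buf[4096] = { 0 };
        struct nvme_admin_cmd cmd = {
            .opcode   = 0x06,             /* Identify */
            .addr     = (uintptr_t)buf,
            .data_len = sizeof(buf),
            .cdw10    = 0x19,             /* CNS 19h: Endurance Group List */
            .cdw11    = 0,                /* list endurance group ids greater than this */
        };
        uint16_t nr = 0, id = 0;
        int fd = open("/dev/nvme0", O_RDONLY);

        if (fd < 0 || ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
            perror("identify endurance group list");
            return 1;
        }

        memcpy(&nr, buf, sizeof(nr));     /* number of identifiers */
        memcpy(&id, buf + 2, sizeof(id)); /* first endurance group id */
        printf("%u endurance group(s), first id %u\n", nr, id);
        return 0;
    }

With this series applied, the controller should report a single endurance
group with identifier 1.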
* [PATCH v3 2/4] hw/nvme: separate identify data for sec. ctrl list
From: Minwoo Im @ 2024-05-08 12:31 UTC (permalink / raw)
To: Klaus Jensen, Keith Busch; +Cc: qemu-devel, qemu-block, Minwoo Im
From: Minwoo Im <minwoo.im@samsung.com>
The secondary controller list for virtualization has been managed through
the Identify Secondary Controller List data structure (NvmeSecCtrlList),
which can hold at most 127 secondary controller entries. This has not been
a problem so far because NVME_MAX_VFS has been 127.
This patch separates the identify data itself from the actual secondary
controller list managed by the controller, so that the following patch can
support more than 127 secondary controllers. It reuses the NvmeSecCtrlEntry
structure to manage all possible secondary controllers and copies entries
into the identify data structure when the command comes in.
Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
---
hw/nvme/ctrl.c | 21 ++++++++++-----------
hw/nvme/nvme.h | 14 ++++++++------
hw/nvme/subsys.c | 8 ++++----
3 files changed, 22 insertions(+), 21 deletions(-)
diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 18672f66193f..7cf1e8e384b7 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -219,7 +219,6 @@
#define NVME_TEMPERATURE_CRITICAL 0x175
#define NVME_NUM_FW_SLOTS 1
#define NVME_DEFAULT_MAX_ZA_SIZE (128 * KiB)
-#define NVME_MAX_VFS 127
#define NVME_VF_RES_GRANULARITY 1
#define NVME_VF_OFFSET 0x1
#define NVME_VF_STRIDE 1
@@ -5480,14 +5479,14 @@ static uint16_t nvme_identify_sec_ctrl_list(NvmeCtrl *n, NvmeRequest *req)
NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
uint16_t pri_ctrl_id = le16_to_cpu(n->pri_ctrl_cap.cntlid);
uint16_t min_id = le16_to_cpu(c->ctrlid);
- uint8_t num_sec_ctrl = n->sec_ctrl_list.numcntl;
+ uint8_t num_sec_ctrl = n->nr_sec_ctrls;
NvmeSecCtrlList list = {0};
uint8_t i;
for (i = 0; i < num_sec_ctrl; i++) {
- if (n->sec_ctrl_list.sec[i].scid >= min_id) {
- list.numcntl = num_sec_ctrl - i;
- memcpy(&list.sec, n->sec_ctrl_list.sec + i,
+ if (n->sec_ctrl_list[i].scid >= min_id) {
+ list.numcntl = MIN(num_sec_ctrl - i, 127);
+ memcpy(&list.sec, n->sec_ctrl_list + i,
list.numcntl * sizeof(NvmeSecCtrlEntry));
break;
}
@@ -7144,8 +7143,8 @@ static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst)
if (n->params.sriov_max_vfs) {
if (!pci_is_vf(pci_dev)) {
- for (i = 0; i < n->sec_ctrl_list.numcntl; i++) {
- sctrl = &n->sec_ctrl_list.sec[i];
+ for (i = 0; i < n->nr_sec_ctrls; i++) {
+ sctrl = &n->sec_ctrl_list[i];
nvme_virt_set_state(n, le16_to_cpu(sctrl->scid), false);
}
}
@@ -7934,7 +7933,7 @@ static bool nvme_check_params(NvmeCtrl *n, Error **errp)
static void nvme_init_state(NvmeCtrl *n)
{
NvmePriCtrlCap *cap = &n->pri_ctrl_cap;
- NvmeSecCtrlList *list = &n->sec_ctrl_list;
+ NvmeSecCtrlEntry *list = n->sec_ctrl_list;
NvmeSecCtrlEntry *sctrl;
PCIDevice *pci = PCI_DEVICE(n);
uint8_t max_vfs;
@@ -7959,9 +7958,9 @@ static void nvme_init_state(NvmeCtrl *n)
n->aer_reqs = g_new0(NvmeRequest *, n->params.aerl + 1);
QTAILQ_INIT(&n->aer_queue);
- list->numcntl = max_vfs;
+ n->nr_sec_ctrls = max_vfs;
for (i = 0; i < max_vfs; i++) {
- sctrl = &list->sec[i];
+ sctrl = &list[i];
sctrl->pcid = cpu_to_le16(n->cntlid);
sctrl->vfn = cpu_to_le16(i + 1);
}
@@ -8534,7 +8533,7 @@ static void nvme_sriov_post_write_config(PCIDevice *dev, uint16_t old_num_vfs)
int i;
for (i = pcie_sriov_num_vfs(dev); i < old_num_vfs; i++) {
- sctrl = &n->sec_ctrl_list.sec[i];
+ sctrl = &n->sec_ctrl_list[i];
nvme_virt_set_state(n, le16_to_cpu(sctrl->scid), false);
}
}
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index bed8191bd5fd..485b42c104ea 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -26,6 +26,7 @@
#define NVME_MAX_CONTROLLERS 256
#define NVME_MAX_NAMESPACES 256
+#define NVME_MAX_VFS 127
#define NVME_EUI64_DEFAULT ((uint64_t)0x5254000000000000)
#define NVME_FDP_MAX_EVENTS 63
#define NVME_FDP_MAXPIDS 128
@@ -612,7 +613,8 @@ typedef struct NvmeCtrl {
} features;
NvmePriCtrlCap pri_ctrl_cap;
- NvmeSecCtrlList sec_ctrl_list;
+ uint32_t nr_sec_ctrls;
+ NvmeSecCtrlEntry sec_ctrl_list[NVME_MAX_VFS];
struct {
uint16_t vqrfap;
uint16_t virfap;
@@ -662,7 +664,7 @@ static inline NvmeSecCtrlEntry *nvme_sctrl(NvmeCtrl *n)
NvmeCtrl *pf = NVME(pcie_sriov_get_pf(pci_dev));
if (pci_is_vf(pci_dev)) {
- return &pf->sec_ctrl_list.sec[pcie_sriov_vf_number(pci_dev)];
+ return &pf->sec_ctrl_list[pcie_sriov_vf_number(pci_dev)];
}
return NULL;
@@ -671,12 +673,12 @@ static inline NvmeSecCtrlEntry *nvme_sctrl(NvmeCtrl *n)
static inline NvmeSecCtrlEntry *nvme_sctrl_for_cntlid(NvmeCtrl *n,
uint16_t cntlid)
{
- NvmeSecCtrlList *list = &n->sec_ctrl_list;
+ NvmeSecCtrlEntry *list = n->sec_ctrl_list;
uint8_t i;
- for (i = 0; i < list->numcntl; i++) {
- if (le16_to_cpu(list->sec[i].scid) == cntlid) {
- return &list->sec[i];
+ for (i = 0; i < n->nr_sec_ctrls; i++) {
+ if (le16_to_cpu(list[i].scid) == cntlid) {
+ return &list[i];
}
}
diff --git a/hw/nvme/subsys.c b/hw/nvme/subsys.c
index d30bb8bfd5b4..561ed04a5317 100644
--- a/hw/nvme/subsys.c
+++ b/hw/nvme/subsys.c
@@ -17,13 +17,13 @@
static int nvme_subsys_reserve_cntlids(NvmeCtrl *n, int start, int num)
{
NvmeSubsystem *subsys = n->subsys;
- NvmeSecCtrlList *list = &n->sec_ctrl_list;
+ NvmeSecCtrlEntry *list = n->sec_ctrl_list;
NvmeSecCtrlEntry *sctrl;
int i, cnt = 0;
for (i = start; i < ARRAY_SIZE(subsys->ctrls) && cnt < num; i++) {
if (!subsys->ctrls[i]) {
- sctrl = &list->sec[cnt];
+ sctrl = &list[cnt];
sctrl->scid = cpu_to_le16(i);
subsys->ctrls[i] = SUBSYS_SLOT_RSVD;
cnt++;
@@ -36,12 +36,12 @@ static int nvme_subsys_reserve_cntlids(NvmeCtrl *n, int start, int num)
static void nvme_subsys_unreserve_cntlids(NvmeCtrl *n)
{
NvmeSubsystem *subsys = n->subsys;
- NvmeSecCtrlList *list = &n->sec_ctrl_list;
+ NvmeSecCtrlEntry *list = n->sec_ctrl_list;
NvmeSecCtrlEntry *sctrl;
int i, cntlid;
for (i = 0; i < n->params.sriov_max_vfs; i++) {
- sctrl = &list->sec[i];
+ sctrl = &list[i];
cntlid = le16_to_cpu(sctrl->scid);
if (cntlid) {
--
2.34.1
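A note on the MIN(..., 127) clamp introduced above: the Identify Secondary
Controller List returned to the host is a single 4096-byte data structure,
so (per my reading of the spec) it can never carry more than 127 entries per
invocation, no matter how many secondary controllers exist:

    byte  0        : number of identifiers
    bytes 1..31    : reserved
    bytes 32..4095 : Secondary Controller Entries, 32 bytes each
                     -> (4096 - 32) / 32 = 127 entries max

A host with more than 127 secondary controllers pages through the list by
re-issuing the command with CNTID set past the last identifier it received,
which the existing min_id handling in nvme_identify_sec_ctrl_list() already
supports.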
* [PATCH v3 3/4] hw/nvme: Allocate sec-ctrl-list as a dynamic array
From: Minwoo Im @ 2024-05-08 12:31 UTC (permalink / raw)
To: Klaus Jensen, Keith Busch; +Cc: qemu-devel, qemu-block, Minwoo Im
From: Minwoo Im <minwoo.im@samsung.com>
To avoid bumping up the maximum number of supported VFs any further, this
patch allocates the secondary controller list, (NvmeCtrl *)->sec_ctrl_list,
as a dynamic array sized by the number of VFs given in the sriov_max_vfs
property.
Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
---
hw/nvme/ctrl.c | 8 +-------
hw/nvme/nvme.h | 5 ++---
hw/nvme/subsys.c | 2 ++
3 files changed, 5 insertions(+), 10 deletions(-)
diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 7cf1e8e384b7..8db6828ab2a9 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -7863,12 +7863,6 @@ static bool nvme_check_params(NvmeCtrl *n, Error **errp)
return false;
}
- if (params->sriov_max_vfs > NVME_MAX_VFS) {
- error_setg(errp, "sriov_max_vfs must be between 0 and %d",
- NVME_MAX_VFS);
- return false;
- }
-
if (params->cmb_size_mb) {
error_setg(errp, "CMB is not supported with SR-IOV");
return false;
@@ -8461,7 +8455,7 @@ static Property nvme_props[] = {
DEFINE_PROP_UINT8("zoned.zasl", NvmeCtrl, params.zasl, 0),
DEFINE_PROP_BOOL("zoned.auto_transition", NvmeCtrl,
params.auto_transition_zones, true),
- DEFINE_PROP_UINT8("sriov_max_vfs", NvmeCtrl, params.sriov_max_vfs, 0),
+ DEFINE_PROP_UINT32("sriov_max_vfs", NvmeCtrl, params.sriov_max_vfs, 0),
DEFINE_PROP_UINT16("sriov_vq_flexible", NvmeCtrl,
params.sriov_vq_flexible, 0),
DEFINE_PROP_UINT16("sriov_vi_flexible", NvmeCtrl,
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 485b42c104ea..cc6b4a3a64c2 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -26,7 +26,6 @@
#define NVME_MAX_CONTROLLERS 256
#define NVME_MAX_NAMESPACES 256
-#define NVME_MAX_VFS 127
#define NVME_EUI64_DEFAULT ((uint64_t)0x5254000000000000)
#define NVME_FDP_MAX_EVENTS 63
#define NVME_FDP_MAXPIDS 128
@@ -532,7 +531,7 @@ typedef struct NvmeParams {
bool auto_transition_zones;
bool legacy_cmb;
bool ioeventfd;
- uint8_t sriov_max_vfs;
+ uint32_t sriov_max_vfs;
uint16_t sriov_vq_flexible;
uint16_t sriov_vi_flexible;
uint8_t sriov_max_vq_per_vf;
@@ -614,7 +613,7 @@ typedef struct NvmeCtrl {
NvmePriCtrlCap pri_ctrl_cap;
uint32_t nr_sec_ctrls;
- NvmeSecCtrlEntry sec_ctrl_list[NVME_MAX_VFS];
+ NvmeSecCtrlEntry *sec_ctrl_list;
struct {
uint16_t vqrfap;
uint16_t virfap;
diff --git a/hw/nvme/subsys.c b/hw/nvme/subsys.c
index 561ed04a5317..77deaf2c2c97 100644
--- a/hw/nvme/subsys.c
+++ b/hw/nvme/subsys.c
@@ -61,6 +61,8 @@ int nvme_subsys_register_ctrl(NvmeCtrl *n, Error **errp)
if (pci_is_vf(&n->parent_obj)) {
cntlid = le16_to_cpu(sctrl->scid);
} else {
+ n->sec_ctrl_list = g_new0(NvmeSecCtrlEntry, num_vfs);
+
for (cntlid = 0; cntlid < ARRAY_SIZE(subsys->ctrls); cntlid++) {
if (!subsys->ctrls[cntlid]) {
break;
--
2.34.1
* [PATCH v3 4/4] hw/nvme: Expand VI/VQ resource to uint32
From: Minwoo Im @ 2024-05-08 12:31 UTC (permalink / raw)
To: Klaus Jensen, Keith Busch; +Cc: qemu-devel, qemu-block, Minwoo Im, Klaus Jensen
From: Minwoo Im <minwoo.im@samsung.com>
VI and VQ resources cover the queue resources of each VF in SR-IOV. The
maximum number of I/O queue pairs is 0xffff, so expand these parameters to
cover the full number of I/O queue pairs.
This patch also fixes an Identify Secondary Controller List overflow caused
by the expanded number of secondary controllers.
Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
---
hw/nvme/ctrl.c | 8 ++++----
hw/nvme/nvme.h | 4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 8db6828ab2a9..5a94f47b1cf1 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -8460,10 +8460,10 @@ static Property nvme_props[] = {
params.sriov_vq_flexible, 0),
DEFINE_PROP_UINT16("sriov_vi_flexible", NvmeCtrl,
params.sriov_vi_flexible, 0),
- DEFINE_PROP_UINT8("sriov_max_vi_per_vf", NvmeCtrl,
- params.sriov_max_vi_per_vf, 0),
- DEFINE_PROP_UINT8("sriov_max_vq_per_vf", NvmeCtrl,
- params.sriov_max_vq_per_vf, 0),
+ DEFINE_PROP_UINT32("sriov_max_vi_per_vf", NvmeCtrl,
+ params.sriov_max_vi_per_vf, 0),
+ DEFINE_PROP_UINT32("sriov_max_vq_per_vf", NvmeCtrl,
+ params.sriov_max_vq_per_vf, 0),
DEFINE_PROP_BOOL("msix-exclusive-bar", NvmeCtrl, params.msix_exclusive_bar,
false),
DEFINE_PROP_END_OF_LIST(),
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index cc6b4a3a64c2..aa708725c875 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -534,8 +534,8 @@ typedef struct NvmeParams {
uint32_t sriov_max_vfs;
uint16_t sriov_vq_flexible;
uint16_t sriov_vi_flexible;
- uint8_t sriov_max_vq_per_vf;
- uint8_t sriov_max_vi_per_vf;
+ uint32_t sriov_max_vq_per_vf;
+ uint32_t sriov_max_vi_per_vf;
bool msix_exclusive_bar;
} NvmeParams;
--
2.34.1