* [PATCH v2 0/4] hw/nvme: FDP and SR-IOV enhancements
From: Minwoo Im @ 2024-03-31 19:30 UTC
  To: Klaus Jensen, Keith Busch; +Cc: qemu-devel, qemu-block, Minwoo Im, Minwoo Im

Hello,

This patch set adds support for the Identify Endurance Group List
command, currently only for 'endgrpid=1' as used by FDP.  The following
three patches add support for more than 127 secondary controllers for
SR-IOV, along with their VI/VQ resources.  [2/4] separates the Identify
data structure for the secondary controller list from the actual
secondary controller list managed by the PF, so that proper identify
data can be returned for the given cntlid, which is the minimum
controller ID to retrieve.  [3/4] and [4/4] then actually increase the
number of SR-IOV resources.

Thanks,

v2:
 - Added [2/4] to fix a crash due to entry overflow

Minwoo Im (4):
  hw/nvme: add Identify Endurance Group List
  hw/nvme: separate identify data for sec. ctrl list
  hw/nvme: Support SR-IOV VFs more than 127
  hw/nvme: Expand VI/VQ resource to uint32

 hw/nvme/ctrl.c       | 53 +++++++++++++++++++++++++++++++-------------
 hw/nvme/nvme.h       | 20 +++++++++--------
 hw/nvme/subsys.c     |  8 +++----
 include/block/nvme.h |  1 +
 4 files changed, 53 insertions(+), 29 deletions(-)

-- 
2.34.1




* [PATCH v2 1/4] hw/nvme: add Identify Endurance Group List
From: Minwoo Im @ 2024-03-31 19:30 UTC
  To: Klaus Jensen, Keith Busch; +Cc: qemu-devel, qemu-block, Minwoo Im, Klaus Jensen

From: Minwoo Im <minwoo.im@samsung.com>

Commit 73064edfb864 ("hw/nvme: flexible data placement emulation")
introduced the NVMe FDP feature to nvme-subsys and nvme-ctrl with a
single endurance group (#1) supported.  This means the controller
should return proper identify data to the host via Identify Endurance
Group List (CNS 19h), even if only endurance group #1 is reported.
This patch allows host applications to ask which endurance groups are
available and utilize FDP through them.
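
For reference, the returned payload is a single 4 KiB identify page
whose first 16-bit word is the number of identifiers, followed by the
ascending list of endurance group IDs.  A minimal host-side sketch of
walking that buffer (illustrative only; the helper name is not from
QEMU or the kernel):

  #include <endian.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Walk the Identify Endurance Group List (CNS 19h) payload: entry 0
   * holds the identifier count, entries 1..n the endurance group IDs,
   * little-endian and in ascending order.  The caller supplies the
   * 4 KiB identify buffer. */
  static void parse_endgrp_list(const uint16_t *buf)
  {
      uint16_t n = le16toh(buf[0]);

      for (uint16_t i = 0; i < n; i++) {
          printf("endurance group #%u\n", le16toh(buf[1 + i]));
      }
  }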

Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
---
 hw/nvme/ctrl.c       | 22 ++++++++++++++++++++++
 include/block/nvme.h |  1 +
 2 files changed, 23 insertions(+)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index f026245d1e9e..cfe53a358871 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -5629,6 +5629,26 @@ static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req,
     return nvme_c2h(n, list, data_len, req);
 }
 
+static uint16_t nvme_endurance_group_list(NvmeCtrl *n, NvmeRequest *req)
+{
+    uint16_t list[NVME_CONTROLLER_LIST_SIZE] = {};
+    uint16_t *nr_ids = &list[0];
+    uint16_t *ids = &list[1];
+    uint16_t endgid = le32_to_cpu(req->cmd.cdw11) & 0xffff;
+
+    /*
+     * The current nvme-subsys only supports Endurance Group #1.
+     */
+    if (!endgid) {
+        *nr_ids = 1;
+        ids[0] = 1;
+    } else {
+        *nr_ids = 0;
+    }
+
+    return nvme_c2h(n, list, sizeof(list), req);
+}
+
 static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeNamespace *ns;
@@ -5732,6 +5752,8 @@ static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req)
         return nvme_identify_nslist(n, req, false);
     case NVME_ID_CNS_CS_NS_ACTIVE_LIST:
         return nvme_identify_nslist_csi(n, req, true);
+    case NVME_ID_CNS_ENDURANCE_GROUP_LIST:
+        return nvme_endurance_group_list(n, req);
     case NVME_ID_CNS_CS_NS_PRESENT_LIST:
         return nvme_identify_nslist_csi(n, req, false);
     case NVME_ID_CNS_NS_DESCR_LIST:
diff --git a/include/block/nvme.h b/include/block/nvme.h
index bb231d0b9ad0..7c77d38174a7 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -1074,6 +1074,7 @@ enum NvmeIdCns {
     NVME_ID_CNS_CTRL_LIST             = 0x13,
     NVME_ID_CNS_PRIMARY_CTRL_CAP      = 0x14,
     NVME_ID_CNS_SECONDARY_CTRL_LIST   = 0x15,
+    NVME_ID_CNS_ENDURANCE_GROUP_LIST  = 0x19,
     NVME_ID_CNS_CS_NS_PRESENT_LIST    = 0x1a,
     NVME_ID_CNS_CS_NS_PRESENT         = 0x1b,
     NVME_ID_CNS_IO_COMMAND_SET        = 0x1c,
-- 
2.34.1




* [PATCH v2 2/4] hw/nvme: separate identify data for sec. ctrl list
From: Minwoo Im @ 2024-03-31 19:30 UTC
  To: Klaus Jensen, Keith Busch; +Cc: qemu-devel, qemu-block, Minwoo Im

From: Minwoo Im <minwoo.im@samsung.com>

The secondary controller list for virtualization has been managed via
the Identify Secondary Controller List data structure, NvmeSecCtrlList,
which can hold at most 127 secondary controller entries.  The problem
hasn't arisen so far because NVME_MAX_VFS has been 127.

This patch separates the identify data itself from the actual secondary
controller list managed by the controller, so that the following patch
can support more than 127 secondary controllers.  It reuses the
NvmeSecCtrlEntry structure to manage all possible secondary
controllers, and copies entries into the identify data structure when
the command comes in.
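
For context, the 127-entry limit follows from the layout of the
identify page itself: one 4 KiB page holding a 32-byte header followed
by 32-byte entries.  A standalone sketch of that layout (field names
adapted from the spec; QEMU's own definitions may differ in detail):

  #include <stdint.h>

  typedef struct SecCtrlEntry {      /* one 32-byte entry */
      uint16_t scid;                 /* secondary controller id */
      uint16_t pcid;                 /* primary controller id */
      uint8_t  scs;                  /* secondary controller state */
      uint8_t  rsvd5[3];
      uint16_t vfn;                  /* virtual function number */
      uint16_t nvq;                  /* VQ resources assigned */
      uint16_t nvi;                  /* VI resources assigned */
      uint8_t  rsvd14[18];
  } SecCtrlEntry;

  typedef struct SecCtrlListPage {   /* one 4 KiB identify page */
      uint8_t      numcntl;          /* number of identifiers */
      uint8_t      rsvd1[31];
      SecCtrlEntry sec[127];         /* (4096 - 32) / 32 = 127 */
  } SecCtrlListPage;

  _Static_assert(sizeof(SecCtrlListPage) == 4096, "must be 4 KiB");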

Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
---
 hw/nvme/ctrl.c   | 21 ++++++++++-----------
 hw/nvme/nvme.h   | 14 ++++++++------
 hw/nvme/subsys.c |  8 ++++----
 3 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index cfe53a358871..7e60bc9f2075 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -219,7 +219,6 @@
 #define NVME_TEMPERATURE_CRITICAL 0x175
 #define NVME_NUM_FW_SLOTS 1
 #define NVME_DEFAULT_MAX_ZA_SIZE (128 * KiB)
-#define NVME_MAX_VFS 127
 #define NVME_VF_RES_GRANULARITY 1
 #define NVME_VF_OFFSET 0x1
 #define NVME_VF_STRIDE 1
@@ -5480,14 +5479,14 @@ static uint16_t nvme_identify_sec_ctrl_list(NvmeCtrl *n, NvmeRequest *req)
     NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
     uint16_t pri_ctrl_id = le16_to_cpu(n->pri_ctrl_cap.cntlid);
     uint16_t min_id = le16_to_cpu(c->ctrlid);
-    uint8_t num_sec_ctrl = n->sec_ctrl_list.numcntl;
+    uint8_t num_sec_ctrl = n->nr_sec_ctrls;
     NvmeSecCtrlList list = {0};
     uint8_t i;
 
     for (i = 0; i < num_sec_ctrl; i++) {
-        if (n->sec_ctrl_list.sec[i].scid >= min_id) {
-            list.numcntl = num_sec_ctrl - i;
-            memcpy(&list.sec, n->sec_ctrl_list.sec + i,
+        if (n->sec_ctrl_list[i].scid >= min_id) {
+            list.numcntl = MIN(num_sec_ctrl - i, 127);
+            memcpy(&list.sec, n->sec_ctrl_list + i,
                    list.numcntl * sizeof(NvmeSecCtrlEntry));
             break;
         }
@@ -7132,8 +7131,8 @@ static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst)
 
     if (n->params.sriov_max_vfs) {
         if (!pci_is_vf(pci_dev)) {
-            for (i = 0; i < n->sec_ctrl_list.numcntl; i++) {
-                sctrl = &n->sec_ctrl_list.sec[i];
+            for (i = 0; i < n->nr_sec_ctrls; i++) {
+                sctrl = &n->sec_ctrl_list[i];
                 nvme_virt_set_state(n, le16_to_cpu(sctrl->scid), false);
             }
 
@@ -7921,7 +7920,7 @@ static bool nvme_check_params(NvmeCtrl *n, Error **errp)
 static void nvme_init_state(NvmeCtrl *n)
 {
     NvmePriCtrlCap *cap = &n->pri_ctrl_cap;
-    NvmeSecCtrlList *list = &n->sec_ctrl_list;
+    NvmeSecCtrlEntry *list = n->sec_ctrl_list;
     NvmeSecCtrlEntry *sctrl;
     PCIDevice *pci = PCI_DEVICE(n);
     uint8_t max_vfs;
@@ -7946,9 +7945,9 @@ static void nvme_init_state(NvmeCtrl *n)
     n->aer_reqs = g_new0(NvmeRequest *, n->params.aerl + 1);
     QTAILQ_INIT(&n->aer_queue);
 
-    list->numcntl = cpu_to_le16(max_vfs);
+    n->nr_sec_ctrls = max_vfs;
     for (i = 0; i < max_vfs; i++) {
-        sctrl = &list->sec[i];
+        sctrl = &list[i];
         sctrl->pcid = cpu_to_le16(n->cntlid);
         sctrl->vfn = cpu_to_le16(i + 1);
     }
@@ -8505,7 +8504,7 @@ static void nvme_sriov_pre_write_ctrl(PCIDevice *dev, uint32_t address,
         if (!(val & PCI_SRIOV_CTRL_VFE)) {
             num_vfs = pci_get_word(dev->config + sriov_cap + PCI_SRIOV_NUM_VF);
             for (i = 0; i < num_vfs; i++) {
-                sctrl = &n->sec_ctrl_list.sec[i];
+                sctrl = &n->sec_ctrl_list[i];
                 nvme_virt_set_state(n, le16_to_cpu(sctrl->scid), false);
             }
         }
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 5f2ae7b28b9c..02c11d909cd1 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -26,6 +26,7 @@
 
 #define NVME_MAX_CONTROLLERS 256
 #define NVME_MAX_NAMESPACES  256
+#define NVME_MAX_VFS 127
 #define NVME_EUI64_DEFAULT ((uint64_t)0x5254000000000000)
 #define NVME_FDP_MAX_EVENTS 63
 #define NVME_FDP_MAXPIDS 128
@@ -597,7 +598,8 @@ typedef struct NvmeCtrl {
     } features;
 
     NvmePriCtrlCap  pri_ctrl_cap;
-    NvmeSecCtrlList sec_ctrl_list;
+    uint32_t nr_sec_ctrls;
+    NvmeSecCtrlEntry sec_ctrl_list[NVME_MAX_VFS];
     struct {
         uint16_t    vqrfap;
         uint16_t    virfap;
@@ -647,7 +649,7 @@ static inline NvmeSecCtrlEntry *nvme_sctrl(NvmeCtrl *n)
     NvmeCtrl *pf = NVME(pcie_sriov_get_pf(pci_dev));
 
     if (pci_is_vf(pci_dev)) {
-        return &pf->sec_ctrl_list.sec[pcie_sriov_vf_number(pci_dev)];
+        return &pf->sec_ctrl_list[pcie_sriov_vf_number(pci_dev)];
     }
 
     return NULL;
@@ -656,12 +658,12 @@ static inline NvmeSecCtrlEntry *nvme_sctrl(NvmeCtrl *n)
 static inline NvmeSecCtrlEntry *nvme_sctrl_for_cntlid(NvmeCtrl *n,
                                                       uint16_t cntlid)
 {
-    NvmeSecCtrlList *list = &n->sec_ctrl_list;
+    NvmeSecCtrlEntry *list = n->sec_ctrl_list;
     uint8_t i;
 
-    for (i = 0; i < list->numcntl; i++) {
-        if (le16_to_cpu(list->sec[i].scid) == cntlid) {
-            return &list->sec[i];
+    for (i = 0; i < n->nr_sec_ctrls; i++) {
+        if (le16_to_cpu(list[i].scid) == cntlid) {
+            return &list[i];
         }
     }
 
diff --git a/hw/nvme/subsys.c b/hw/nvme/subsys.c
index d30bb8bfd5b4..561ed04a5317 100644
--- a/hw/nvme/subsys.c
+++ b/hw/nvme/subsys.c
@@ -17,13 +17,13 @@
 static int nvme_subsys_reserve_cntlids(NvmeCtrl *n, int start, int num)
 {
     NvmeSubsystem *subsys = n->subsys;
-    NvmeSecCtrlList *list = &n->sec_ctrl_list;
+    NvmeSecCtrlEntry *list = n->sec_ctrl_list;
     NvmeSecCtrlEntry *sctrl;
     int i, cnt = 0;
 
     for (i = start; i < ARRAY_SIZE(subsys->ctrls) && cnt < num; i++) {
         if (!subsys->ctrls[i]) {
-            sctrl = &list->sec[cnt];
+            sctrl = &list[cnt];
             sctrl->scid = cpu_to_le16(i);
             subsys->ctrls[i] = SUBSYS_SLOT_RSVD;
             cnt++;
@@ -36,12 +36,12 @@ static int nvme_subsys_reserve_cntlids(NvmeCtrl *n, int start, int num)
 static void nvme_subsys_unreserve_cntlids(NvmeCtrl *n)
 {
     NvmeSubsystem *subsys = n->subsys;
-    NvmeSecCtrlList *list = &n->sec_ctrl_list;
+    NvmeSecCtrlEntry *list = n->sec_ctrl_list;
     NvmeSecCtrlEntry *sctrl;
     int i, cntlid;
 
     for (i = 0; i < n->params.sriov_max_vfs; i++) {
-        sctrl = &list->sec[i];
+        sctrl = &list[i];
         cntlid = le16_to_cpu(sctrl->scid);
 
         if (cntlid) {
-- 
2.34.1




* [PATCH v2 3/4] hw/nvme: Support SR-IOV VFs more than 127
From: Minwoo Im @ 2024-03-31 19:30 UTC
  To: Klaus Jensen, Keith Busch; +Cc: qemu-devel, qemu-block, Minwoo Im, Klaus Jensen

From: Minwoo Im <minwoo.im@samsung.com>

The number of virtual functions (VFs) supported in SR-IOV is 64k per
the spec.  To test a large number of MSI-X vectors mapping onto the CPU
matrix in a QEMU system, we need far more than 127 VFs.  This patch
adds support for 256 VFs per physical function (PF).
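
A note on validation: with the property widened to 16 bits but the
controller still capped at NVME_MAX_VFS, the parameter check has to
reject larger values.  A hypothetical standalone version of the bound
check (the in-tree nvme_check_params() wording may differ):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define NVME_MAX_VFS 256

  static bool check_sriov_max_vfs(uint16_t sriov_max_vfs)
  {
      if (sriov_max_vfs > NVME_MAX_VFS) {
          fprintf(stderr, "sriov_max_vfs must be between 0 and %d\n",
                  NVME_MAX_VFS);
          return false;
      }
      return true;
  }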

Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
---
 hw/nvme/ctrl.c | 2 +-
 hw/nvme/nvme.h | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 7e60bc9f2075..893d4e96656b 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -8424,7 +8424,7 @@ static Property nvme_props[] = {
     DEFINE_PROP_UINT8("zoned.zasl", NvmeCtrl, params.zasl, 0),
     DEFINE_PROP_BOOL("zoned.auto_transition", NvmeCtrl,
                      params.auto_transition_zones, true),
-    DEFINE_PROP_UINT8("sriov_max_vfs", NvmeCtrl, params.sriov_max_vfs, 0),
+    DEFINE_PROP_UINT16("sriov_max_vfs", NvmeCtrl, params.sriov_max_vfs, 0),
     DEFINE_PROP_UINT16("sriov_vq_flexible", NvmeCtrl,
                        params.sriov_vq_flexible, 0),
     DEFINE_PROP_UINT16("sriov_vi_flexible", NvmeCtrl,
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 02c11d909cd1..ad928c28f2c5 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -26,7 +26,7 @@
 
 #define NVME_MAX_CONTROLLERS 256
 #define NVME_MAX_NAMESPACES  256
-#define NVME_MAX_VFS 127
+#define NVME_MAX_VFS 256
 #define NVME_EUI64_DEFAULT ((uint64_t)0x5254000000000000)
 #define NVME_FDP_MAX_EVENTS 63
 #define NVME_FDP_MAXPIDS 128
@@ -518,7 +518,7 @@ typedef struct NvmeParams {
     bool     auto_transition_zones;
     bool     legacy_cmb;
     bool     ioeventfd;
-    uint8_t  sriov_max_vfs;
+    uint16_t  sriov_max_vfs;
     uint16_t sriov_vq_flexible;
     uint16_t sriov_vi_flexible;
     uint8_t  sriov_max_vq_per_vf;
-- 
2.34.1




* [PATCH v2 4/4] hw/nvme: Expand VI/VQ resource to uint32
From: Minwoo Im @ 2024-03-31 19:30 UTC
  To: Klaus Jensen, Keith Busch; +Cc: qemu-devel, qemu-block, Minwoo Im, Klaus Jensen

From: Minwoo Im <minwoo.im@samsung.com>

VI and VQ resources cover the queue resources of each VF in SR-IOV.
Since the maximum number of I/O queue pairs is 0xffff, we can expand
these parameters to cover the full range of I/O queue pairs.

This patch also fixes an Identify Secondary Controller List overflow
caused by the expanded number of secondary controllers.
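
To illustrate the motivation: NVMe allows up to 0xffff I/O queue
pairs, so a per-VF maximum stored in a uint8 property silently
truncates anything above 255.  A toy standalone demonstration:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint32_t requested = 0xffff;  /* full I/O queue pair range */
      uint8_t  narrow = requested;  /* old uint8 property: truncates */
      uint32_t wide = requested;    /* widened property keeps 0xffff */

      printf("requested=%u narrow=%u wide=%u\n",
             (unsigned)requested, (unsigned)narrow, (unsigned)wide);
      return 0;
  }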

Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
---
 hw/nvme/ctrl.c | 8 ++++----
 hw/nvme/nvme.h | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 893d4e96656b..893afae29336 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -8429,10 +8429,10 @@ static Property nvme_props[] = {
                        params.sriov_vq_flexible, 0),
     DEFINE_PROP_UINT16("sriov_vi_flexible", NvmeCtrl,
                        params.sriov_vi_flexible, 0),
-    DEFINE_PROP_UINT8("sriov_max_vi_per_vf", NvmeCtrl,
-                      params.sriov_max_vi_per_vf, 0),
-    DEFINE_PROP_UINT8("sriov_max_vq_per_vf", NvmeCtrl,
-                      params.sriov_max_vq_per_vf, 0),
+    DEFINE_PROP_UINT32("sriov_max_vi_per_vf", NvmeCtrl,
+                       params.sriov_max_vi_per_vf, 0),
+    DEFINE_PROP_UINT32("sriov_max_vq_per_vf", NvmeCtrl,
+                       params.sriov_max_vq_per_vf, 0),
     DEFINE_PROP_END_OF_LIST(),
 };
 
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index ad928c28f2c5..492617f19515 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -521,8 +521,8 @@ typedef struct NvmeParams {
     uint16_t  sriov_max_vfs;
     uint16_t sriov_vq_flexible;
     uint16_t sriov_vi_flexible;
-    uint8_t  sriov_max_vq_per_vf;
-    uint8_t  sriov_max_vi_per_vf;
+    uint32_t  sriov_max_vq_per_vf;
+    uint32_t  sriov_max_vi_per_vf;
 } NvmeParams;
 
 typedef struct NvmeCtrl {
-- 
2.34.1




* Re: [PATCH v2 3/4] hw/nvme: Support SR-IOV VFs more than 127
From: Klaus Jensen @ 2024-05-01 12:46 UTC
  To: Minwoo Im; +Cc: Keith Busch, qemu-devel, qemu-block, Minwoo Im, Klaus Jensen

On Apr  1 04:30, Minwoo Im wrote:
> From: Minwoo Im <minwoo.im@samsung.com>
> 
> The number of virtual functions (VFs) supported in SR-IOV is 64k per
> the spec.  To test a large number of MSI-X vectors mapping onto the CPU
> matrix in a QEMU system, we need far more than 127 VFs.  This patch
> adds support for 256 VFs per physical function (PF).
> 

With patch 2 in place, shouldn't it be relatively straightforward to
convert the static array to a dynamic one and just use numvfs to size
it?  Then we won't have to add another patch when someone comes around
and wants to bump this again ;)


* Re: [PATCH v2 3/4] hw/nvme: Support SR-IOV VFs more than 127
From: Minwoo Im @ 2024-05-07 20:48 UTC
  To: Klaus Jensen; +Cc: Keith Busch, qemu-devel, qemu-block, Minwoo Im, Klaus Jensen

On 24-05-01 14:46:39, Klaus Jensen wrote:
> On Apr  1 04:30, Minwoo Im wrote:
> > From: Minwoo Im <minwoo.im@samsung.com>
> > 
> > The number of virtual functions (VFs) supported in SR-IOV is 64k per
> > the spec.  To test a large number of MSI-X vectors mapping onto the CPU
> > matrix in a QEMU system, we need far more than 127 VFs.  This patch
> > adds support for 256 VFs per physical function (PF).
> > 
> 
> With patch 2 in place, shouldn't it be relatively straightforward to
> convert the static array to a dynamic one and just use numvfs to size
> it?  Then we won't have to add another patch when someone comes around
> and wants to bump this again ;)

Sorry for the late response here.  I will update the 3rd patch to
convert the secondary controller list from a static array to a dynamic
array, and make the max_vfs parameter a uint32.
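
Roughly, that conversion would size the list at init time instead of
with the compile-time NVME_MAX_VFS cap; a hypothetical sketch, not the
posted patch:

  /* in nvme_init_state(), after sriov_max_vfs has been validated;
   * g_new0() is GLib's zero-initializing allocator */
  n->sec_ctrl_list = g_new0(NvmeSecCtrlEntry, n->params.sriov_max_vfs);
  n->nr_sec_ctrls = n->params.sriov_max_vfs;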


