* [PATCH net-next 0/7] bnxt_en: Add interface to support RDMA driver.
@ 2016-12-06 17:09 Michael Chan
2016-12-06 17:09 ` [PATCH net-next 1/7] bnxt_en: Add bnxt_set_max_func_irqs() Michael Chan
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Michael Chan @ 2016-12-06 17:09 UTC (permalink / raw)
To: davem; +Cc: netdev, selvin.xavier, somnath.kotur, dledford, linux-rdma
This series adds an interface to support a brand new RDMA driver, bnxt_re.
The first step is to re-arrange some code so that pci_enable_msix() can
be called during pci probe. The purpose is to allow the RDMA driver to
initialize and stay initialized whether the netdev is up or down.
Then we make some changes to VF resource allocation so that there are
enough resources to support RDMA.
Finally, the last patch adds a simple interface to allow the RDMA driver to
probe and register itself with any bnxt_en device that supports RDMA.
Once registered, the RDMA driver can request MSI-X vectors, send firmware
messages, and receive async event notifications.
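For illustration, a rough consumer-side sketch of how an RDMA driver might
attach through this interface, based on the bnxt_ulp.h ops added in patch 7.
The bnxt_re_* names are hypothetical and error handling is trimmed:

static void bnxt_re_stop(void *handle)
{
        /* quiesce RDMA activity before an L2 reset */
}

static void bnxt_re_start(void *handle)
{
        /* resume after the L2 reset completes */
}

static struct bnxt_ulp_ops bnxt_re_ulp_ops = {
        .ulp_stop       = bnxt_re_stop,
        .ulp_start      = bnxt_re_start,
};

/* Caller is expected to hold rtnl_lock. */
static int bnxt_re_attach(struct net_device *netdev, void *handle)
{
        struct bnxt_en_dev *edev = bnxt_ulp_probe(netdev);

        if (IS_ERR(edev))
                return PTR_ERR(edev);
        if (!(edev->flags & BNXT_EN_FLAG_ROCE_CAP))
                return -ENODEV;
        return edev->en_ops->bnxt_register_device(edev, BNXT_ROCE_ULP,
                                                  &bnxt_re_ulp_ops, handle);
}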
David, please consider this series for net-next. Thanks.
Michael Chan (7):
bnxt_en: Add bnxt_set_max_func_irqs().
bnxt_en: Enable MSIX early in bnxt_init_one().
bnxt_en: Move function reset to bnxt_init_one().
bnxt_en: Improve completion ring allocation for VFs.
bnxt_en: Reserve RDMA resources by default.
bnxt_en: Refactor the driver registration function with firmware.
bnxt_en: Add interface to support RDMA driver.
drivers/net/ethernet/broadcom/bnxt/Makefile | 2 +-
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 360 +++++++++++++++++-------
drivers/net/ethernet/broadcom/bnxt/bnxt.h | 22 +-
drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c | 14 +-
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c | 346 +++++++++++++++++++++++
drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h | 93 ++++++
6 files changed, 722 insertions(+), 115 deletions(-)
create mode 100644 drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
create mode 100644 drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
--
1.8.3.1
^ permalink raw reply [flat|nested] 10+ messages in thread* [PATCH net-next 1/7] bnxt_en: Add bnxt_set_max_func_irqs(). 2016-12-06 17:09 [PATCH net-next 0/7] bnxt_en: Add interface to support RDMA driver Michael Chan @ 2016-12-06 17:09 ` Michael Chan 2016-12-06 17:09 ` [PATCH net-next 3/7] bnxt_en: Move function reset to bnxt_init_one() Michael Chan ` (3 subsequent siblings) 4 siblings, 0 replies; 10+ messages in thread From: Michael Chan @ 2016-12-06 17:09 UTC (permalink / raw) To: davem; +Cc: netdev, selvin.xavier, somnath.kotur, dledford, linux-rdma By refactoring existing code into this new function. The new function will be used in subsequent patches. Signed-off-by: Michael Chan <michael.chan@broadcom.com> --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 17 +++++++++++------ drivers/net/ethernet/broadcom/bnxt/bnxt.h | 1 + 2 files changed, 12 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index e8ab5fd..6cdfe3e 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -4743,6 +4743,16 @@ static int bnxt_trim_rings(struct bnxt *bp, int *rx, int *tx, int max, return 0; } +void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max_irqs) +{ + if (BNXT_PF(bp)) + bp->pf.max_irqs = max_irqs; +#if defined(CONFIG_BNXT_SRIOV) + else + bp->vf.max_irqs = max_irqs; +#endif +} + static int bnxt_setup_msix(struct bnxt *bp) { struct msix_entry *msix_ent; @@ -6949,12 +6959,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) bnxt_set_tpa_flags(bp); bnxt_set_ring_params(bp); - if (BNXT_PF(bp)) - bp->pf.max_irqs = max_irqs; -#if defined(CONFIG_BNXT_SRIOV) - else - bp->vf.max_irqs = max_irqs; -#endif + bnxt_set_max_func_irqs(bp, max_irqs); bnxt_set_dflt_rings(bp); /* Default RSS hash cfg. */ diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index b4abc1b..8327d0d 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -1235,6 +1235,7 @@ static inline void bnxt_disable_poll(struct bnxt_napi *bnapi) int hwrm_send_message_silent(struct bnxt *, void *, u32, int); int bnxt_hwrm_set_coal(struct bnxt *); int bnxt_hwrm_func_qcaps(struct bnxt *); +void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max); void bnxt_tx_disable(struct bnxt *bp); void bnxt_tx_enable(struct bnxt *bp); int bnxt_hwrm_set_pause(struct bnxt *); -- 1.8.3.1 ^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH net-next 3/7] bnxt_en: Move function reset to bnxt_init_one(). 2016-12-06 17:09 [PATCH net-next 0/7] bnxt_en: Add interface to support RDMA driver Michael Chan 2016-12-06 17:09 ` [PATCH net-next 1/7] bnxt_en: Add bnxt_set_max_func_irqs() Michael Chan @ 2016-12-06 17:09 ` Michael Chan 2016-12-06 17:09 ` [PATCH net-next 4/7] bnxt_en: Improve completion ring allocation for VFs Michael Chan ` (2 subsequent siblings) 4 siblings, 0 replies; 10+ messages in thread From: Michael Chan @ 2016-12-06 17:09 UTC (permalink / raw) To: davem; +Cc: netdev, selvin.xavier, somnath.kotur, dledford, linux-rdma Now that MSIX is enabled in bnxt_init_one(), resources may be allocated by the RDMA driver before the network device is opened. So we cannot do function reset in bnxt_open() which will clear all the resources. The proper place to do function reset now is in bnxt_init_one(). If we get AER, we'll do function reset as well. Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 25 ++++++------------------- drivers/net/ethernet/broadcom/bnxt/bnxt.h | 1 - 2 files changed, 6 insertions(+), 20 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 9178bf8..8b00ef4 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -5613,22 +5613,7 @@ int bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init) static int bnxt_open(struct net_device *dev) { struct bnxt *bp = netdev_priv(dev); - int rc = 0; - if (!test_bit(BNXT_STATE_FN_RST_DONE, &bp->state)) { - rc = bnxt_hwrm_func_reset(bp); - if (rc) { - netdev_err(bp->dev, "hwrm chip reset failure rc: %x\n", - rc); - rc = -EBUSY; - return rc; - } - /* Do func_reset during the 1st PF open only to prevent killing - * the VFs when the PF is brought down and up. 
- */ - if (BNXT_PF(bp)) - set_bit(BNXT_STATE_FN_RST_DONE, &bp->state); - } return __bnxt_open_nic(bp, true, true); } @@ -7028,6 +7013,10 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) if (rc) goto init_err; + rc = bnxt_hwrm_func_reset(bp); + if (rc) + goto init_err; + rc = bnxt_init_int_mode(bp); if (rc) goto init_err; @@ -7069,7 +7058,6 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state) { struct net_device *netdev = pci_get_drvdata(pdev); - struct bnxt *bp = netdev_priv(netdev); netdev_info(netdev, "PCI I/O error detected\n"); @@ -7084,8 +7072,6 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev, if (netif_running(netdev)) bnxt_close(netdev); - /* So that func_reset will be done during slot_reset */ - clear_bit(BNXT_STATE_FN_RST_DONE, &bp->state); pci_disable_device(pdev); rtnl_unlock(); @@ -7119,7 +7105,8 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev) } else { pci_set_master(pdev); - if (netif_running(netdev)) + err = bnxt_hwrm_func_reset(bp); + if (!err && netif_running(netdev)) err = bnxt_open(netdev); if (!err) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index 1461355..0ee2cc4 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -1021,7 +1021,6 @@ struct bnxt { unsigned long state; #define BNXT_STATE_OPEN 0 #define BNXT_STATE_IN_SP_TASK 1 -#define BNXT_STATE_FN_RST_DONE 2 struct bnxt_irq *irq_tbl; int total_irqs; -- 1.8.3.1 ^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH net-next 4/7] bnxt_en: Improve completion ring allocation for VFs. 2016-12-06 17:09 [PATCH net-next 0/7] bnxt_en: Add interface to support RDMA driver Michael Chan 2016-12-06 17:09 ` [PATCH net-next 1/7] bnxt_en: Add bnxt_set_max_func_irqs() Michael Chan 2016-12-06 17:09 ` [PATCH net-next 3/7] bnxt_en: Move function reset to bnxt_init_one() Michael Chan @ 2016-12-06 17:09 ` Michael Chan [not found] ` <1481044178-25193-1-git-send-email-michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> 2016-12-06 17:09 ` [PATCH net-next 7/7] bnxt_en: Add interface to support RDMA driver Michael Chan 4 siblings, 0 replies; 10+ messages in thread From: Michael Chan @ 2016-12-06 17:09 UTC (permalink / raw) To: davem; +Cc: netdev, selvin.xavier, somnath.kotur, dledford, linux-rdma All available remaining completion rings not used by the PF should be made available for the VFs so that there are enough rings in the VF to support RDMA. The earlier workaround code of capping the rings by the statistics context is removed. When SRIOV is disabled, call a new function bnxt_restore_pf_fw_resources() to restore FW resources. Later on we need to add some logic to account for RDMA resources. Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 8 +++++++- drivers/net/ethernet/broadcom/bnxt/bnxt.h | 2 +- drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c | 14 ++++---------- 3 files changed, 12 insertions(+), 12 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 8b00ef4..1f6be83 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -4152,7 +4152,7 @@ static int bnxt_hwrm_func_qcfg(struct bnxt *bp) return rc; } -int bnxt_hwrm_func_qcaps(struct bnxt *bp) +static int bnxt_hwrm_func_qcaps(struct bnxt *bp) { int rc = 0; struct hwrm_func_qcaps_input req = {0}; @@ -6856,6 +6856,12 @@ static int bnxt_set_dflt_rings(struct bnxt *bp) return rc; } +void bnxt_restore_pf_fw_resources(struct bnxt *bp) +{ + ASSERT_RTNL(); + bnxt_hwrm_func_qcaps(bp); +} + static void bnxt_parse_log_pcie_link(struct bnxt *bp) { enum pcie_link_width width = PCIE_LNK_WIDTH_UNKNOWN; diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index 0ee2cc4..43a4b17 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -1234,7 +1234,6 @@ static inline void bnxt_disable_poll(struct bnxt_napi *bnapi) int hwrm_send_message(struct bnxt *, void *, u32, int); int hwrm_send_message_silent(struct bnxt *, void *, u32, int); int bnxt_hwrm_set_coal(struct bnxt *); -int bnxt_hwrm_func_qcaps(struct bnxt *); void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max); void bnxt_tx_disable(struct bnxt *bp); void bnxt_tx_enable(struct bnxt *bp); @@ -1245,4 +1244,5 @@ static inline void bnxt_disable_poll(struct bnxt_napi *bnapi) int bnxt_close_nic(struct bnxt *, bool, bool); int bnxt_setup_mq_tc(struct net_device *dev, u8 tc); int bnxt_get_max_rings(struct bnxt *, int *, int *, bool); +void bnxt_restore_pf_fw_resources(struct bnxt *bp); #endif diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c index bff626a..c696025 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c @@ -420,15 +420,7 @@ static int bnxt_hwrm_func_cfg(struct bnxt *bp, 
int num_vfs) bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_CFG, -1, -1); /* Remaining rings are distributed equally amongs VF's for now */ - /* TODO: the following workaroud is needed to restrict total number - * of vf_cp_rings not exceed number of HW ring groups. This WA should - * be removed once new HWRM provides HW ring groups capability in - * hwrm_func_qcap. - */ - vf_cp_rings = min_t(u16, pf->max_cp_rings, pf->max_stat_ctxs); - vf_cp_rings = (vf_cp_rings - bp->cp_nr_rings) / num_vfs; - /* TODO: restore this logic below once the WA above is removed */ - /* vf_cp_rings = (pf->max_cp_rings - bp->cp_nr_rings) / num_vfs; */ + vf_cp_rings = (pf->max_cp_rings - bp->cp_nr_rings) / num_vfs; vf_stat_ctx = (pf->max_stat_ctxs - bp->num_stat_ctxs) / num_vfs; if (bp->flags & BNXT_FLAG_AGG_RINGS) vf_rx_rings = (pf->max_rx_rings - bp->rx_nr_rings * 2) / @@ -590,7 +582,9 @@ void bnxt_sriov_disable(struct bnxt *bp) bp->pf.active_vfs = 0; /* Reclaim all resources for the PF. */ - bnxt_hwrm_func_qcaps(bp); + rtnl_lock(); + bnxt_restore_pf_fw_resources(bp); + rtnl_unlock(); } int bnxt_sriov_configure(struct pci_dev *pdev, int num_vfs) -- 1.8.3.1 ^ permalink raw reply related [flat|nested] 10+ messages in thread
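To make the new distribution concrete with hypothetical numbers: if the PF
reports max_cp_rings = 128, the PF itself uses cp_nr_rings = 8, and 8 VFs are
enabled, each VF now gets (128 - 8) / 8 = 15 completion rings. Under the old
workaround, with max_stat_ctxs = 40, the same calculation was capped at
(min(128, 40) - 8) / 8 = 4 rings per VF.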
[parent not found: <1481044178-25193-1-git-send-email-michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>]
* [PATCH net-next 2/7] bnxt_en: Enable MSIX early in bnxt_init_one(). [not found] ` <1481044178-25193-1-git-send-email-michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> @ 2016-12-06 17:09 ` Michael Chan [not found] ` <1481044178-25193-3-git-send-email-michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> 2016-12-06 17:09 ` [PATCH net-next 5/7] bnxt_en: Reserve RDMA resources by default Michael Chan 2016-12-06 17:09 ` [PATCH net-next 6/7] bnxt_en: Refactor the driver registration function with firmware Michael Chan 2 siblings, 1 reply; 10+ messages in thread From: Michael Chan @ 2016-12-06 17:09 UTC (permalink / raw) To: davem-fT/PcQaiUtIeIZ0/mPfg9Q Cc: netdev-u79uwXL29TY76Z2rM5mHXA, selvin.xavier-dY08KVG/lbpWk0Htik3J/w, somnath.kotur-dY08KVG/lbpWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA, linux-rdma-u79uwXL29TY76Z2rM5mHXA To better support the new RDMA driver, we need to move pci_enable_msix() from bnxt_open() to bnxt_init_one(). This way, MSIX vectors are available to the RDMA driver whether the network device is up or down. Part of the existing bnxt_setup_int_mode() function is now refactored into a new bnxt_init_int_mode(). bnxt_init_int_mode() is called during bnxt_init_one() to enable MSIX. The remaining logic in bnxt_setup_int_mode() to map the IRQs to the completion rings is called during bnxt_open(). Signed-off-by: Somnath Kotur <somnath.kotur-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> Signed-off-by: Michael Chan <michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 183 +++++++++++++++++++----------- drivers/net/ethernet/broadcom/bnxt/bnxt.h | 1 + 2 files changed, 115 insertions(+), 69 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 6cdfe3e..9178bf8 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -4743,6 +4743,80 @@ static int bnxt_trim_rings(struct bnxt *bp, int *rx, int *tx, int max, return 0; } +static void bnxt_setup_msix(struct bnxt *bp) +{ + const int len = sizeof(bp->irq_tbl[0].name); + struct net_device *dev = bp->dev; + int tcs, i; + + tcs = netdev_get_num_tc(dev); + if (tcs > 1) { + bp->tx_nr_rings_per_tc = bp->tx_nr_rings / tcs; + if (bp->tx_nr_rings_per_tc == 0) { + netdev_reset_tc(dev); + bp->tx_nr_rings_per_tc = bp->tx_nr_rings; + } else { + int i, off, count; + + bp->tx_nr_rings = bp->tx_nr_rings_per_tc * tcs; + for (i = 0; i < tcs; i++) { + count = bp->tx_nr_rings_per_tc; + off = i * count; + netdev_set_tc_queue(dev, i, count, off); + } + } + } + + for (i = 0; i < bp->cp_nr_rings; i++) { + char *attr; + + if (bp->flags & BNXT_FLAG_SHARED_RINGS) + attr = "TxRx"; + else if (i < bp->rx_nr_rings) + attr = "rx"; + else + attr = "tx"; + + snprintf(bp->irq_tbl[i].name, len, "%s-%s-%d", dev->name, attr, + i); + bp->irq_tbl[i].handler = bnxt_msix; + } +} + +static void bnxt_setup_inta(struct bnxt *bp) +{ + const int len = sizeof(bp->irq_tbl[0].name); + + if (netdev_get_num_tc(bp->dev)) + netdev_reset_tc(bp->dev); + + snprintf(bp->irq_tbl[0].name, len, "%s-%s-%d", bp->dev->name, "TxRx", + 0); + bp->irq_tbl[0].handler = bnxt_inta; +} + +static int bnxt_setup_int_mode(struct bnxt *bp) +{ + int rc; + + if (bp->flags & BNXT_FLAG_USING_MSIX) + bnxt_setup_msix(bp); + else + bnxt_setup_inta(bp); + + rc = bnxt_set_real_num_queues(bp); + return rc; +} + +static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp) +{ + if (BNXT_PF(bp)) + return bp->pf.max_irqs; +#if defined(CONFIG_BNXT_SRIOV) + return 
bp->vf.max_irqs; +#endif +} + void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max_irqs) { if (BNXT_PF(bp)) @@ -4753,16 +4827,12 @@ void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max_irqs) #endif } -static int bnxt_setup_msix(struct bnxt *bp) +static int bnxt_init_msix(struct bnxt *bp) { - struct msix_entry *msix_ent; - struct net_device *dev = bp->dev; int i, total_vecs, rc = 0, min = 1; - const int len = sizeof(bp->irq_tbl[0].name); - - bp->flags &= ~BNXT_FLAG_USING_MSIX; - total_vecs = bp->cp_nr_rings; + struct msix_entry *msix_ent; + total_vecs = bnxt_get_max_func_irqs(bp); msix_ent = kcalloc(total_vecs, sizeof(struct msix_entry), GFP_KERNEL); if (!msix_ent) return -ENOMEM; @@ -4783,8 +4853,10 @@ static int bnxt_setup_msix(struct bnxt *bp) bp->irq_tbl = kcalloc(total_vecs, sizeof(struct bnxt_irq), GFP_KERNEL); if (bp->irq_tbl) { - int tcs; + for (i = 0; i < total_vecs; i++) + bp->irq_tbl[i].vector = msix_ent[i].vector; + bp->total_irqs = total_vecs; /* Trim rings based upon num of vectors allocated */ rc = bnxt_trim_rings(bp, &bp->rx_nr_rings, &bp->tx_nr_rings, total_vecs, min == 1); @@ -4792,43 +4864,10 @@ static int bnxt_setup_msix(struct bnxt *bp) goto msix_setup_exit; bp->tx_nr_rings_per_tc = bp->tx_nr_rings; - tcs = netdev_get_num_tc(dev); - if (tcs > 1) { - bp->tx_nr_rings_per_tc = bp->tx_nr_rings / tcs; - if (bp->tx_nr_rings_per_tc == 0) { - netdev_reset_tc(dev); - bp->tx_nr_rings_per_tc = bp->tx_nr_rings; - } else { - int i, off, count; + bp->cp_nr_rings = (min == 1) ? + max_t(int, bp->tx_nr_rings, bp->rx_nr_rings) : + bp->tx_nr_rings + bp->rx_nr_rings; - bp->tx_nr_rings = bp->tx_nr_rings_per_tc * tcs; - for (i = 0; i < tcs; i++) { - count = bp->tx_nr_rings_per_tc; - off = i * count; - netdev_set_tc_queue(dev, i, count, off); - } - } - } - bp->cp_nr_rings = total_vecs; - - for (i = 0; i < bp->cp_nr_rings; i++) { - char *attr; - - bp->irq_tbl[i].vector = msix_ent[i].vector; - if (bp->flags & BNXT_FLAG_SHARED_RINGS) - attr = "TxRx"; - else if (i < bp->rx_nr_rings) - attr = "rx"; - else - attr = "tx"; - - snprintf(bp->irq_tbl[i].name, len, - "%s-%s-%d", dev->name, attr, i); - bp->irq_tbl[i].handler = bnxt_msix; - } - rc = bnxt_set_real_num_queues(bp); - if (rc) - goto msix_setup_exit; } else { rc = -ENOMEM; goto msix_setup_exit; @@ -4838,52 +4877,54 @@ static int bnxt_setup_msix(struct bnxt *bp) return 0; msix_setup_exit: - netdev_err(bp->dev, "bnxt_setup_msix err: %x\n", rc); + netdev_err(bp->dev, "bnxt_init_msix err: %x\n", rc); + kfree(bp->irq_tbl); + bp->irq_tbl = NULL; pci_disable_msix(bp->pdev); kfree(msix_ent); return rc; } -static int bnxt_setup_inta(struct bnxt *bp) +static int bnxt_init_inta(struct bnxt *bp) { - int rc; - const int len = sizeof(bp->irq_tbl[0].name); - - if (netdev_get_num_tc(bp->dev)) - netdev_reset_tc(bp->dev); - bp->irq_tbl = kcalloc(1, sizeof(struct bnxt_irq), GFP_KERNEL); - if (!bp->irq_tbl) { - rc = -ENOMEM; - return rc; - } + if (!bp->irq_tbl) + return -ENOMEM; + + bp->total_irqs = 1; bp->rx_nr_rings = 1; bp->tx_nr_rings = 1; bp->cp_nr_rings = 1; bp->tx_nr_rings_per_tc = bp->tx_nr_rings; bp->flags |= BNXT_FLAG_SHARED_RINGS; bp->irq_tbl[0].vector = bp->pdev->irq; - snprintf(bp->irq_tbl[0].name, len, - "%s-%s-%d", bp->dev->name, "TxRx", 0); - bp->irq_tbl[0].handler = bnxt_inta; - rc = bnxt_set_real_num_queues(bp); - return rc; + return 0; } -static int bnxt_setup_int_mode(struct bnxt *bp) +static int bnxt_init_int_mode(struct bnxt *bp) { int rc = 0; if (bp->flags & BNXT_FLAG_MSIX_CAP) - rc = bnxt_setup_msix(bp); + rc = 
bnxt_init_msix(bp); if (!(bp->flags & BNXT_FLAG_USING_MSIX) && BNXT_PF(bp)) { /* fallback to INTA */ - rc = bnxt_setup_inta(bp); + rc = bnxt_init_inta(bp); } return rc; } +static void bnxt_clear_int_mode(struct bnxt *bp) +{ + if (bp->flags & BNXT_FLAG_USING_MSIX) + pci_disable_msix(bp->pdev); + + kfree(bp->irq_tbl); + bp->irq_tbl = NULL; + bp->flags &= ~BNXT_FLAG_USING_MSIX; +} + static void bnxt_free_irq(struct bnxt *bp) { struct bnxt_irq *irq; @@ -4902,10 +4943,6 @@ static void bnxt_free_irq(struct bnxt *bp) free_irq(irq->vector, bp->bnapi[i]); irq->requested = 0; } - if (bp->flags & BNXT_FLAG_USING_MSIX) - pci_disable_msix(bp->pdev); - kfree(bp->irq_tbl); - bp->irq_tbl = NULL; } static int bnxt_request_irq(struct bnxt *bp) @@ -6695,6 +6732,7 @@ static void bnxt_remove_one(struct pci_dev *pdev) cancel_work_sync(&bp->sp_task); bp->sp_event = 0; + bnxt_clear_int_mode(bp); bnxt_hwrm_func_drv_unrgtr(bp); bnxt_free_hwrm_resources(bp); bnxt_dcb_free(bp); @@ -6990,10 +7028,14 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) if (rc) goto init_err; - rc = register_netdev(dev); + rc = bnxt_init_int_mode(bp); if (rc) goto init_err; + rc = register_netdev(dev); + if (rc) + goto init_err_clr_int; + netdev_info(dev, "%s found at mem %lx, node addr %pM\n", board_info[ent->driver_data].name, (long)pci_resource_start(pdev, 0), dev->dev_addr); @@ -7002,6 +7044,9 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) return 0; +init_err_clr_int: + bnxt_clear_int_mode(bp); + init_err: pci_iounmap(pdev, bp->bar0); pci_release_regions(pdev); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index 8327d0d..1461355 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -1024,6 +1024,7 @@ struct bnxt { #define BNXT_STATE_FN_RST_DONE 2 struct bnxt_irq *irq_tbl; + int total_irqs; u8 mac_addr[ETH_ALEN]; #ifdef CONFIG_BNXT_DCB -- 1.8.3.1 -- To unsubscribe from this list: send the line "unsubscribe linux-rdma" in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply related [flat|nested] 10+ messages in thread
[parent not found: <1481044178-25193-3-git-send-email-michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>]
* Re: [PATCH net-next 2/7] bnxt_en: Enable MSIX early in bnxt_init_one(). [not found] ` <1481044178-25193-3-git-send-email-michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> @ 2016-12-06 23:55 ` kbuild test robot 0 siblings, 0 replies; 10+ messages in thread From: kbuild test robot @ 2016-12-06 23:55 UTC (permalink / raw) To: Michael Chan Cc: kbuild-all-JC7UmRfGjtg, davem-fT/PcQaiUtIeIZ0/mPfg9Q, netdev-u79uwXL29TY76Z2rM5mHXA, selvin.xavier-dY08KVG/lbpWk0Htik3J/w, somnath.kotur-dY08KVG/lbpWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA, linux-rdma-u79uwXL29TY76Z2rM5mHXA [-- Attachment #1: Type: text/plain, Size: 8301 bytes --] Hi Michael, [auto build test WARNING on net-next/master] url: https://github.com/0day-ci/linux/commits/Michael-Chan/bnxt_en-Add-interface-to-support-RDMA-driver/20161207-053721 config: i386-randconfig-h1-12070631 (attached as .config) compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901 reproduce: # save the attached .config to linux build tree make ARCH=i386 All warnings (new ones prefixed by >>): drivers/net/ethernet/broadcom/bnxt/bnxt.c: In function 'bnxt_get_max_func_irqs': >> drivers/net/ethernet/broadcom/bnxt/bnxt.c:4818:1: warning: control reaches end of non-void function [-Wreturn-type] } ^ Cyclomatic Complexity 5 include/linux/compiler.h:__read_once_size Cyclomatic Complexity 5 include/linux/compiler.h:__write_once_size Cyclomatic Complexity 2 arch/x86/include/asm/bitops.h:set_bit Cyclomatic Complexity 1 arch/x86/include/asm/bitops.h:__set_bit Cyclomatic Complexity 2 arch/x86/include/asm/bitops.h:clear_bit Cyclomatic Complexity 1 arch/x86/include/asm/bitops.h:__clear_bit Cyclomatic Complexity 1 arch/x86/include/asm/bitops.h:test_and_set_bit Cyclomatic Complexity 1 arch/x86/include/asm/bitops.h:test_and_clear_bit Cyclomatic Complexity 1 arch/x86/include/asm/bitops.h:constant_test_bit Cyclomatic Complexity 1 arch/x86/include/asm/bitops.h:variable_test_bit Cyclomatic Complexity 1 arch/x86/include/asm/bitops.h:fls Cyclomatic Complexity 1 include/linux/log2.h:__ilog2_u32 Cyclomatic Complexity 1 include/linux/list.h:INIT_LIST_HEAD Cyclomatic Complexity 1 include/asm-generic/getorder.h:__get_order Cyclomatic Complexity 1 include/linux/err.h:ERR_PTR Cyclomatic Complexity 1 arch/x86/include/asm/irqflags.h:arch_irqs_disabled_flags Cyclomatic Complexity 1 arch/x86/include/asm/processor.h:prefetch Cyclomatic Complexity 1 arch/x86/include/asm/atomic.h:atomic_read Cyclomatic Complexity 1 arch/x86/include/asm/atomic.h:atomic_set Cyclomatic Complexity 1 arch/x86/include/asm/atomic.h:atomic_inc Cyclomatic Complexity 1 arch/x86/include/asm/atomic.h:atomic_cmpxchg Cyclomatic Complexity 5 arch/x86/include/asm/preempt.h:__preempt_count_add Cyclomatic Complexity 1 include/linux/bottom_half.h:__local_bh_disable_ip Cyclomatic Complexity 1 include/linux/bottom_half.h:local_bh_disable Cyclomatic Complexity 1 include/linux/spinlock.h:spinlock_check Cyclomatic Complexity 1 include/linux/spinlock.h:spin_lock Cyclomatic Complexity 1 include/linux/spinlock.h:spin_lock_bh Cyclomatic Complexity 1 include/linux/spinlock.h:spin_unlock Cyclomatic Complexity 1 include/linux/spinlock.h:spin_unlock_bh Cyclomatic Complexity 1 include/linux/workqueue.h:__init_work Cyclomatic Complexity 1 arch/x86/include/asm/topology.h:numa_node_id Cyclomatic Complexity 1 include/linux/topology.h:numa_mem_id Cyclomatic Complexity 1 include/linux/gfp.h:gfp_zonelist Cyclomatic Complexity 1 include/linux/gfp.h:node_zonelist Cyclomatic Complexity 1 include/linux/kasan.h:kasan_kmalloc Cyclomatic Complexity 28 
include/linux/slab.h:kmalloc_index Cyclomatic Complexity 1 include/linux/slab.h:kmem_cache_alloc_trace Cyclomatic Complexity 1 include/linux/slab.h:kmalloc_order_trace Cyclomatic Complexity 67 include/linux/slab.h:kmalloc_large Cyclomatic Complexity 5 include/linux/slab.h:kmalloc Cyclomatic Complexity 1 include/linux/slab.h:kzalloc Cyclomatic Complexity 1 arch/x86/include/asm/io.h:readl Cyclomatic Complexity 1 arch/x86/include/asm/io.h:writel Cyclomatic Complexity 1 include/linux/device.h:dev_get_drvdata Cyclomatic Complexity 1 include/linux/device.h:dev_set_drvdata Cyclomatic Complexity 1 include/linux/pci.h:pci_is_bridge Cyclomatic Complexity 1 include/linux/mm.h:lowmem_page_address Cyclomatic Complexity 1 include/linux/mm.h:page_is_pfmemalloc Cyclomatic Complexity 1 include/linux/pci.h:pci_disable_msix Cyclomatic Complexity 1 include/linux/pci.h:pci_enable_msix_range Cyclomatic Complexity 1 include/linux/pci.h:pci_get_drvdata Cyclomatic Complexity 1 include/linux/pci.h:pci_set_drvdata Cyclomatic Complexity 1 include/linux/dma-debug.h:debug_dma_map_page Cyclomatic Complexity 1 include/linux/dma-debug.h:debug_dma_mapping_error Cyclomatic Complexity 1 include/linux/dma-debug.h:debug_dma_unmap_page Cyclomatic Complexity 1 include/linux/dma-debug.h:debug_dma_alloc_coherent Cyclomatic Complexity 1 include/linux/dma-debug.h:debug_dma_free_coherent Cyclomatic Complexity 1 include/linux/dma-debug.h:debug_dma_sync_single_for_cpu Cyclomatic Complexity 1 include/linux/dma-debug.h:debug_dma_sync_single_for_device Cyclomatic Complexity 1 include/linux/kmemcheck.h:kmemcheck_mark_initialized Cyclomatic Complexity 1 include/linux/dma-mapping.h:valid_dma_direction Cyclomatic Complexity 1 arch/x86/include/asm/dma-mapping.h:get_dma_ops Cyclomatic Complexity 2 include/linux/dma-mapping.h:dma_mapping_error Cyclomatic Complexity 1 include/linux/dynamic_queue_limits.h:dql_avail Cyclomatic Complexity 1 include/linux/skbuff.h:skb_frag_size Cyclomatic Complexity 1 include/linux/skbuff.h:skb_frag_size_set Cyclomatic Complexity 1 include/linux/skbuff.h:__skb_set_hash Cyclomatic Complexity 1 include/linux/skbuff.h:skb_set_hash Cyclomatic Complexity 1 include/linux/skbuff.h:skb_end_pointer Cyclomatic Complexity 1 include/linux/skbuff.h:skb_headlen Cyclomatic Complexity 1 include/linux/skbuff.h:skb_reserve Cyclomatic Complexity 1 include/linux/skbuff.h:skb_inner_transport_header Cyclomatic Complexity 1 include/linux/skbuff.h:skb_inner_network_header Cyclomatic Complexity 1 include/linux/skbuff.h:skb_transport_header Cyclomatic Complexity 1 include/linux/skbuff.h:skb_transport_offset Cyclomatic Complexity 1 include/linux/skbuff.h:skb_inner_network_header_len Cyclomatic Complexity 1 include/linux/skbuff.h:skb_inner_network_offset Cyclomatic Complexity 1 include/linux/skbuff.h:skb_frag_page Cyclomatic Complexity 1 include/linux/skbuff.h:__skb_frag_set_page Cyclomatic Complexity 1 include/linux/skbuff.h:skb_get_queue_mapping Cyclomatic Complexity 1 include/linux/skbuff.h:skb_record_rx_queue Cyclomatic Complexity 1 include/linux/skbuff.h:skb_is_gso Cyclomatic Complexity 1 include/linux/skbuff.h:skb_checksum_none_assert Cyclomatic Complexity 1 include/linux/netdevice.h:napi_disable_pending Cyclomatic Complexity 3 include/linux/netdevice.h:napi_schedule_prep Cyclomatic Complexity 1 include/linux/netdevice.h:netdev_get_num_tc Cyclomatic Complexity 1 include/linux/netdevice.h:netdev_get_tx_queue Cyclomatic Complexity 1 include/linux/netdevice.h:netdev_priv Cyclomatic Complexity 1 include/linux/netdevice.h:netif_tx_stop_queue 
Cyclomatic Complexity 1 include/linux/netdevice.h:netif_tx_queue_stopped Cyclomatic Complexity 1 include/linux/netdevice.h:netif_running Cyclomatic Complexity 1 include/linux/netdevice.h:netif_carrier_ok Cyclomatic Complexity 1 include/linux/netdevice.h:__netif_tx_lock Cyclomatic Complexity 1 include/linux/netdevice.h:__netif_tx_unlock Cyclomatic Complexity 1 include/linux/netdevice.h:netif_addr_lock_bh Cyclomatic Complexity 1 include/linux/netdevice.h:netif_addr_unlock_bh Cyclomatic Complexity 1 include/linux/etherdevice.h:is_zero_ether_addr Cyclomatic Complexity 1 include/linux/etherdevice.h:is_multicast_ether_addr vim +4818 drivers/net/ethernet/broadcom/bnxt/bnxt.c 4802 if (bp->flags & BNXT_FLAG_USING_MSIX) 4803 bnxt_setup_msix(bp); 4804 else 4805 bnxt_setup_inta(bp); 4806 4807 rc = bnxt_set_real_num_queues(bp); 4808 return rc; 4809 } 4810 4811 static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp) 4812 { 4813 if (BNXT_PF(bp)) 4814 return bp->pf.max_irqs; 4815 #if defined(CONFIG_BNXT_SRIOV) 4816 return bp->vf.max_irqs; 4817 #endif > 4818 } 4819 4820 void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max_irqs) 4821 { 4822 if (BNXT_PF(bp)) 4823 bp->pf.max_irqs = max_irqs; 4824 #if defined(CONFIG_BNXT_SRIOV) 4825 else 4826 bp->vf.max_irqs = max_irqs; --- 0-DAY kernel test infrastructure Open Source Technology Center https://lists.01.org/pipermail/kbuild-all Intel Corporation [-- Attachment #2: .config.gz --] [-- Type: application/gzip, Size: 28906 bytes --] ^ permalink raw reply [flat|nested] 10+ messages in thread
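The robot's complaint is valid when CONFIG_BNXT_SRIOV is not set: the
preprocessor then removes the only remaining return statement from
bnxt_get_max_func_irqs(). A sketch of one way to restructure it (not the
actual follow-up posted for this series):

static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp)
{
#if defined(CONFIG_BNXT_SRIOV)
        if (BNXT_VF(bp))
                return bp->vf.max_irqs;
#endif
        return bp->pf.max_irqs;
}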
* [PATCH net-next 5/7] bnxt_en: Reserve RDMA resources by default. [not found] ` <1481044178-25193-1-git-send-email-michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> 2016-12-06 17:09 ` [PATCH net-next 2/7] bnxt_en: Enable MSIX early in bnxt_init_one() Michael Chan @ 2016-12-06 17:09 ` Michael Chan 2016-12-06 17:09 ` [PATCH net-next 6/7] bnxt_en: Refactor the driver registration function with firmware Michael Chan 2 siblings, 0 replies; 10+ messages in thread From: Michael Chan @ 2016-12-06 17:09 UTC (permalink / raw) To: davem-fT/PcQaiUtIeIZ0/mPfg9Q Cc: netdev-u79uwXL29TY76Z2rM5mHXA, selvin.xavier-dY08KVG/lbpWk0Htik3J/w, somnath.kotur-dY08KVG/lbpWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA, linux-rdma-u79uwXL29TY76Z2rM5mHXA If the device supports RDMA, we'll setup network default rings so that there are enough minimum resources for RDMA, if possible. However, the user can still increase network rings to the max if he wants. The actual RDMA resources won't be reserved until the RDMA driver registers. Signed-off-by: Somnath Kotur <somnath.kotur-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> Signed-off-by: Michael Chan <michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 58 ++++++++++++++++++++++++++++++- drivers/net/ethernet/broadcom/bnxt/bnxt.h | 9 +++++ 2 files changed, 66 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 1f6be83..7218d65 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -4166,6 +4166,11 @@ static int bnxt_hwrm_func_qcaps(struct bnxt *bp) if (rc) goto hwrm_func_qcaps_exit; + if (resp->flags & cpu_to_le32(FUNC_QCAPS_RESP_FLAGS_ROCE_V1_SUPPORTED)) + bp->flags |= BNXT_FLAG_ROCEV1_CAP; + if (resp->flags & cpu_to_le32(FUNC_QCAPS_RESP_FLAGS_ROCE_V2_SUPPORTED)) + bp->flags |= BNXT_FLAG_ROCEV2_CAP; + bp->tx_push_thresh = 0; if (resp->flags & cpu_to_le32(FUNC_QCAPS_RESP_FLAGS_PUSH_MODE_SUPPORTED)) @@ -4808,6 +4813,24 @@ static int bnxt_setup_int_mode(struct bnxt *bp) return rc; } +unsigned int bnxt_get_max_func_stat_ctxs(struct bnxt *bp) +{ + if (BNXT_PF(bp)) + return bp->pf.max_stat_ctxs; +#if defined(CONFIG_BNXT_SRIOV) + return bp->vf.max_stat_ctxs; +#endif +} + +unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp) +{ + if (BNXT_PF(bp)) + return bp->pf.max_cp_rings; +#if defined(CONFIG_BNXT_SRIOV) + return bp->vf.max_cp_rings; +#endif +} + static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp) { if (BNXT_PF(bp)) @@ -6832,6 +6855,39 @@ int bnxt_get_max_rings(struct bnxt *bp, int *max_rx, int *max_tx, bool shared) return bnxt_trim_rings(bp, max_rx, max_tx, cp, shared); } +static int bnxt_get_dflt_rings(struct bnxt *bp, int *max_rx, int *max_tx, + bool shared) +{ + int rc; + + rc = bnxt_get_max_rings(bp, max_rx, max_tx, shared); + if (rc) + return rc; + + if (bp->flags & BNXT_FLAG_ROCE_CAP) { + int max_cp, max_stat, max_irq; + + /* Reserve minimum resources for RoCE */ + max_cp = bnxt_get_max_func_cp_rings(bp); + max_stat = bnxt_get_max_func_stat_ctxs(bp); + max_irq = bnxt_get_max_func_irqs(bp); + if (max_cp <= BNXT_MIN_ROCE_CP_RINGS || + max_irq <= BNXT_MIN_ROCE_CP_RINGS || + max_stat <= BNXT_MIN_ROCE_STAT_CTXS) + return 0; + + max_cp -= BNXT_MIN_ROCE_CP_RINGS; + max_irq -= BNXT_MIN_ROCE_CP_RINGS; + max_stat -= BNXT_MIN_ROCE_STAT_CTXS; + max_cp = min_t(int, max_cp, max_irq); + max_cp = min_t(int, max_cp, max_stat); + rc = bnxt_trim_rings(bp, max_rx, max_tx, max_cp, shared); + if (rc) + rc 
= 0; + } + return rc; +} + static int bnxt_set_dflt_rings(struct bnxt *bp) { int dflt_rings, max_rx_rings, max_tx_rings, rc; @@ -6840,7 +6896,7 @@ static int bnxt_set_dflt_rings(struct bnxt *bp) if (sh) bp->flags |= BNXT_FLAG_SHARED_RINGS; dflt_rings = netif_get_num_default_rss_queues(); - rc = bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, sh); + rc = bnxt_get_dflt_rings(bp, &max_rx_rings, &max_tx_rings, sh); if (rc) return rc; bp->rx_nr_rings = min_t(int, dflt_rings, max_rx_rings); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index 43a4b17..d796836 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -387,6 +387,9 @@ struct rx_tpa_end_cmp_ext { #define DB_KEY_TX_PUSH (0x4 << 28) #define DB_LONG_TX_PUSH (0x2 << 24) +#define BNXT_MIN_ROCE_CP_RINGS 2 +#define BNXT_MIN_ROCE_STAT_CTXS 1 + #define INVALID_HW_RING_ID ((u16)-1) /* The hardware supports certain page sizes. Use the supported page sizes @@ -953,6 +956,10 @@ struct bnxt { #define BNXT_FLAG_PORT_STATS 0x400 #define BNXT_FLAG_UDP_RSS_CAP 0x800 #define BNXT_FLAG_EEE_CAP 0x1000 + #define BNXT_FLAG_ROCEV1_CAP 0x8000 + #define BNXT_FLAG_ROCEV2_CAP 0x10000 + #define BNXT_FLAG_ROCE_CAP (BNXT_FLAG_ROCEV1_CAP | \ + BNXT_FLAG_ROCEV2_CAP) #define BNXT_FLAG_CHIP_NITRO_A0 0x1000000 #define BNXT_FLAG_ALL_CONFIG_FEATS (BNXT_FLAG_TPA | \ @@ -1234,6 +1241,8 @@ static inline void bnxt_disable_poll(struct bnxt_napi *bnapi) int hwrm_send_message(struct bnxt *, void *, u32, int); int hwrm_send_message_silent(struct bnxt *, void *, u32, int); int bnxt_hwrm_set_coal(struct bnxt *); +unsigned int bnxt_get_max_func_stat_ctxs(struct bnxt *bp); +unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp); void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max); void bnxt_tx_disable(struct bnxt *bp); void bnxt_tx_enable(struct bnxt *bp); -- 1.8.3.1 -- To unsubscribe from this list: send the line "unsubscribe linux-rdma" in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply related [flat|nested] 10+ messages in thread
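As a worked example with hypothetical capacities: if firmware reports
max_cp = 16 completion rings, max_stat = 20 stats contexts and max_irq = 16
MSI-X vectors on a RoCE-capable function, bnxt_get_dflt_rings() first deducts
the RoCE minimums (2 completion rings, 1 stats context), leaving 14/19/14,
and then trims the default L2 rings against min(14, 14, 19) = 14 completion
rings. The 2 rings, 2 vectors and 1 stats context held back remain available
for the RDMA driver to claim when it registers.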
* [PATCH net-next 6/7] bnxt_en: Refactor the driver registration function with firmware. [not found] ` <1481044178-25193-1-git-send-email-michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> 2016-12-06 17:09 ` [PATCH net-next 2/7] bnxt_en: Enable MSIX early in bnxt_init_one() Michael Chan 2016-12-06 17:09 ` [PATCH net-next 5/7] bnxt_en: Reserve RDMA resources by default Michael Chan @ 2016-12-06 17:09 ` Michael Chan 2 siblings, 0 replies; 10+ messages in thread From: Michael Chan @ 2016-12-06 17:09 UTC (permalink / raw) To: davem-fT/PcQaiUtIeIZ0/mPfg9Q Cc: netdev-u79uwXL29TY76Z2rM5mHXA, selvin.xavier-dY08KVG/lbpWk0Htik3J/w, somnath.kotur-dY08KVG/lbpWk0Htik3J/w, dledford-H+wXaHxf7aLQT0dZR+AlfA, linux-rdma-u79uwXL29TY76Z2rM5mHXA The driver register function with firmware consists of passing version information and registering for async events. To support the RDMA driver, the async events that we need to register may change. Separate the driver register function into 2 parts so that we can just update the async events for the RDMA driver. Signed-off-by: Michael Chan <michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> --- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 34 ++++++++++++++++++++++++++----- drivers/net/ethernet/broadcom/bnxt/bnxt.h | 2 ++ 2 files changed, 31 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index 7218d65..c26735ea 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -3117,27 +3117,46 @@ int hwrm_send_message_silent(struct bnxt *bp, void *msg, u32 msg_len, return rc; } -static int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp) +int bnxt_hwrm_func_rgtr_async_events(struct bnxt *bp, unsigned long *bmap, + int bmap_size) { struct hwrm_func_drv_rgtr_input req = {0}; - int i; DECLARE_BITMAP(async_events_bmap, 256); u32 *events = (u32 *)async_events_bmap; + int i; bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_DRV_RGTR, -1, -1); req.enables = - cpu_to_le32(FUNC_DRV_RGTR_REQ_ENABLES_OS_TYPE | - FUNC_DRV_RGTR_REQ_ENABLES_VER | - FUNC_DRV_RGTR_REQ_ENABLES_ASYNC_EVENT_FWD); + cpu_to_le32(FUNC_DRV_RGTR_REQ_ENABLES_ASYNC_EVENT_FWD); memset(async_events_bmap, 0, sizeof(async_events_bmap)); for (i = 0; i < ARRAY_SIZE(bnxt_async_events_arr); i++) __set_bit(bnxt_async_events_arr[i], async_events_bmap); + if (bmap && bmap_size) { + for (i = 0; i < bmap_size; i++) { + if (test_bit(i, bmap)) + __set_bit(i, async_events_bmap); + } + } + for (i = 0; i < 8; i++) req.async_event_fwd[i] |= cpu_to_le32(events[i]); + return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); +} + +static int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp) +{ + struct hwrm_func_drv_rgtr_input req = {0}; + + bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_DRV_RGTR, -1, -1); + + req.enables = + cpu_to_le32(FUNC_DRV_RGTR_REQ_ENABLES_OS_TYPE | + FUNC_DRV_RGTR_REQ_ENABLES_VER); + req.os_type = cpu_to_le16(FUNC_DRV_RGTR_REQ_OS_TYPE_LINUX); req.ver_maj = DRV_VER_MAJ; req.ver_min = DRV_VER_MIN; @@ -3146,6 +3165,7 @@ static int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp) if (BNXT_PF(bp)) { DECLARE_BITMAP(vf_req_snif_bmap, 256); u32 *data = (u32 *)vf_req_snif_bmap; + int i; memset(vf_req_snif_bmap, 0, sizeof(vf_req_snif_bmap)); for (i = 0; i < ARRAY_SIZE(bnxt_vf_req_snif); i++) @@ -7023,6 +7043,10 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) if (rc) goto init_err; + rc = bnxt_hwrm_func_rgtr_async_events(bp, NULL, 0); + if (rc) + goto init_err; + /* Get the MAX 
capabilities for this function */ rc = bnxt_hwrm_func_qcaps(bp); if (rc) { diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index d796836..eec2415 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -1240,6 +1240,8 @@ static inline void bnxt_disable_poll(struct bnxt_napi *bnapi) int _hwrm_send_message(struct bnxt *, void *, u32, int); int hwrm_send_message(struct bnxt *, void *, u32, int); int hwrm_send_message_silent(struct bnxt *, void *, u32, int); +int bnxt_hwrm_func_rgtr_async_events(struct bnxt *bp, unsigned long *bmap, + int bmap_size); int bnxt_hwrm_set_coal(struct bnxt *); unsigned int bnxt_get_max_func_stat_ctxs(struct bnxt *bp); unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp); -- 1.8.3.1 -- To unsubscribe from this list: send the line "unsubscribe linux-rdma" in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo info at http://vger.kernel.org/majordomo-info.html ^ permalink raw reply related [flat|nested] 10+ messages in thread
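A minimal sketch of how the split is meant to be used once a ULP supplies
extra events; roce_event_id is a hypothetical event id, and the merge with
the default bnxt_async_events_arr[] happens inside
bnxt_hwrm_func_rgtr_async_events():

        DECLARE_BITMAP(events, 256);

        bitmap_zero(events, 256);
        /* Ask firmware to also forward the ULP's async event. */
        __set_bit(roce_event_id, events);
        bnxt_hwrm_func_rgtr_async_events(bp, events, roce_event_id + 1);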
* [PATCH net-next 7/7] bnxt_en: Add interface to support RDMA driver. 2016-12-06 17:09 [PATCH net-next 0/7] bnxt_en: Add interface to support RDMA driver Michael Chan ` (3 preceding siblings ...) [not found] ` <1481044178-25193-1-git-send-email-michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> @ 2016-12-06 17:09 ` Michael Chan 2016-12-06 22:33 ` kbuild test robot 4 siblings, 1 reply; 10+ messages in thread From: Michael Chan @ 2016-12-06 17:09 UTC (permalink / raw) To: davem; +Cc: netdev, selvin.xavier, somnath.kotur, dledford, linux-rdma Since the network driver and RDMA driver operate on the same PCI function, we need to create an interface to allow the RDMA driver to share resources with the network driver. 1. Create a new bnxt_en_dev struct which will be returned by bnxt_ulp_probe() upon success. After that, all calls from the RDMA driver to bnxt_en will pass a pointer to this struct. 2. This struct contains additional function pointers to register, request msix, send fw messages, register for async events. 3. If the RDMA driver wants to enable RDMA on the function, it needs to call the function pointer bnxt_register_device(). A ulp_ops structure is passed for RCU protected upcalls from bnxt_en to the RDMA driver. 4. The RDMA driver can call firmware APIs using the bnxt_send_fw_msg() function pointer. 5. 1 stats context is reserved when the RDMA driver registers. MSIX and completion rings are reserved when the RDMA driver calls bnxt_request_msix() function pointer. 6. When the RDMA driver calls bnxt_unregister_device(), all RDMA resources will be cleaned up. Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> --- drivers/net/ethernet/broadcom/bnxt/Makefile | 2 +- drivers/net/ethernet/broadcom/bnxt/bnxt.c | 41 ++- drivers/net/ethernet/broadcom/bnxt/bnxt.h | 6 + drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c | 346 ++++++++++++++++++++++++++ drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h | 93 +++++++ 5 files changed, 483 insertions(+), 5 deletions(-) create mode 100644 drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c create mode 100644 drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h diff --git a/drivers/net/ethernet/broadcom/bnxt/Makefile b/drivers/net/ethernet/broadcom/bnxt/Makefile index b233a86..6082ed1 100644 --- a/drivers/net/ethernet/broadcom/bnxt/Makefile +++ b/drivers/net/ethernet/broadcom/bnxt/Makefile @@ -1,3 +1,3 @@ obj-$(CONFIG_BNXT) += bnxt_en.o -bnxt_en-y := bnxt.o bnxt_sriov.o bnxt_ethtool.o bnxt_dcb.o +bnxt_en-y := bnxt.o bnxt_sriov.o bnxt_ethtool.o bnxt_dcb.o bnxt_ulp.o diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index c26735ea..d334da3 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -52,6 +52,7 @@ #include "bnxt_hsi.h" #include "bnxt.h" +#include "bnxt_ulp.h" #include "bnxt_sriov.h" #include "bnxt_ethtool.h" #include "bnxt_dcb.h" @@ -1528,12 +1529,11 @@ static int bnxt_async_event_process(struct bnxt *bp, set_bit(BNXT_RESET_TASK_SILENT_SP_EVENT, &bp->sp_event); break; default: - netdev_err(bp->dev, "unhandled ASYNC event (id 0x%x)\n", - event_id); goto async_event_process_exit; } schedule_work(&bp->sp_task); async_event_process_exit: + bnxt_ulp_async_events(bp, cmpl); return 0; } @@ -3547,7 +3547,7 @@ static int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp, u16 vnic_id, u16 ctx_idx) return rc; } -static int bnxt_hwrm_vnic_cfg(struct bnxt *bp, u16 vnic_id) +int bnxt_hwrm_vnic_cfg(struct bnxt *bp, u16 
vnic_id) { unsigned int ring = 0, grp_idx; struct bnxt_vnic_info *vnic = &bp->vnic_info[vnic_id]; @@ -3595,6 +3595,9 @@ static int bnxt_hwrm_vnic_cfg(struct bnxt *bp, u16 vnic_id) #endif if ((bp->flags & BNXT_FLAG_STRIP_VLAN) || def_vlan) req.flags |= cpu_to_le32(VNIC_CFG_REQ_FLAGS_VLAN_STRIP_MODE); + if (!vnic_id && bnxt_ulp_registered(bp->edev, BNXT_ROCE_ULP)) + req.flags |= + cpu_to_le32(VNIC_CFG_REQ_FLAGS_ROCE_DUAL_VNIC_MODE); return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); } @@ -4842,6 +4845,16 @@ unsigned int bnxt_get_max_func_stat_ctxs(struct bnxt *bp) #endif } +void bnxt_set_max_func_stat_ctxs(struct bnxt *bp, unsigned int max) +{ + if (BNXT_PF(bp)) + bp->pf.max_stat_ctxs = max; +#if defined(CONFIG_BNXT_SRIOV) + else + bp->vf.max_stat_ctxs = max; +#endif +} + unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp) { if (BNXT_PF(bp)) @@ -4851,6 +4864,16 @@ unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp) #endif } +void bnxt_set_max_func_cp_rings(struct bnxt *bp, unsigned int max) +{ + if (BNXT_PF(bp)) + bp->pf.max_cp_rings = max; +#if defined(CONFIG_BNXT_SRIOV) + else + bp->vf.max_cp_rings = max; +#endif +} + static unsigned int bnxt_get_max_func_irqs(struct bnxt *bp) { if (BNXT_PF(bp)) @@ -6767,6 +6790,8 @@ static void bnxt_remove_one(struct pci_dev *pdev) pci_iounmap(pdev, bp->bar2); pci_iounmap(pdev, bp->bar1); pci_iounmap(pdev, bp->bar0); + kfree(bp->edev); + bp->edev = NULL; free_netdev(dev); pci_release_regions(pdev); @@ -6936,6 +6961,7 @@ void bnxt_restore_pf_fw_resources(struct bnxt *bp) { ASSERT_RTNL(); bnxt_hwrm_func_qcaps(bp); + bnxt_subtract_ulp_resources(bp, BNXT_ROCE_ULP); } static void bnxt_parse_log_pcie_link(struct bnxt *bp) @@ -7047,6 +7073,8 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) if (rc) goto init_err; + bp->ulp_probe = bnxt_ulp_probe; + /* Get the MAX capabilities for this function */ rc = bnxt_hwrm_func_qcaps(bp); if (rc) { @@ -7144,12 +7172,15 @@ static pci_ers_result_t bnxt_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state) { struct net_device *netdev = pci_get_drvdata(pdev); + struct bnxt *bp = netdev_priv(netdev); netdev_info(netdev, "PCI I/O error detected\n"); rtnl_lock(); netif_device_detach(netdev); + bnxt_ulp_stop(bp); + if (state == pci_channel_io_perm_failure) { rtnl_unlock(); return PCI_ERS_RESULT_DISCONNECT; @@ -7195,8 +7226,10 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev) if (!err && netif_running(netdev)) err = bnxt_open(netdev); - if (!err) + if (!err) { result = PCI_ERS_RESULT_RECOVERED; + bnxt_ulp_start(bp); + } } if (result != PCI_ERS_RESULT_RECOVERED && netif_running(netdev)) diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h index eec2415..16defe9 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h @@ -972,6 +972,9 @@ struct bnxt { #define BNXT_SINGLE_PF(bp) (BNXT_PF(bp) && !BNXT_NPAR(bp)) #define BNXT_CHIP_TYPE_NITRO_A0(bp) ((bp)->flags & BNXT_FLAG_CHIP_NITRO_A0) + struct bnxt_en_dev *edev; + struct bnxt_en_dev * (*ulp_probe)(struct net_device *); + struct bnxt_napi **bnapi; struct bnxt_rx_ring_info *rx_ring; @@ -1242,9 +1245,12 @@ static inline void bnxt_disable_poll(struct bnxt_napi *bnapi) int hwrm_send_message_silent(struct bnxt *, void *, u32, int); int bnxt_hwrm_func_rgtr_async_events(struct bnxt *bp, unsigned long *bmap, int bmap_size); +int bnxt_hwrm_vnic_cfg(struct bnxt *bp, u16 vnic_id); int bnxt_hwrm_set_coal(struct bnxt *); 
unsigned int bnxt_get_max_func_stat_ctxs(struct bnxt *bp); +void bnxt_set_max_func_stat_ctxs(struct bnxt *bp, unsigned int max); unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp); +void bnxt_set_max_func_cp_rings(struct bnxt *bp, unsigned int max); void bnxt_set_max_func_irqs(struct bnxt *bp, unsigned int max); void bnxt_tx_disable(struct bnxt *bp); void bnxt_tx_enable(struct bnxt *bp); diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c new file mode 100644 index 0000000..781948f --- /dev/null +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c @@ -0,0 +1,346 @@ +/* Broadcom NetXtreme-C/E network driver. + * + * Copyright (c) 2016 Broadcom Limited + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation. + */ + +#include <linux/module.h> + +#include <linux/kernel.h> +#include <linux/errno.h> +#include <linux/interrupt.h> +#include <linux/pci.h> +#include <linux/netdevice.h> +#include <linux/rtnetlink.h> +#include <linux/bitops.h> +#include <linux/irq.h> +#include <asm/byteorder.h> +#include <linux/bitmap.h> + +#include "bnxt_hsi.h" +#include "bnxt.h" +#include "bnxt_ulp.h" + +static int bnxt_register_dev(struct bnxt_en_dev *edev, int ulp_id, + struct bnxt_ulp_ops *ulp_ops, void *handle) +{ + struct net_device *dev = edev->net; + struct bnxt *bp = netdev_priv(dev); + struct bnxt_ulp *ulp; + + ASSERT_RTNL(); + if (ulp_id >= BNXT_MAX_ULP) + return -EINVAL; + + ulp = &edev->ulp_tbl[ulp_id]; + if (rcu_access_pointer(ulp->ulp_ops)) { + netdev_err(bp->dev, "ulp id %d already registered\n", ulp_id); + return -EBUSY; + } + if (ulp_id == BNXT_ROCE_ULP) { + unsigned int max_stat_ctxs; + + max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp); + if (max_stat_ctxs <= BNXT_MIN_ROCE_STAT_CTXS || + bp->num_stat_ctxs == max_stat_ctxs) + return -ENOMEM; + bnxt_set_max_func_stat_ctxs(bp, max_stat_ctxs - + BNXT_MIN_ROCE_STAT_CTXS); + } + + atomic_set(&ulp->ref_count, 0); + ulp->handle = handle; + rcu_assign_pointer(ulp->ulp_ops, ulp_ops); + + if (ulp_id == BNXT_ROCE_ULP) { + if (test_bit(BNXT_STATE_OPEN, &bp->state)) + bnxt_hwrm_vnic_cfg(bp, 0); + } + + return 0; +} + +static int bnxt_unregister_dev(struct bnxt_en_dev *edev, int ulp_id) +{ + struct net_device *dev = edev->net; + struct bnxt *bp = netdev_priv(dev); + struct bnxt_ulp *ulp; + int i; + + ASSERT_RTNL(); + if (ulp_id >= BNXT_MAX_ULP) + return -EINVAL; + + ulp = &edev->ulp_tbl[ulp_id]; + if (!rcu_access_pointer(ulp->ulp_ops)) { + netdev_err(bp->dev, "ulp id %d not registered\n", ulp_id); + return -EINVAL; + } + if (ulp_id == BNXT_ROCE_ULP) { + unsigned int max_stat_ctxs; + + max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp); + bnxt_set_max_func_stat_ctxs(bp, max_stat_ctxs + 1); + } + if (ulp->max_async_event_id) + bnxt_hwrm_func_rgtr_async_events(bp, NULL, 0); + + RCU_INIT_POINTER(ulp->ulp_ops, NULL); + synchronize_rcu(); + ulp->max_async_event_id = 0; + ulp->async_events_bmap = NULL; + while (atomic_read(&ulp->ref_count) != 0 && i < 10) { + msleep(100); + i++; + } + return 0; +} + +static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, int ulp_id, + struct bnxt_msix_entry *ent, int num_msix) +{ + struct net_device *dev = edev->net; + struct bnxt *bp = netdev_priv(dev); + int max_idx, max_cp_rings; + int avail_msix, i, idx; + + ASSERT_RTNL(); + if (ulp_id != BNXT_ROCE_ULP) + return -EINVAL; + + if (!(bp->flags & BNXT_FLAG_USING_MSIX)) + return -ENODEV; + + max_cp_rings = 
bnxt_get_max_func_cp_rings(bp);
+	max_idx = min_t(int, bp->total_irqs, max_cp_rings);
+	avail_msix = max_idx - bp->cp_nr_rings;
+	if (!avail_msix)
+		return -ENOMEM;
+	if (avail_msix > num_msix)
+		avail_msix = num_msix;
+
+	idx = max_idx - avail_msix;
+	for (i = 0; i < avail_msix; i++) {
+		ent[i].vector = bp->irq_tbl[idx + i].vector;
+		ent[i].ring_idx = idx + i;
+		ent[i].db_offset = (idx + i) * 0x80;
+	}
+	bnxt_set_max_func_irqs(bp, max_idx - avail_msix);
+	bnxt_set_max_func_cp_rings(bp, max_cp_rings - avail_msix);
+	edev->ulp_tbl[ulp_id].msix_requested = avail_msix;
+	return avail_msix;
+}
+
+static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, int ulp_id)
+{
+	struct net_device *dev = edev->net;
+	struct bnxt *bp = netdev_priv(dev);
+	int max_cp_rings, msix_requested;
+
+	ASSERT_RTNL();
+	if (ulp_id != BNXT_ROCE_ULP)
+		return -EINVAL;
+
+	max_cp_rings = bnxt_get_max_func_cp_rings(bp);
+	msix_requested = edev->ulp_tbl[ulp_id].msix_requested;
+	bnxt_set_max_func_cp_rings(bp, max_cp_rings + msix_requested);
+	edev->ulp_tbl[ulp_id].msix_requested = 0;
+	bnxt_set_max_func_irqs(bp, bp->total_irqs);
+	return 0;
+}
+
+void bnxt_subtract_ulp_resources(struct bnxt *bp, int ulp_id)
+{
+	ASSERT_RTNL();
+	if (bnxt_ulp_registered(bp->edev, ulp_id)) {
+		struct bnxt_en_dev *edev = bp->edev;
+		unsigned int msix_req, max;
+
+		msix_req = edev->ulp_tbl[ulp_id].msix_requested;
+		max = bnxt_get_max_func_cp_rings(bp);
+		bnxt_set_max_func_cp_rings(bp, max - msix_req);
+		max = bnxt_get_max_func_stat_ctxs(bp);
+		bnxt_set_max_func_stat_ctxs(bp, max - 1);
+	}
+}
+
+static int bnxt_send_msg(struct bnxt_en_dev *edev, int ulp_id,
+			 struct bnxt_fw_msg *fw_msg)
+{
+	struct net_device *dev = edev->net;
+	struct bnxt *bp = netdev_priv(dev);
+	struct input *req;
+	int rc;
+
+	mutex_lock(&bp->hwrm_cmd_lock);
+	req = fw_msg->msg;
+	req->resp_addr = cpu_to_le64(bp->hwrm_cmd_resp_dma_addr);
+	rc = _hwrm_send_message(bp, fw_msg->msg, fw_msg->msg_len,
+				fw_msg->timeout);
+	if (!rc) {
+		struct output *resp = bp->hwrm_cmd_resp_addr;
+		u32 len = le16_to_cpu(resp->resp_len);
+
+		if (fw_msg->resp_max_len < len)
+			len = fw_msg->resp_max_len;
+
+		memcpy(fw_msg->resp, resp, len);
+	}
+	mutex_unlock(&bp->hwrm_cmd_lock);
+	return rc;
+}
+
+static void bnxt_ulp_get(struct bnxt_ulp *ulp)
+{
+	atomic_inc(&ulp->ref_count);
+}
+
+static void bnxt_ulp_put(struct bnxt_ulp *ulp)
+{
+	atomic_dec(&ulp->ref_count);
+}
+
+void bnxt_ulp_stop(struct bnxt *bp)
+{
+	struct bnxt_en_dev *edev = bp->edev;
+	struct bnxt_ulp_ops *ops;
+	int i;
+
+	if (!edev)
+		return;
+
+	for (i = 0; i < BNXT_MAX_ULP; i++) {
+		struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
+
+		rtnl_dereference(ulp->ulp_ops);
+		if (!ops || !ops->ulp_stop)
+			continue;
+		ops->ulp_stop(ulp->handle);
+	}
+}
+
+void bnxt_ulp_start(struct bnxt *bp)
+{
+	struct bnxt_en_dev *edev = bp->edev;
+	struct bnxt_ulp_ops *ops;
+	int i;
+
+	if (!edev)
+		return;
+
+	for (i = 0; i < BNXT_MAX_ULP; i++) {
+		struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
+
+		ops = rtnl_dereference(ulp->ulp_ops);
+		if (!ops || !ops->ulp_start)
+			continue;
+		ops->ulp_start(ulp->handle);
+	}
+}
+
+void bnxt_ulp_sriov_cfg(struct bnxt *bp, int num_vfs)
+{
+	struct bnxt_en_dev *edev = bp->edev;
+	struct bnxt_ulp_ops *ops;
+	int i;
+
+	if (!edev)
+		return;
+
+	for (i = 0; i < BNXT_MAX_ULP; i++) {
+		struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
+
+		rcu_read_lock();
+		ops = rcu_dereference(ulp->ulp_ops);
+		if (!ops || !ops->ulp_sriov_config) {
+			rcu_read_unlock();
+			continue;
+		}
+		bnxt_ulp_get(ulp);
+		rcu_read_unlock();
+		ops->ulp_sriov_config(ulp->handle, num_vfs);
+		bnxt_ulp_put(ulp);
+	}
+}
+
+void bnxt_ulp_async_events(struct bnxt *bp, struct hwrm_async_event_cmpl *cmpl)
+{
+	u16 event_id = le16_to_cpu(cmpl->event_id);
+	struct bnxt_en_dev *edev = bp->edev;
+	struct bnxt_ulp_ops *ops;
+	int i;
+
+	if (!edev)
+		return;
+
+	rcu_read_lock();
+	for (i = 0; i < BNXT_MAX_ULP; i++) {
+		struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
+
+		ops = rcu_dereference(ulp->ulp_ops);
+		if (!ops || !ops->ulp_async_notifier)
+			continue;
+		if (!ulp->async_events_bmap ||
+		    event_id > ulp->max_async_event_id)
+			continue;
+
+		/* Read max_async_event_id first before testing the bitmap. */
+		smp_rmb();
+		if (test_bit(event_id, ulp->async_events_bmap))
+			ops->ulp_async_notifier(ulp->handle, cmpl);
+	}
+	rcu_read_unlock();
+}
+
+static int bnxt_register_async_events(struct bnxt_en_dev *edev, int ulp_id,
+				      unsigned long *events_bmap, u16 max_id)
+{
+	struct net_device *dev = edev->net;
+	struct bnxt *bp = netdev_priv(dev);
+	struct bnxt_ulp *ulp;
+
+	if (ulp_id >= BNXT_MAX_ULP)
+		return -EINVAL;
+
+	ulp = &edev->ulp_tbl[ulp_id];
+	ulp->async_events_bmap = events_bmap;
+	/* Make sure bnxt_ulp_async_events() sees this order */
+	smp_wmb();
+	ulp->max_async_event_id = max_id;
+	bnxt_hwrm_func_rgtr_async_events(bp, events_bmap, max_id + 1);
+	return 0;
+}
+
+static const struct bnxt_en_ops bnxt_en_ops_tbl = {
+	.bnxt_register_device = bnxt_register_dev,
+	.bnxt_unregister_device = bnxt_unregister_dev,
+	.bnxt_request_msix = bnxt_req_msix_vecs,
+	.bnxt_free_msix = bnxt_free_msix_vecs,
+	.bnxt_send_fw_msg = bnxt_send_msg,
+	.bnxt_register_fw_async_events = bnxt_register_async_events,
+};
+
+struct bnxt_en_dev *bnxt_ulp_probe(struct net_device *dev)
+{
+	struct bnxt *bp = netdev_priv(dev);
+	struct bnxt_en_dev *edev;
+
+	edev = bp->edev;
+	if (!edev) {
+		edev = kzalloc(sizeof(*edev), GFP_KERNEL);
+		if (!edev)
+			return ERR_PTR(-ENOMEM);
+		edev->en_ops = &bnxt_en_ops_tbl;
+		if (bp->flags & BNXT_FLAG_ROCEV1_CAP)
+			edev->flags |= BNXT_EN_FLAG_ROCEV1_CAP;
+		if (bp->flags & BNXT_FLAG_ROCEV2_CAP)
+			edev->flags |= BNXT_EN_FLAG_ROCEV2_CAP;
+		edev->net = dev;
+		edev->pdev = bp->pdev;
+		bp->edev = edev;
+	}
+	return bp->edev;
+}
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
new file mode 100644
index 0000000..74f816e
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.h
@@ -0,0 +1,93 @@
+/* Broadcom NetXtreme-C/E network driver.
+ *
+ * Copyright (c) 2016 Broadcom Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation.
+ */
+
+#ifndef BNXT_ULP_H
+#define BNXT_ULP_H
+
+#define BNXT_ROCE_ULP	0
+#define BNXT_OTHER_ULP	1
+#define BNXT_MAX_ULP	2
+
+#define BNXT_MIN_ROCE_CP_RINGS	2
+#define BNXT_MIN_ROCE_STAT_CTXS	1
+
+struct hwrm_async_event_cmpl;
+struct bnxt;
+
+struct bnxt_ulp_ops {
+	/* async_notifier() cannot sleep (in BH context) */
+	void (*ulp_async_notifier)(void *, struct hwrm_async_event_cmpl *);
+	void (*ulp_stop)(void *);
+	void (*ulp_start)(void *);
+	void (*ulp_sriov_config)(void *, int);
+};
+
+struct bnxt_msix_entry {
+	u32	vector;
+	u32	ring_idx;
+	u32	db_offset;
+};
+
+struct bnxt_fw_msg {
+	void	*msg;
+	int	msg_len;
+	void	*resp;
+	int	resp_max_len;
+	int	timeout;
+};
+
+struct bnxt_ulp {
+	void		*handle;
+	struct bnxt_ulp_ops __rcu *ulp_ops;
+	unsigned long	*async_events_bmap;
+	u16		max_async_event_id;
+	u16		msix_requested;
+	atomic_t	ref_count;
+};
+
+struct bnxt_en_dev {
+	struct net_device *net;
+	struct pci_dev *pdev;
+	u32 flags;
+	#define BNXT_EN_FLAG_ROCEV1_CAP		0x1
+	#define BNXT_EN_FLAG_ROCEV2_CAP		0x2
+	#define BNXT_EN_FLAG_ROCE_CAP		(BNXT_EN_FLAG_ROCEV1_CAP | \
+						 BNXT_EN_FLAG_ROCEV2_CAP)
+	const struct bnxt_en_ops	*en_ops;
+	struct bnxt_ulp			ulp_tbl[BNXT_MAX_ULP];
+};
+
+struct bnxt_en_ops {
+	int (*bnxt_register_device)(struct bnxt_en_dev *, int,
+				    struct bnxt_ulp_ops *, void *);
+	int (*bnxt_unregister_device)(struct bnxt_en_dev *, int);
+	int (*bnxt_request_msix)(struct bnxt_en_dev *, int,
+				 struct bnxt_msix_entry *, int);
+	int (*bnxt_free_msix)(struct bnxt_en_dev *, int);
+	int (*bnxt_send_fw_msg)(struct bnxt_en_dev *, int,
+				struct bnxt_fw_msg *);
+	int (*bnxt_register_fw_async_events)(struct bnxt_en_dev *, int,
+					     unsigned long *, u16);
+};
+
+static inline bool bnxt_ulp_registered(struct bnxt_en_dev *edev, int ulp_id)
+{
+	if (edev && rcu_access_pointer(edev->ulp_tbl[ulp_id].ulp_ops))
+		return true;
+	return false;
+}
+
+void bnxt_subtract_ulp_resources(struct bnxt *bp, int ulp_id);
+void bnxt_ulp_stop(struct bnxt *bp);
+void bnxt_ulp_start(struct bnxt *bp);
+void bnxt_ulp_sriov_cfg(struct bnxt *bp, int num_vfs);
+void bnxt_ulp_async_events(struct bnxt *bp, struct hwrm_async_event_cmpl *cmpl);
+struct bnxt_en_dev *bnxt_ulp_probe(struct net_device *dev);
+
+#endif
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 10+ messages in thread
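For orientation, a minimal ULP-side sketch of this interface follows, assuming an RDMA driver that has already located the bnxt_en netdev and holds rtnl_lock(). The my_ulp_* names and the attach routine are illustrative only; bnxt_ulp_probe(), the en_ops callbacks and the BNXT_* constants are the ones declared in bnxt_ulp.h above.

/* ULP-side sketch (hypothetical consumer, not part of this series).
 * Assumes the caller holds rtnl_lock(), since the MSI-X request op
 * asserts RTNL, and that "netdev" belongs to a bnxt_en port.
 */
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/netdevice.h>
#include "bnxt_ulp.h"

static void my_ulp_stop(void *handle)
{
	/* Quiesce the RDMA device before bnxt_en closes or resets the port. */
}

static void my_ulp_start(void *handle)
{
	/* Resume the RDMA device once bnxt_en is back up. */
}

static struct bnxt_ulp_ops my_ulp_ops = {
	.ulp_stop  = my_ulp_stop,
	.ulp_start = my_ulp_start,
};

static int my_ulp_attach(struct net_device *netdev, void *handle)
{
	struct bnxt_msix_entry ent[BNXT_MIN_ROCE_CP_RINGS];
	struct bnxt_en_dev *edev;
	int rc, vecs;

	edev = bnxt_ulp_probe(netdev);		/* exported by bnxt_en */
	if (IS_ERR(edev))
		return PTR_ERR(edev);
	if (!(edev->flags & BNXT_EN_FLAG_ROCE_CAP))
		return -ENODEV;

	rc = edev->en_ops->bnxt_register_device(edev, BNXT_ROCE_ULP,
						&my_ulp_ops, handle);
	if (rc)
		return rc;

	/* Returns the number of vectors granted, which may be fewer than
	 * requested, or a negative error.
	 */
	vecs = edev->en_ops->bnxt_request_msix(edev, BNXT_ROCE_ULP, ent,
					       BNXT_MIN_ROCE_CP_RINGS);
	if (vecs < 0) {
		edev->en_ops->bnxt_unregister_device(edev, BNXT_ROCE_ULP);
		return vecs;
	}
	/* ent[i].vector, ent[i].ring_idx and ent[i].db_offset are now
	 * usable for the ULP's completion rings.
	 */
	return 0;
}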
* Re: [PATCH net-next 7/7] bnxt_en: Add interface to support RDMA driver.
  2016-12-06 17:09 ` [PATCH net-next 7/7] bnxt_en: Add interface to support RDMA driver Michael Chan
@ 2016-12-06 22:33   ` kbuild test robot
  0 siblings, 0 replies; 10+ messages in thread
From: kbuild test robot @ 2016-12-06 22:33 UTC (permalink / raw)
  To: Michael Chan
  Cc: kbuild-all, davem, netdev, selvin.xavier, somnath.kotur,
	dledford, linux-rdma

[-- Attachment #1: Type: text/plain, Size: 5469 bytes --]

Hi Michael,

[auto build test WARNING on net-next/master]

url:    https://github.com/0day-ci/linux/commits/Michael-Chan/bnxt_en-Add-interface-to-support-RDMA-driver/20161207-053721
config: i386-allmodconfig (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386

Note: it may well be a FALSE warning. FWIW you are at least aware of it now.
http://gcc.gnu.org/wiki/Better_Uninitialized_Warnings

All warnings (new ones prefixed by >>):

   drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c: In function 'bnxt_unregister_dev':
>> drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c:97:4: warning: 'i' may be used uninitialized in this function [-Wmaybe-uninitialized]
        i++;
        ~^~
   drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c: In function 'bnxt_ulp_stop':
>> drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c:219:6: warning: 'ops' may be used uninitialized in this function [-Wmaybe-uninitialized]
      if (!ops || !ops->ulp_stop)
          ^

vim +/i +97 drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c

    91		RCU_INIT_POINTER(ulp->ulp_ops, NULL);
    92		synchronize_rcu();
    93		ulp->max_async_event_id = 0;
    94		ulp->async_events_bmap = NULL;
    95		while (atomic_read(&ulp->ref_count) != 0 && i < 10) {
    96			msleep(100);
  > 97			i++;
    98		}
    99		return 0;
   100	}
   101	
   102	static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, int ulp_id,
   103				      struct bnxt_msix_entry *ent, int num_msix)
   104	{
   105		struct net_device *dev = edev->net;
   106		struct bnxt *bp = netdev_priv(dev);
   107		int max_idx, max_cp_rings;
   108		int avail_msix, i, idx;
   109	
   110		ASSERT_RTNL();
   111		if (ulp_id != BNXT_ROCE_ULP)
   112			return -EINVAL;
   113	
   114		if (!(bp->flags & BNXT_FLAG_USING_MSIX))
   115			return -ENODEV;
   116	
   117		max_cp_rings = bnxt_get_max_func_cp_rings(bp);
   118		max_idx = min_t(int, bp->total_irqs, max_cp_rings);
   119		avail_msix = max_idx - bp->cp_nr_rings;
   120		if (!avail_msix)
   121			return -ENOMEM;
   122		if (avail_msix > num_msix)
   123			avail_msix = num_msix;
   124	
   125		idx = max_idx - avail_msix;
   126		for (i = 0; i < avail_msix; i++) {
   127			ent[i].vector = bp->irq_tbl[idx + i].vector;
   128			ent[i].ring_idx = idx + i;
   129			ent[i].db_offset = (idx + i) * 0x80;
   130		}
   131		bnxt_set_max_func_irqs(bp, max_idx - avail_msix);
   132		bnxt_set_max_func_cp_rings(bp, max_cp_rings - avail_msix);
   133		edev->ulp_tbl[ulp_id].msix_requested = avail_msix;
   134		return avail_msix;
   135	}
   136	
   137	static int bnxt_free_msix_vecs(struct bnxt_en_dev *edev, int ulp_id)
   138	{
   139		struct net_device *dev = edev->net;
   140		struct bnxt *bp = netdev_priv(dev);
   141		int max_cp_rings, msix_requested;
   142	
   143		ASSERT_RTNL();
   144		if (ulp_id != BNXT_ROCE_ULP)
   145			return -EINVAL;
   146	
   147		max_cp_rings = bnxt_get_max_func_cp_rings(bp);
   148		msix_requested = edev->ulp_tbl[ulp_id].msix_requested;
   149		bnxt_set_max_func_cp_rings(bp, max_cp_rings + msix_requested);
   150		edev->ulp_tbl[ulp_id].msix_requested = 0;
   151		bnxt_set_max_func_irqs(bp, bp->total_irqs);
   152		return 0;
   153	}
   154	
   155	void bnxt_subtract_ulp_resources(struct bnxt *bp, int ulp_id)
   156	{
   157		ASSERT_RTNL();
   158		if (bnxt_ulp_registered(bp->edev, ulp_id)) {
   159			struct bnxt_en_dev *edev = bp->edev;
   160			unsigned int msix_req, max;
   161	
   162			msix_req = edev->ulp_tbl[ulp_id].msix_requested;
   163			max = bnxt_get_max_func_cp_rings(bp);
   164			bnxt_set_max_func_cp_rings(bp, max - msix_req);
   165			max = bnxt_get_max_func_stat_ctxs(bp);
   166			bnxt_set_max_func_stat_ctxs(bp, max - 1);
   167		}
   168	}
   169	
   170	static int bnxt_send_msg(struct bnxt_en_dev *edev, int ulp_id,
   171				 struct bnxt_fw_msg *fw_msg)
   172	{
   173		struct net_device *dev = edev->net;
   174		struct bnxt *bp = netdev_priv(dev);
   175		struct input *req;
   176		int rc;
   177	
   178		mutex_lock(&bp->hwrm_cmd_lock);
   179		req = fw_msg->msg;
   180		req->resp_addr = cpu_to_le64(bp->hwrm_cmd_resp_dma_addr);
   181		rc = _hwrm_send_message(bp, fw_msg->msg, fw_msg->msg_len,
   182					fw_msg->timeout);
   183		if (!rc) {
   184			struct output *resp = bp->hwrm_cmd_resp_addr;
   185			u32 len = le16_to_cpu(resp->resp_len);
   186	
   187			if (fw_msg->resp_max_len < len)
   188				len = fw_msg->resp_max_len;
   189	
   190			memcpy(fw_msg->resp, resp, len);
   191		}
   192		mutex_unlock(&bp->hwrm_cmd_lock);
   193		return rc;
   194	}
   195	
   196	static void bnxt_ulp_get(struct bnxt_ulp *ulp)
   197	{
   198		atomic_inc(&ulp->ref_count);
   199	}
   200	
   201	static void bnxt_ulp_put(struct bnxt_ulp *ulp)
   202	{
   203		atomic_dec(&ulp->ref_count);
   204	}
   205	
   206	void bnxt_ulp_stop(struct bnxt *bp)
   207	{
   208		struct bnxt_en_dev *edev = bp->edev;
   209		struct bnxt_ulp_ops *ops;
   210		int i;
   211	
   212		if (!edev)
   213			return;
   214	
   215		for (i = 0; i < BNXT_MAX_ULP; i++) {
   216			struct bnxt_ulp *ulp = &edev->ulp_tbl[i];
   217	
   218			rtnl_dereference(ulp->ulp_ops);
 > 219			if (!ops || !ops->ulp_stop)
   220				continue;
   221			ops->ulp_stop(ulp->handle);
   222		}

---
0-DAY kernel test infrastructure              Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 57018 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread
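Both warnings appear to flag genuine slips in the posted bnxt_ulp.c: bnxt_ulp_stop() drops the result of rtnl_dereference() instead of assigning it to ops, and the ref-count wait loop in bnxt_unregister_dev() reads i before it is set. A sketch of the presumably intended code follows; it is inferred from the neighbouring bnxt_ulp_start() loop, not taken from whatever follow-up fix was actually applied.

/* Presumed intent only -- inferred from context, not the applied fix. */

/* In bnxt_ulp_stop(): keep the dereferenced ops pointer before using it. */
	for (i = 0; i < BNXT_MAX_ULP; i++) {
		struct bnxt_ulp *ulp = &edev->ulp_tbl[i];

		ops = rtnl_dereference(ulp->ulp_ops);
		if (!ops || !ops->ulp_stop)
			continue;
		ops->ulp_stop(ulp->handle);
	}

/* In bnxt_unregister_dev(): start the wait-loop counter at zero. */
	i = 0;
	while (atomic_read(&ulp->ref_count) != 0 && i < 10) {
		msleep(100);
		i++;
	}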
Thread overview: 10+ messages (end of thread, newest: 2016-12-06 23:55 UTC)
2016-12-06 17:09 [PATCH net-next 0/7] bnxt_en: Add interface to support RDMA driver Michael Chan
2016-12-06 17:09 ` [PATCH net-next 1/7] bnxt_en: Add bnxt_set_max_func_irqs() Michael Chan
2016-12-06 17:09 ` [PATCH net-next 3/7] bnxt_en: Move function reset to bnxt_init_one() Michael Chan
2016-12-06 17:09 ` [PATCH net-next 4/7] bnxt_en: Improve completion ring allocation for VFs Michael Chan
[not found] ` <1481044178-25193-1-git-send-email-michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
2016-12-06 17:09 ` [PATCH net-next 2/7] bnxt_en: Enable MSIX early in bnxt_init_one() Michael Chan
[not found] ` <1481044178-25193-3-git-send-email-michael.chan-dY08KVG/lbpWk0Htik3J/w@public.gmane.org>
2016-12-06 23:55 ` kbuild test robot
2016-12-06 17:09 ` [PATCH net-next 5/7] bnxt_en: Reserve RDMA resources by default Michael Chan
2016-12-06 17:09 ` [PATCH net-next 6/7] bnxt_en: Refactor the driver registration function with firmware Michael Chan
2016-12-06 17:09 ` [PATCH net-next 7/7] bnxt_en: Add interface to support RDMA driver Michael Chan
2016-12-06 22:33 ` kbuild test robot