* [PATCH for-next 0/5] RDMA/efa: Cleanups and minor improvements
@ 2024-06-24 16:09 Michael Margolin
2024-06-24 16:09 ` [PATCH for-next 1/5] RDMA/efa: Use offset_in_page() function Michael Margolin
` (5 more replies)
0 siblings, 6 replies; 19+ messages in thread
From: Michael Margolin @ 2024-06-24 16:09 UTC (permalink / raw)
To: jgg, leon, linux-rdma; +Cc: sleybo, matua, gal.pressman
This series contains a few cleanups and improvements that were lying
around.
Gal Pressman (1):
RDMA/efa: Use offset_in_page() function
Michael Margolin (2):
RDMA/efa: Move type conversion helpers to efa.h
RDMA/efa: Align private func names to a single convention
Yonatan Nachum (2):
RDMA/efa: Remove duplicate aenq enable macro
RDMA/efa: Validate EQ array out of bounds reach
drivers/infiniband/hw/efa/efa.h | 41 +++-
drivers/infiniband/hw/efa/efa_main.c | 2 +-
drivers/infiniband/hw/efa/efa_verbs.c | 333 +++++++++++---------------
3 files changed, 174 insertions(+), 202 deletions(-)
--
2.40.1
* [PATCH for-next 1/5] RDMA/efa: Use offset_in_page() function
2024-06-24 16:09 [PATCH for-next 0/5] RDMA/efa: Cleanups and minor improvements Michael Margolin
@ 2024-06-24 16:09 ` Michael Margolin
2024-06-25 6:32 ` Gal Pressman
2024-06-24 16:09 ` [PATCH for-next 2/5] RDMA/efa: Remove duplicate aenq enable macro Michael Margolin
` (4 subsequent siblings)
5 siblings, 1 reply; 19+ messages in thread
From: Michael Margolin @ 2024-06-24 16:09 UTC (permalink / raw)
To: jgg, leon, linux-rdma
Cc: sleybo, matua, gal.pressman, Gal Pressman, Firas Jahjah
From: Gal Pressman <galpress@amazon.com>
Use offset_in_page() instead of open-coding it.
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Reviewed-by: Firas Jahjah <firasj@amazon.com>
Signed-off-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Michael Margolin <mrgolin@amazon.com>
---
drivers/infiniband/hw/efa/efa_verbs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index 8f7a13b79cdc..eee2ae215414 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -524,7 +524,7 @@ static int qp_mmap_entries_setup(struct efa_qp *qp,
address = dev->mem_bar_addr + resp->llq_desc_offset;
length = PAGE_ALIGN(params->sq_ring_size_in_bytes +
- (resp->llq_desc_offset & ~PAGE_MASK));
+ offset_in_page(resp->llq_desc_offset));
qp->llq_desc_mmap_entry =
efa_user_mmap_entry_insert(&ucontext->ibucontext,
--
2.40.1
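For reference, offset_in_page() in include/linux/mm.h is defined as exactly the
open-coded expression being replaced, i.e. it masks off the page-aligned bits of
an address. A minimal sketch of the equivalence; the helper function and the
WARN_ON() below are illustrative only and not part of the patch:

#include <linux/mm.h>
#include <linux/bug.h>

/* Both expressions keep only the offset of llq_desc_offset within its page. */
static void sketch_offset_in_page_equivalence(u64 llq_desc_offset)
{
        unsigned long open_coded = llq_desc_offset & ~PAGE_MASK;
        unsigned long helper = offset_in_page(llq_desc_offset);

        WARN_ON(open_coded != helper);
}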
* [PATCH for-next 2/5] RDMA/efa: Remove duplicate aenq enable macro
2024-06-24 16:09 [PATCH for-next 0/5] RDMA/efa: Cleanups and minor improvements Michael Margolin
2024-06-24 16:09 ` [PATCH for-next 1/5] RDMA/efa: Use offset_in_page() function Michael Margolin
@ 2024-06-24 16:09 ` Michael Margolin
2024-06-25 6:32 ` Gal Pressman
2024-06-24 16:09 ` [PATCH for-next 3/5] RDMA/efa: Validate EQ array out of bounds reach Michael Margolin
` (3 subsequent siblings)
5 siblings, 1 reply; 19+ messages in thread
From: Michael Margolin @ 2024-06-24 16:09 UTC (permalink / raw)
To: jgg, leon, linux-rdma; +Cc: sleybo, matua, gal.pressman, Yonatan Nachum
From: Yonatan Nachum <ynachum@amazon.com>
We have the same macro in the main and verbs files, and the copy in the
verbs file is unused; remove it.
Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
Signed-off-by: Yonatan Nachum <ynachum@amazon.com>
Signed-off-by: Michael Margolin <mrgolin@amazon.com>
---
drivers/infiniband/hw/efa/efa_verbs.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index eee2ae215414..cd1f735d08a7 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -26,10 +26,6 @@ enum {
EFA_MMAP_IO_NC,
};
-#define EFA_AENQ_ENABLED_GROUPS \
- (BIT(EFA_ADMIN_FATAL_ERROR) | BIT(EFA_ADMIN_WARNING) | \
- BIT(EFA_ADMIN_NOTIFICATION) | BIT(EFA_ADMIN_KEEP_ALIVE))
-
struct efa_user_mmap_entry {
struct rdma_user_mmap_entry rdma_entry;
u64 address;
--
2.40.1
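For context, EFA_AENQ_ENABLED_GROUPS is simply an OR of BIT() flags selecting
which asynchronous event groups are enabled, and the copy kept in efa_main.c is
the one actually used. A minimal sketch of the BIT() mask idiom with made-up
names (this is not the driver's real enable path):

#include <linux/bits.h>
#include <linux/types.h>

/* Hypothetical event groups, mirroring the shape of the removed macro. */
enum demo_aenq_group {
        DEMO_GROUP_FATAL_ERROR,
        DEMO_GROUP_WARNING,
        DEMO_GROUP_KEEP_ALIVE,
};

#define DEMO_ENABLED_GROUPS \
        (BIT(DEMO_GROUP_FATAL_ERROR) | BIT(DEMO_GROUP_WARNING) | \
         BIT(DEMO_GROUP_KEEP_ALIVE))

static bool demo_group_enabled(u32 enabled_groups, enum demo_aenq_group group)
{
        return enabled_groups & BIT(group);
}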
* [PATCH for-next 3/5] RDMA/efa: Validate EQ array out of bounds reach
2024-06-24 16:09 [PATCH for-next 0/5] RDMA/efa: Cleanups and minor improvements Michael Margolin
2024-06-24 16:09 ` [PATCH for-next 1/5] RDMA/efa: Use offset_in_page() function Michael Margolin
2024-06-24 16:09 ` [PATCH for-next 2/5] RDMA/efa: Remove duplicate aenq enable macro Michael Margolin
@ 2024-06-24 16:09 ` Michael Margolin
2024-06-25 6:33 ` Gal Pressman
2024-06-24 16:09 ` [PATCH for-next 4/5] RDMA/efa: Move type conversion helpers to efa.h Michael Margolin
` (2 subsequent siblings)
5 siblings, 1 reply; 19+ messages in thread
From: Michael Margolin @ 2024-06-24 16:09 UTC (permalink / raw)
To: jgg, leon, linux-rdma
Cc: sleybo, matua, gal.pressman, Yonatan Nachum, Firas Jahjah
From: Yonatan Nachum <ynachum@amazon.com>
When creating a new CQ with interrupts enabled, the caller needs to
specify the index of the EQ to which the interrupts will be sent, but we
don't validate that the requested index is within the bounds of the EQ
array.
Validate the index against the EQ array bounds and return an error on an
out-of-bounds request.
This is not a bug fix, because the IB core already validates the
requested EQ number (completion vector) when creating a CQ.
Reviewed-by: Firas Jahjah <firasj@amazon.com>
Signed-off-by: Yonatan Nachum <ynachum@amazon.com>
Signed-off-by: Michael Margolin <mrgolin@amazon.com>
---
drivers/infiniband/hw/efa/efa_verbs.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index cd1f735d08a7..9c3e476e3f9c 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -1046,7 +1046,7 @@ int efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
static struct efa_eq *efa_vec2eq(struct efa_dev *dev, int vec)
{
- return &dev->eqs[vec];
+ return vec < dev->neqs ? &dev->eqs[vec] : NULL;
}
static int cq_mmap_entries_setup(struct efa_dev *dev, struct efa_cq *cq,
@@ -1173,6 +1173,11 @@ int efa_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
params.set_src_addr = set_src_addr;
if (cmd.flags & EFA_CREATE_CQ_WITH_COMPLETION_CHANNEL) {
cq->eq = efa_vec2eq(dev, attr->comp_vector);
+ if (!cq->eq) {
+ ibdev_dbg(ibdev, "Invalid EQ requested[%u]\n", attr->comp_vector);
+ err = -EINVAL;
+ goto err_free_mapped;
+ }
params.eqn = cq->eq->eeq.eqn;
params.interrupt_mode_enabled = true;
}
--
2.40.1
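For context on where the index comes from: the EQ index checked above is the
completion vector that user space passes when creating the CQ. A minimal sketch
using the standard libibverbs API (error handling trimmed; the function name and
the vector value are only examples):

#include <infiniband/verbs.h>

/* comp_vector is what eventually reaches the driver as attr->comp_vector. */
static struct ibv_cq *create_cq_on_vector(struct ibv_context *ctx, int cqe,
                                          int vector)
{
        struct ibv_comp_channel *channel = ibv_create_comp_channel(ctx);

        if (!channel)
                return NULL;

        /* cq_context is unused here; vector selects the interrupt/EQ to use. */
        return ibv_create_cq(ctx, cqe, NULL, channel, vector);
}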
* [PATCH for-next 4/5] RDMA/efa: Move type conversion helpers to efa.h
2024-06-24 16:09 [PATCH for-next 0/5] RDMA/efa: Cleanups and minor improvements Michael Margolin
` (2 preceding siblings ...)
2024-06-24 16:09 ` [PATCH for-next 3/5] RDMA/efa: Validate EQ array out of bounds reach Michael Margolin
@ 2024-06-24 16:09 ` Michael Margolin
2024-06-25 6:33 ` Gal Pressman
2024-06-24 16:09 ` [PATCH for-next 5/5] RDMA/efa: Align private func names to a single convention Michael Margolin
2024-06-26 17:55 ` [PATCH for-next 0/5] RDMA/efa: Cleanups and minor improvements Jason Gunthorpe
5 siblings, 1 reply; 19+ messages in thread
From: Michael Margolin @ 2024-06-24 16:09 UTC (permalink / raw)
To: jgg, leon, linux-rdma
Cc: sleybo, matua, gal.pressman, Firas Jahjah, Yonatan Nachum
Move the ib_-to-efa_ type conversion functions to efa.h, so that they
sit next to the type definitions and reduce the amount of code in
efa_verbs.c.
Reviewed-by: Firas Jahjah <firasj@amazon.com>
Reviewed-by: Yonatan Nachum <ynachum@amazon.com>
Signed-off-by: Michael Margolin <mrgolin@amazon.com>
---
drivers/infiniband/hw/efa/efa.h | 35 +++++++++++++++++++++++++++
drivers/infiniband/hw/efa/efa_verbs.c | 35 ---------------------------
2 files changed, 35 insertions(+), 35 deletions(-)
diff --git a/drivers/infiniband/hw/efa/efa.h b/drivers/infiniband/hw/efa/efa.h
index 926f9ff1f60f..f1020bf85b78 100644
--- a/drivers/infiniband/hw/efa/efa.h
+++ b/drivers/infiniband/hw/efa/efa.h
@@ -142,6 +142,41 @@ struct efa_eq {
struct efa_irq irq;
};
+static inline struct efa_dev *to_edev(struct ib_device *ibdev)
+{
+ return container_of(ibdev, struct efa_dev, ibdev);
+}
+
+static inline struct efa_ucontext *to_eucontext(struct ib_ucontext *ibucontext)
+{
+ return container_of(ibucontext, struct efa_ucontext, ibucontext);
+}
+
+static inline struct efa_pd *to_epd(struct ib_pd *ibpd)
+{
+ return container_of(ibpd, struct efa_pd, ibpd);
+}
+
+static inline struct efa_mr *to_emr(struct ib_mr *ibmr)
+{
+ return container_of(ibmr, struct efa_mr, ibmr);
+}
+
+static inline struct efa_qp *to_eqp(struct ib_qp *ibqp)
+{
+ return container_of(ibqp, struct efa_qp, ibqp);
+}
+
+static inline struct efa_cq *to_ecq(struct ib_cq *ibcq)
+{
+ return container_of(ibcq, struct efa_cq, ibcq);
+}
+
+static inline struct efa_ah *to_eah(struct ib_ah *ibah)
+{
+ return container_of(ibah, struct efa_ah, ibah);
+}
+
int efa_query_device(struct ib_device *ibdev,
struct ib_device_attr *props,
struct ib_udata *udata);
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index 9c3e476e3f9c..34a9f86af9bd 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -127,41 +127,6 @@ struct pbl_context {
u8 physically_continuous;
};
-static inline struct efa_dev *to_edev(struct ib_device *ibdev)
-{
- return container_of(ibdev, struct efa_dev, ibdev);
-}
-
-static inline struct efa_ucontext *to_eucontext(struct ib_ucontext *ibucontext)
-{
- return container_of(ibucontext, struct efa_ucontext, ibucontext);
-}
-
-static inline struct efa_pd *to_epd(struct ib_pd *ibpd)
-{
- return container_of(ibpd, struct efa_pd, ibpd);
-}
-
-static inline struct efa_mr *to_emr(struct ib_mr *ibmr)
-{
- return container_of(ibmr, struct efa_mr, ibmr);
-}
-
-static inline struct efa_qp *to_eqp(struct ib_qp *ibqp)
-{
- return container_of(ibqp, struct efa_qp, ibqp);
-}
-
-static inline struct efa_cq *to_ecq(struct ib_cq *ibcq)
-{
- return container_of(ibcq, struct efa_cq, ibcq);
-}
-
-static inline struct efa_ah *to_eah(struct ib_ah *ibah)
-{
- return container_of(ibah, struct efa_ah, ibah);
-}
-
static inline struct efa_user_mmap_entry *
to_emmap(struct rdma_user_mmap_entry *rdma_entry)
{
--
2.40.1
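These helpers are the standard container_of() idiom: each ib_* object is
embedded in the corresponding efa_* structure, so the conversion is plain
pointer arithmetic. A minimal sketch of how such a helper is typically used
inside a verbs op; the function and its debug print are illustrative only and
assume the efa.h definitions shown in the hunk above:

#include "efa.h"

static void sketch_log_cq(struct ib_cq *ibcq)
{
        struct efa_dev *dev = to_edev(ibcq->device);
        struct efa_cq *cq = to_ecq(ibcq);

        ibdev_dbg(&dev->ibdev, "cq[%d] cpu_addr[0x%p]\n",
                  cq->cq_idx, cq->cpu_addr);
}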
* [PATCH for-next 5/5] RDMA/efa: Align private func names to a single convention
2024-06-24 16:09 [PATCH for-next 0/5] RDMA/efa: Cleanups and minor improvements Michael Margolin
` (3 preceding siblings ...)
2024-06-24 16:09 ` [PATCH for-next 4/5] RDMA/efa: Move type conversion helpers to efa.h Michael Margolin
@ 2024-06-24 16:09 ` Michael Margolin
2024-06-26 15:38 ` Jason Gunthorpe
2024-06-26 17:55 ` [PATCH for-next 0/5] RDMA/efa: Cleanups and minor improvements Jason Gunthorpe
5 siblings, 1 reply; 19+ messages in thread
From: Michael Margolin @ 2024-06-24 16:09 UTC (permalink / raw)
To: jgg, leon, linux-rdma
Cc: sleybo, matua, gal.pressman, Firas Jahjah, Yonatan Nachum
Name functions that aren't exposed outside of a file with a '_' prefix,
and make sure that all verbs ops' implementations are named after the op
name with an additional 'efa_' prefix.
Reviewed-by: Firas Jahjah <firasj@amazon.com>
Reviewed-by: Yonatan Nachum <ynachum@amazon.com>
Signed-off-by: Michael Margolin <mrgolin@amazon.com>
---
drivers/infiniband/hw/efa/efa.h | 6 +-
drivers/infiniband/hw/efa/efa_main.c | 2 +-
drivers/infiniband/hw/efa/efa_verbs.c | 285 ++++++++++++--------------
3 files changed, 132 insertions(+), 161 deletions(-)
diff --git a/drivers/infiniband/hw/efa/efa.h b/drivers/infiniband/hw/efa/efa.h
index f1020bf85b78..655db23e81b6 100644
--- a/drivers/infiniband/hw/efa/efa.h
+++ b/drivers/infiniband/hw/efa/efa.h
@@ -197,9 +197,9 @@ int efa_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
int efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata);
int efa_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
struct ib_udata *udata);
-struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
- u64 virt_addr, int access_flags,
- struct ib_udata *udata);
+struct ib_mr *efa_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 length,
+ u64 virt_addr, int access_flags,
+ struct ib_udata *udata);
struct ib_mr *efa_reg_user_mr_dmabuf(struct ib_pd *ibpd, u64 start,
u64 length, u64 virt_addr,
int fd, int access_flags,
diff --git a/drivers/infiniband/hw/efa/efa_main.c b/drivers/infiniband/hw/efa/efa_main.c
index 99f9ac23c721..bff2485a76a0 100644
--- a/drivers/infiniband/hw/efa/efa_main.c
+++ b/drivers/infiniband/hw/efa/efa_main.c
@@ -394,7 +394,7 @@ static const struct ib_device_ops efa_dev_ops = {
.query_pkey = efa_query_pkey,
.query_port = efa_query_port,
.query_qp = efa_query_qp,
- .reg_user_mr = efa_reg_mr,
+ .reg_user_mr = efa_reg_user_mr,
.reg_user_mr_dmabuf = efa_reg_user_mr_dmabuf,
INIT_RDMA_OBJ_SIZE(ib_ah, efa_ah, ibah),
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index 34a9f86af9bd..f56ba8732aef 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -128,7 +128,7 @@ struct pbl_context {
};
static inline struct efa_user_mmap_entry *
-to_emmap(struct rdma_user_mmap_entry *rdma_entry)
+_to_emmap(struct rdma_user_mmap_entry *rdma_entry)
{
return container_of(rdma_entry, struct efa_user_mmap_entry, rdma_entry);
}
@@ -140,8 +140,8 @@ to_emmap(struct rdma_user_mmap_entry *rdma_entry)
#define is_reserved_cleared(reserved) \
!memchr_inv(reserved, 0, sizeof(reserved))
-static void *efa_zalloc_mapped(struct efa_dev *dev, dma_addr_t *dma_addr,
- size_t size, enum dma_data_direction dir)
+static void *_zalloc_mapped(struct efa_dev *dev, dma_addr_t *dma_addr,
+ size_t size, enum dma_data_direction dir)
{
void *addr;
@@ -159,9 +159,8 @@ static void *efa_zalloc_mapped(struct efa_dev *dev, dma_addr_t *dma_addr,
return addr;
}
-static void efa_free_mapped(struct efa_dev *dev, void *cpu_addr,
- dma_addr_t dma_addr,
- size_t size, enum dma_data_direction dir)
+static void _free_mapped(struct efa_dev *dev, void *cpu_addr, dma_addr_t dma_addr,
+ size_t size, enum dma_data_direction dir)
{
dma_unmap_single(&dev->pdev->dev, dma_addr, size, dir);
free_pages_exact(cpu_addr, size);
@@ -334,7 +333,7 @@ int efa_query_pkey(struct ib_device *ibdev, u32 port, u16 index,
return 0;
}
-static int efa_pd_dealloc(struct efa_dev *dev, u16 pdn)
+static int _dealloc_pd(struct efa_dev *dev, u16 pdn)
{
struct efa_com_dealloc_pd_params params = {
.pdn = pdn,
@@ -381,7 +380,7 @@ int efa_alloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
return 0;
err_dealloc_pd:
- efa_pd_dealloc(dev, result.pdn);
+ _dealloc_pd(dev, result.pdn);
err_out:
atomic64_inc(&dev->stats.alloc_pd_err);
return err;
@@ -393,18 +392,18 @@ int efa_dealloc_pd(struct ib_pd *ibpd, struct ib_udata *udata)
struct efa_pd *pd = to_epd(ibpd);
ibdev_dbg(&dev->ibdev, "Dealloc pd[%d]\n", pd->pdn);
- efa_pd_dealloc(dev, pd->pdn);
+ _dealloc_pd(dev, pd->pdn);
return 0;
}
-static int efa_destroy_qp_handle(struct efa_dev *dev, u32 qp_handle)
+static int _destroy_qp_handle(struct efa_dev *dev, u32 qp_handle)
{
struct efa_com_destroy_qp_params params = { .qp_handle = qp_handle };
return efa_com_destroy_qp(&dev->edev, &params);
}
-static void efa_qp_user_mmap_entries_remove(struct efa_qp *qp)
+static void _qp_user_mmap_entries_remove(struct efa_qp *qp)
{
rdma_user_mmap_entry_remove(qp->rq_mmap_entry);
rdma_user_mmap_entry_remove(qp->rq_db_mmap_entry);
@@ -420,28 +419,26 @@ int efa_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
ibdev_dbg(&dev->ibdev, "Destroy qp[%u]\n", ibqp->qp_num);
- err = efa_destroy_qp_handle(dev, qp->qp_handle);
+ err = _destroy_qp_handle(dev, qp->qp_handle);
if (err)
return err;
- efa_qp_user_mmap_entries_remove(qp);
+ _qp_user_mmap_entries_remove(qp);
if (qp->rq_cpu_addr) {
ibdev_dbg(&dev->ibdev,
"qp->cpu_addr[0x%p] freed: size[%lu], dma[%pad]\n",
qp->rq_cpu_addr, qp->rq_size,
&qp->rq_dma_addr);
- efa_free_mapped(dev, qp->rq_cpu_addr, qp->rq_dma_addr,
- qp->rq_size, DMA_TO_DEVICE);
+ _free_mapped(dev, qp->rq_cpu_addr, qp->rq_dma_addr, qp->rq_size, DMA_TO_DEVICE);
}
return 0;
}
static struct rdma_user_mmap_entry*
-efa_user_mmap_entry_insert(struct ib_ucontext *ucontext,
- u64 address, size_t length,
- u8 mmap_flag, u64 *offset)
+_user_mmap_entry_insert(struct ib_ucontext *ucontext, u64 address, size_t length,
+ u8 mmap_flag, u64 *offset)
{
struct efa_user_mmap_entry *entry = kzalloc(sizeof(*entry), GFP_KERNEL);
int err;
@@ -463,21 +460,19 @@ efa_user_mmap_entry_insert(struct ib_ucontext *ucontext,
return &entry->rdma_entry;
}
-static int qp_mmap_entries_setup(struct efa_qp *qp,
- struct efa_dev *dev,
- struct efa_ucontext *ucontext,
- struct efa_com_create_qp_params *params,
- struct efa_ibv_create_qp_resp *resp)
+static int _qp_mmap_entries_setup(struct efa_qp *qp,
+ struct efa_dev *dev,
+ struct efa_ucontext *ucontext,
+ struct efa_com_create_qp_params *params,
+ struct efa_ibv_create_qp_resp *resp)
{
size_t length;
u64 address;
address = dev->db_bar_addr + resp->sq_db_offset;
- qp->sq_db_mmap_entry =
- efa_user_mmap_entry_insert(&ucontext->ibucontext,
- address,
- PAGE_SIZE, EFA_MMAP_IO_NC,
- &resp->sq_db_mmap_key);
+ qp->sq_db_mmap_entry = _user_mmap_entry_insert(&ucontext->ibucontext, address,
+ PAGE_SIZE, EFA_MMAP_IO_NC,
+ &resp->sq_db_mmap_key);
if (!qp->sq_db_mmap_entry)
return -ENOMEM;
@@ -487,11 +482,9 @@ static int qp_mmap_entries_setup(struct efa_qp *qp,
length = PAGE_ALIGN(params->sq_ring_size_in_bytes +
offset_in_page(resp->llq_desc_offset));
- qp->llq_desc_mmap_entry =
- efa_user_mmap_entry_insert(&ucontext->ibucontext,
- address, length,
- EFA_MMAP_IO_WC,
- &resp->llq_desc_mmap_key);
+ qp->llq_desc_mmap_entry = _user_mmap_entry_insert(&ucontext->ibucontext, address,
+ length, EFA_MMAP_IO_WC,
+ &resp->llq_desc_mmap_key);
if (!qp->llq_desc_mmap_entry)
goto err_remove_mmap;
@@ -500,22 +493,18 @@ static int qp_mmap_entries_setup(struct efa_qp *qp,
if (qp->rq_size) {
address = dev->db_bar_addr + resp->rq_db_offset;
- qp->rq_db_mmap_entry =
- efa_user_mmap_entry_insert(&ucontext->ibucontext,
- address, PAGE_SIZE,
- EFA_MMAP_IO_NC,
- &resp->rq_db_mmap_key);
+ qp->rq_db_mmap_entry = _user_mmap_entry_insert(&ucontext->ibucontext, address,
+ PAGE_SIZE, EFA_MMAP_IO_NC,
+ &resp->rq_db_mmap_key);
if (!qp->rq_db_mmap_entry)
goto err_remove_mmap;
resp->rq_db_offset &= ~PAGE_MASK;
address = virt_to_phys(qp->rq_cpu_addr);
- qp->rq_mmap_entry =
- efa_user_mmap_entry_insert(&ucontext->ibucontext,
- address, qp->rq_size,
- EFA_MMAP_DMA_PAGE,
- &resp->rq_mmap_key);
+ qp->rq_mmap_entry = _user_mmap_entry_insert(&ucontext->ibucontext, address,
+ qp->rq_size, EFA_MMAP_DMA_PAGE,
+ &resp->rq_mmap_key);
if (!qp->rq_mmap_entry)
goto err_remove_mmap;
@@ -525,13 +514,12 @@ static int qp_mmap_entries_setup(struct efa_qp *qp,
return 0;
err_remove_mmap:
- efa_qp_user_mmap_entries_remove(qp);
+ _qp_user_mmap_entries_remove(qp);
return -ENOMEM;
}
-static int efa_qp_validate_cap(struct efa_dev *dev,
- struct ib_qp_init_attr *init_attr)
+static int _qp_validate_cap(struct efa_dev *dev, struct ib_qp_init_attr *init_attr)
{
if (init_attr->cap.max_send_wr > dev->dev_attr.max_sq_depth) {
ibdev_dbg(&dev->ibdev,
@@ -570,8 +558,7 @@ static int efa_qp_validate_cap(struct efa_dev *dev,
return 0;
}
-static int efa_qp_validate_attr(struct efa_dev *dev,
- struct ib_qp_init_attr *init_attr)
+static int _qp_validate_attr(struct efa_dev *dev, struct ib_qp_init_attr *init_attr)
{
if (init_attr->qp_type != IB_QPT_DRIVER &&
init_attr->qp_type != IB_QPT_UD) {
@@ -609,11 +596,11 @@ int efa_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
ucontext = rdma_udata_to_drv_context(udata, struct efa_ucontext,
ibucontext);
- err = efa_qp_validate_cap(dev, init_attr);
+ err = _qp_validate_cap(dev, init_attr);
if (err)
goto err_out;
- err = efa_qp_validate_attr(dev, init_attr);
+ err = _qp_validate_attr(dev, init_attr);
if (err)
goto err_out;
@@ -684,8 +671,8 @@ int efa_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
create_qp_params.rq_ring_size_in_bytes = cmd.rq_ring_size;
qp->rq_size = PAGE_ALIGN(create_qp_params.rq_ring_size_in_bytes);
if (qp->rq_size) {
- qp->rq_cpu_addr = efa_zalloc_mapped(dev, &qp->rq_dma_addr,
- qp->rq_size, DMA_TO_DEVICE);
+ qp->rq_cpu_addr = _zalloc_mapped(dev, &qp->rq_dma_addr,
+ qp->rq_size, DMA_TO_DEVICE);
if (!qp->rq_cpu_addr) {
err = -ENOMEM;
goto err_out;
@@ -711,8 +698,7 @@ int efa_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
resp.send_sub_cq_idx = create_qp_resp.send_sub_cq_idx;
resp.recv_sub_cq_idx = create_qp_resp.recv_sub_cq_idx;
- err = qp_mmap_entries_setup(qp, dev, ucontext, &create_qp_params,
- &resp);
+ err = _qp_mmap_entries_setup(qp, dev, ucontext, &create_qp_params, &resp);
if (err)
goto err_destroy_qp;
@@ -740,13 +726,12 @@ int efa_create_qp(struct ib_qp *ibqp, struct ib_qp_init_attr *init_attr,
return 0;
err_remove_mmap_entries:
- efa_qp_user_mmap_entries_remove(qp);
+ _qp_user_mmap_entries_remove(qp);
err_destroy_qp:
- efa_destroy_qp_handle(dev, create_qp_resp.qp_handle);
+ _destroy_qp_handle(dev, create_qp_resp.qp_handle);
err_free_mapped:
if (qp->rq_size)
- efa_free_mapped(dev, qp->rq_cpu_addr, qp->rq_dma_addr,
- qp->rq_size, DMA_TO_DEVICE);
+ _free_mapped(dev, qp->rq_cpu_addr, qp->rq_dma_addr, qp->rq_size, DMA_TO_DEVICE);
err_out:
atomic64_inc(&dev->stats.create_qp_err);
return err;
@@ -835,9 +820,8 @@ static const struct {
}
};
-static bool efa_modify_srd_qp_is_ok(enum ib_qp_state cur_state,
- enum ib_qp_state next_state,
- enum ib_qp_attr_mask mask)
+static bool _modify_srd_qp_is_ok(enum ib_qp_state cur_state, enum ib_qp_state next_state,
+ enum ib_qp_attr_mask mask)
{
enum ib_qp_attr_mask req_param, opt_param;
@@ -861,10 +845,10 @@ static bool efa_modify_srd_qp_is_ok(enum ib_qp_state cur_state,
return true;
}
-static int efa_modify_qp_validate(struct efa_dev *dev, struct efa_qp *qp,
- struct ib_qp_attr *qp_attr, int qp_attr_mask,
- enum ib_qp_state cur_state,
- enum ib_qp_state new_state)
+static int _modify_qp_validate(struct efa_dev *dev, struct efa_qp *qp,
+ struct ib_qp_attr *qp_attr, int qp_attr_mask,
+ enum ib_qp_state cur_state,
+ enum ib_qp_state new_state)
{
int err;
@@ -881,8 +865,7 @@ static int efa_modify_qp_validate(struct efa_dev *dev, struct efa_qp *qp,
}
if (qp->ibqp.qp_type == IB_QPT_DRIVER)
- err = !efa_modify_srd_qp_is_ok(cur_state, new_state,
- qp_attr_mask);
+ err = !_modify_srd_qp_is_ok(cur_state, new_state, qp_attr_mask);
else
err = !ib_modify_qp_is_ok(cur_state, new_state, IB_QPT_UD,
qp_attr_mask);
@@ -929,8 +912,7 @@ int efa_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
qp->state;
new_state = qp_attr_mask & IB_QP_STATE ? qp_attr->qp_state : cur_state;
- err = efa_modify_qp_validate(dev, qp, qp_attr, qp_attr_mask, cur_state,
- new_state);
+ err = _modify_qp_validate(dev, qp, qp_attr, qp_attr_mask, cur_state, new_state);
if (err)
return err;
@@ -976,14 +958,14 @@ int efa_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
return 0;
}
-static int efa_destroy_cq_idx(struct efa_dev *dev, int cq_idx)
+static int _destroy_cq_idx(struct efa_dev *dev, int cq_idx)
{
struct efa_com_destroy_cq_params params = { .cq_idx = cq_idx };
return efa_com_destroy_cq(&dev->edev, &params);
}
-static void efa_cq_user_mmap_entries_remove(struct efa_cq *cq)
+static void _cq_user_mmap_entries_remove(struct efa_cq *cq)
{
rdma_user_mmap_entry_remove(cq->db_mmap_entry);
rdma_user_mmap_entry_remove(cq->mmap_entry);
@@ -998,40 +980,38 @@ int efa_destroy_cq(struct ib_cq *ibcq, struct ib_udata *udata)
"Destroy cq[%d] virt[0x%p] freed: size[%lu], dma[%pad]\n",
cq->cq_idx, cq->cpu_addr, cq->size, &cq->dma_addr);
- efa_destroy_cq_idx(dev, cq->cq_idx);
- efa_cq_user_mmap_entries_remove(cq);
+ _destroy_cq_idx(dev, cq->cq_idx);
+ _cq_user_mmap_entries_remove(cq);
if (cq->eq) {
xa_erase(&dev->cqs_xa, cq->cq_idx);
synchronize_irq(cq->eq->irq.irqn);
}
- efa_free_mapped(dev, cq->cpu_addr, cq->dma_addr, cq->size,
- DMA_FROM_DEVICE);
+ _free_mapped(dev, cq->cpu_addr, cq->dma_addr, cq->size, DMA_FROM_DEVICE);
return 0;
}
-static struct efa_eq *efa_vec2eq(struct efa_dev *dev, int vec)
+static struct efa_eq *_vec2eq(struct efa_dev *dev, int vec)
{
return vec < dev->neqs ? &dev->eqs[vec] : NULL;
}
-static int cq_mmap_entries_setup(struct efa_dev *dev, struct efa_cq *cq,
- struct efa_ibv_create_cq_resp *resp,
- bool db_valid)
+static int _cq_mmap_entries_setup(struct efa_dev *dev, struct efa_cq *cq,
+ struct efa_ibv_create_cq_resp *resp,
+ bool db_valid)
{
resp->q_mmap_size = cq->size;
- cq->mmap_entry = efa_user_mmap_entry_insert(&cq->ucontext->ibucontext,
- virt_to_phys(cq->cpu_addr),
- cq->size, EFA_MMAP_DMA_PAGE,
- &resp->q_mmap_key);
+ cq->mmap_entry = _user_mmap_entry_insert(&cq->ucontext->ibucontext,
+ virt_to_phys(cq->cpu_addr),
+ cq->size, EFA_MMAP_DMA_PAGE,
+ &resp->q_mmap_key);
if (!cq->mmap_entry)
return -ENOMEM;
if (db_valid) {
- cq->db_mmap_entry =
- efa_user_mmap_entry_insert(&cq->ucontext->ibucontext,
- dev->db_bar_addr + resp->db_off,
- PAGE_SIZE, EFA_MMAP_IO_NC,
- &resp->db_mmap_key);
+ cq->db_mmap_entry = _user_mmap_entry_insert(&cq->ucontext->ibucontext,
+ dev->db_bar_addr + resp->db_off,
+ PAGE_SIZE, EFA_MMAP_IO_NC,
+ &resp->db_mmap_key);
if (!cq->db_mmap_entry) {
rdma_user_mmap_entry_remove(cq->mmap_entry);
return -ENOMEM;
@@ -1123,8 +1103,7 @@ int efa_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
cq->ucontext = ucontext;
cq->size = PAGE_ALIGN(cmd.cq_entry_size * entries * cmd.num_sub_cqs);
- cq->cpu_addr = efa_zalloc_mapped(dev, &cq->dma_addr, cq->size,
- DMA_FROM_DEVICE);
+ cq->cpu_addr = _zalloc_mapped(dev, &cq->dma_addr, cq->size, DMA_FROM_DEVICE);
if (!cq->cpu_addr) {
err = -ENOMEM;
goto err_out;
@@ -1137,7 +1116,7 @@ int efa_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
params.num_sub_cqs = cmd.num_sub_cqs;
params.set_src_addr = set_src_addr;
if (cmd.flags & EFA_CREATE_CQ_WITH_COMPLETION_CHANNEL) {
- cq->eq = efa_vec2eq(dev, attr->comp_vector);
+ cq->eq = _vec2eq(dev, attr->comp_vector);
if (!cq->eq) {
ibdev_dbg(ibdev, "Invalid EQ requested[%u]\n", attr->comp_vector);
err = -EINVAL;
@@ -1157,7 +1136,7 @@ int efa_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
cq->ibcq.cqe = result.actual_depth;
WARN_ON_ONCE(entries != result.actual_depth);
- err = cq_mmap_entries_setup(dev, cq, &resp, result.db_valid);
+ err = _cq_mmap_entries_setup(dev, cq, &resp, result.db_valid);
if (err) {
ibdev_dbg(ibdev, "Could not setup cq[%u] mmap entries\n",
cq->cq_idx);
@@ -1192,12 +1171,11 @@ int efa_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
if (cq->eq)
xa_erase(&dev->cqs_xa, cq->cq_idx);
err_remove_mmap:
- efa_cq_user_mmap_entries_remove(cq);
+ _cq_user_mmap_entries_remove(cq);
err_destroy_cq:
- efa_destroy_cq_idx(dev, cq->cq_idx);
+ _destroy_cq_idx(dev, cq->cq_idx);
err_free_mapped:
- efa_free_mapped(dev, cq->cpu_addr, cq->dma_addr, cq->size,
- DMA_FROM_DEVICE);
+ _free_mapped(dev, cq->cpu_addr, cq->dma_addr, cq->size, DMA_FROM_DEVICE);
err_out:
atomic64_inc(&dev->stats.create_cq_err);
@@ -1223,7 +1201,7 @@ static int umem_to_page_list(struct efa_dev *dev,
return 0;
}
-static struct scatterlist *efa_vmalloc_buf_to_sg(u64 *buf, int page_cnt)
+static struct scatterlist *_vmalloc_buf_to_sg(u64 *buf, int page_cnt)
{
struct scatterlist *sglist;
struct page *pg;
@@ -1251,7 +1229,7 @@ static struct scatterlist *efa_vmalloc_buf_to_sg(u64 *buf, int page_cnt)
* create a chunk list of physical pages dma addresses from the supplied
* scatter gather list
*/
-static int pbl_chunk_list_create(struct efa_dev *dev, struct pbl_context *pbl)
+static int _pbl_chunk_list_create(struct efa_dev *dev, struct pbl_context *pbl)
{
struct pbl_chunk_list *chunk_list = &pbl->phys.indirect.chunk_list;
int page_cnt = pbl->phys.indirect.pbl_buf_size_in_pages;
@@ -1351,7 +1329,7 @@ static int pbl_chunk_list_create(struct efa_dev *dev, struct pbl_context *pbl)
return -ENOMEM;
}
-static void pbl_chunk_list_destroy(struct efa_dev *dev, struct pbl_context *pbl)
+static void _pbl_chunk_list_destroy(struct efa_dev *dev, struct pbl_context *pbl)
{
struct pbl_chunk_list *chunk_list = &pbl->phys.indirect.chunk_list;
int i;
@@ -1366,8 +1344,7 @@ static void pbl_chunk_list_destroy(struct efa_dev *dev, struct pbl_context *pbl)
}
/* initialize pbl continuous mode: map pbl buffer to a dma address. */
-static int pbl_continuous_initialize(struct efa_dev *dev,
- struct pbl_context *pbl)
+static int _pbl_continuous_initialize(struct efa_dev *dev, struct pbl_context *pbl)
{
dma_addr_t dma_addr;
@@ -1391,14 +1368,14 @@ static int pbl_continuous_initialize(struct efa_dev *dev,
* create a chunk list out of the dma addresses of the physical pages of
* pbl buffer.
*/
-static int pbl_indirect_initialize(struct efa_dev *dev, struct pbl_context *pbl)
+static int _pbl_indirect_initialize(struct efa_dev *dev, struct pbl_context *pbl)
{
u32 size_in_pages = DIV_ROUND_UP(pbl->pbl_buf_size_in_bytes, EFA_CHUNK_PAYLOAD_SIZE);
struct scatterlist *sgl;
int sg_dma_cnt, err;
BUILD_BUG_ON(EFA_CHUNK_PAYLOAD_SIZE > PAGE_SIZE);
- sgl = efa_vmalloc_buf_to_sg(pbl->pbl_buf, size_in_pages);
+ sgl = _vmalloc_buf_to_sg(pbl->pbl_buf, size_in_pages);
if (!sgl)
return -ENOMEM;
@@ -1411,7 +1388,7 @@ static int pbl_indirect_initialize(struct efa_dev *dev, struct pbl_context *pbl)
pbl->phys.indirect.pbl_buf_size_in_pages = size_in_pages;
pbl->phys.indirect.sgl = sgl;
pbl->phys.indirect.sg_dma_cnt = sg_dma_cnt;
- err = pbl_chunk_list_create(dev, pbl);
+ err = _pbl_chunk_list_create(dev, pbl);
if (err) {
ibdev_dbg(&dev->ibdev,
"chunk_list creation failed[%d]\n", err);
@@ -1432,20 +1409,20 @@ static int pbl_indirect_initialize(struct efa_dev *dev, struct pbl_context *pbl)
return err;
}
-static void pbl_indirect_terminate(struct efa_dev *dev, struct pbl_context *pbl)
+static void _pbl_indirect_terminate(struct efa_dev *dev, struct pbl_context *pbl)
{
- pbl_chunk_list_destroy(dev, pbl);
+ _pbl_chunk_list_destroy(dev, pbl);
dma_unmap_sg(&dev->pdev->dev, pbl->phys.indirect.sgl,
pbl->phys.indirect.pbl_buf_size_in_pages, DMA_TO_DEVICE);
kfree(pbl->phys.indirect.sgl);
}
/* create a page buffer list from a mapped user memory region */
-static int pbl_create(struct efa_dev *dev,
- struct pbl_context *pbl,
- struct ib_umem *umem,
- int hp_cnt,
- u8 hp_shift)
+static int _pbl_create(struct efa_dev *dev,
+ struct pbl_context *pbl,
+ struct ib_umem *umem,
+ int hp_cnt,
+ u8 hp_shift)
{
int err;
@@ -1461,7 +1438,7 @@ static int pbl_create(struct efa_dev *dev,
if (err)
goto err_free;
- err = pbl_indirect_initialize(dev, pbl);
+ err = _pbl_indirect_initialize(dev, pbl);
if (err)
goto err_free;
} else {
@@ -1471,7 +1448,7 @@ static int pbl_create(struct efa_dev *dev,
if (err)
goto err_free;
- err = pbl_continuous_initialize(dev, pbl);
+ err = _pbl_continuous_initialize(dev, pbl);
if (err)
goto err_free;
}
@@ -1487,19 +1464,19 @@ static int pbl_create(struct efa_dev *dev,
return err;
}
-static void pbl_destroy(struct efa_dev *dev, struct pbl_context *pbl)
+static void _pbl_destroy(struct efa_dev *dev, struct pbl_context *pbl)
{
if (pbl->physically_continuous)
dma_unmap_single(&dev->pdev->dev, pbl->phys.continuous.dma_addr,
pbl->pbl_buf_size_in_bytes, DMA_TO_DEVICE);
else
- pbl_indirect_terminate(dev, pbl);
+ _pbl_indirect_terminate(dev, pbl);
kvfree(pbl->pbl_buf);
}
-static int efa_create_inline_pbl(struct efa_dev *dev, struct efa_mr *mr,
- struct efa_com_reg_mr_params *params)
+static int _create_inline_pbl(struct efa_dev *dev, struct efa_mr *mr,
+ struct efa_com_reg_mr_params *params)
{
int err;
@@ -1515,15 +1492,12 @@ static int efa_create_inline_pbl(struct efa_dev *dev, struct efa_mr *mr,
return 0;
}
-static int efa_create_pbl(struct efa_dev *dev,
- struct pbl_context *pbl,
- struct efa_mr *mr,
- struct efa_com_reg_mr_params *params)
+static int _create_pbl(struct efa_dev *dev, struct pbl_context *pbl, struct efa_mr *mr,
+ struct efa_com_reg_mr_params *params)
{
int err;
- err = pbl_create(dev, pbl, mr->umem, params->page_num,
- params->page_shift);
+ err = _pbl_create(dev, pbl, mr->umem, params->page_num, params->page_shift);
if (err) {
ibdev_dbg(&dev->ibdev, "Failed to create pbl[%d]\n", err);
return err;
@@ -1549,8 +1523,7 @@ static int efa_create_pbl(struct efa_dev *dev,
return 0;
}
-static struct efa_mr *efa_alloc_mr(struct ib_pd *ibpd, int access_flags,
- struct ib_udata *udata)
+static struct efa_mr *_alloc_mr(struct ib_pd *ibpd, int access_flags, struct ib_udata *udata)
{
struct efa_dev *dev = to_edev(ibpd->device);
int supp_access_flags;
@@ -1583,8 +1556,8 @@ static struct efa_mr *efa_alloc_mr(struct ib_pd *ibpd, int access_flags,
return mr;
}
-static int efa_register_mr(struct ib_pd *ibpd, struct efa_mr *mr, u64 start,
- u64 length, u64 virt_addr, int access_flags)
+static int _register_mr(struct ib_pd *ibpd, struct efa_mr *mr, u64 start,
+ u64 length, u64 virt_addr, int access_flags)
{
struct efa_dev *dev = to_edev(ibpd->device);
struct efa_com_reg_mr_params params = {};
@@ -1617,7 +1590,7 @@ static int efa_register_mr(struct ib_pd *ibpd, struct efa_mr *mr, u64 start,
inline_size = ARRAY_SIZE(params.pbl.inline_pbl_array);
if (params.page_num <= inline_size) {
- err = efa_create_inline_pbl(dev, mr, &params);
+ err = _create_inline_pbl(dev, mr, &params);
if (err)
return err;
@@ -1625,12 +1598,12 @@ static int efa_register_mr(struct ib_pd *ibpd, struct efa_mr *mr, u64 start,
if (err)
return err;
} else {
- err = efa_create_pbl(dev, &pbl, mr, &params);
+ err = _create_pbl(dev, &pbl, mr, &params);
if (err)
return err;
err = efa_com_register_mr(&dev->edev, &params, &result);
- pbl_destroy(dev, &pbl);
+ _pbl_destroy(dev, &pbl);
if (err)
return err;
@@ -1660,7 +1633,7 @@ struct ib_mr *efa_reg_user_mr_dmabuf(struct ib_pd *ibpd, u64 start,
struct efa_mr *mr;
int err;
- mr = efa_alloc_mr(ibpd, access_flags, udata);
+ mr = _alloc_mr(ibpd, access_flags, udata);
if (IS_ERR(mr)) {
err = PTR_ERR(mr);
goto err_out;
@@ -1675,7 +1648,7 @@ struct ib_mr *efa_reg_user_mr_dmabuf(struct ib_pd *ibpd, u64 start,
}
mr->umem = &umem_dmabuf->umem;
- err = efa_register_mr(ibpd, mr, start, length, virt_addr, access_flags);
+ err = _register_mr(ibpd, mr, start, length, virt_addr, access_flags);
if (err)
goto err_release;
@@ -1690,15 +1663,15 @@ struct ib_mr *efa_reg_user_mr_dmabuf(struct ib_pd *ibpd, u64 start,
return ERR_PTR(err);
}
-struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
- u64 virt_addr, int access_flags,
- struct ib_udata *udata)
+struct ib_mr *efa_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 length,
+ u64 virt_addr, int access_flags,
+ struct ib_udata *udata)
{
struct efa_dev *dev = to_edev(ibpd->device);
struct efa_mr *mr;
int err;
- mr = efa_alloc_mr(ibpd, access_flags, udata);
+ mr = _alloc_mr(ibpd, access_flags, udata);
if (IS_ERR(mr)) {
err = PTR_ERR(mr);
goto err_out;
@@ -1712,7 +1685,7 @@ struct ib_mr *efa_reg_mr(struct ib_pd *ibpd, u64 start, u64 length,
goto err_free;
}
- err = efa_register_mr(ibpd, mr, start, length, virt_addr, access_flags);
+ err = _register_mr(ibpd, mr, start, length, virt_addr, access_flags);
if (err)
goto err_release;
@@ -1798,7 +1771,7 @@ int efa_get_port_immutable(struct ib_device *ibdev, u32 port_num,
return 0;
}
-static int efa_dealloc_uar(struct efa_dev *dev, u16 uarn)
+static int _dealloc_uar(struct efa_dev *dev, u16 uarn)
{
struct efa_com_dealloc_uar_params params = {
.uarn = uarn,
@@ -1811,8 +1784,8 @@ static int efa_dealloc_uar(struct efa_dev *dev, u16 uarn)
(_attr_str = (!(_dev)->dev_attr._attr || ((_comp_mask) & (_mask))) ? \
NULL : #_attr)
-static int efa_user_comp_handshake(const struct ib_ucontext *ibucontext,
- const struct efa_ibv_alloc_ucontext_cmd *cmd)
+static int _user_comp_handshake(const struct ib_ucontext *ibucontext,
+ const struct efa_ibv_alloc_ucontext_cmd *cmd)
{
struct efa_dev *dev = to_edev(ibucontext->device);
char *attr_str;
@@ -1856,7 +1829,7 @@ int efa_alloc_ucontext(struct ib_ucontext *ibucontext, struct ib_udata *udata)
goto err_out;
}
- err = efa_user_comp_handshake(ibucontext, &cmd);
+ err = _user_comp_handshake(ibucontext, &cmd);
if (err)
goto err_out;
@@ -1882,7 +1855,7 @@ int efa_alloc_ucontext(struct ib_ucontext *ibucontext, struct ib_udata *udata)
return 0;
err_dealloc_uar:
- efa_dealloc_uar(dev, result.uarn);
+ _dealloc_uar(dev, result.uarn);
err_out:
atomic64_inc(&dev->stats.alloc_ucontext_err);
return err;
@@ -1893,18 +1866,18 @@ void efa_dealloc_ucontext(struct ib_ucontext *ibucontext)
struct efa_ucontext *ucontext = to_eucontext(ibucontext);
struct efa_dev *dev = to_edev(ibucontext->device);
- efa_dealloc_uar(dev, ucontext->uarn);
+ _dealloc_uar(dev, ucontext->uarn);
}
void efa_mmap_free(struct rdma_user_mmap_entry *rdma_entry)
{
- struct efa_user_mmap_entry *entry = to_emmap(rdma_entry);
+ struct efa_user_mmap_entry *entry = _to_emmap(rdma_entry);
kfree(entry);
}
-static int __efa_mmap(struct efa_dev *dev, struct efa_ucontext *ucontext,
- struct vm_area_struct *vma)
+static int _efa_mmap(struct efa_dev *dev, struct efa_ucontext *ucontext,
+ struct vm_area_struct *vma)
{
struct rdma_user_mmap_entry *rdma_entry;
struct efa_user_mmap_entry *entry;
@@ -1920,7 +1893,7 @@ static int __efa_mmap(struct efa_dev *dev, struct efa_ucontext *ucontext,
atomic64_inc(&dev->stats.mmap_err);
return -EINVAL;
}
- entry = to_emmap(rdma_entry);
+ entry = _to_emmap(rdma_entry);
ibdev_dbg(&dev->ibdev,
"Mapping address[%#llx], length[%#zx], mmap_flag[%d]\n",
@@ -1977,10 +1950,10 @@ int efa_mmap(struct ib_ucontext *ibucontext,
"start %#lx, end %#lx, length = %#zx, pgoff = %#lx\n",
vma->vm_start, vma->vm_end, length, vma->vm_pgoff);
- return __efa_mmap(dev, ucontext, vma);
+ return _efa_mmap(dev, ucontext, vma);
}
-static int efa_ah_destroy(struct efa_dev *dev, struct efa_ah *ah)
+static int _destroy_ah(struct efa_dev *dev, struct efa_ah *ah)
{
struct efa_com_destroy_ah_params params = {
.ah = ah->ah,
@@ -2042,7 +2015,7 @@ int efa_create_ah(struct ib_ah *ibah,
return 0;
err_destroy_ah:
- efa_ah_destroy(dev, ah);
+ _destroy_ah(dev, ah);
err_out:
atomic64_inc(&dev->stats.create_ah_err);
return err;
@@ -2061,7 +2034,7 @@ int efa_destroy_ah(struct ib_ah *ibah, u32 flags)
return -EOPNOTSUPP;
}
- efa_ah_destroy(dev, ah);
+ _destroy_ah(dev, ah);
return 0;
}
@@ -2080,8 +2053,7 @@ struct rdma_hw_stats *efa_alloc_hw_device_stats(struct ib_device *ibdev)
RDMA_HW_STATS_DEFAULT_LIFESPAN);
}
-static int efa_fill_device_stats(struct efa_dev *dev,
- struct rdma_hw_stats *stats)
+static int _fill_device_stats(struct efa_dev *dev, struct rdma_hw_stats *stats)
{
struct efa_com_stats_admin *as = &dev->edev.aq.stats;
struct efa_stats *s = &dev->stats;
@@ -2104,8 +2076,7 @@ static int efa_fill_device_stats(struct efa_dev *dev,
return ARRAY_SIZE(efa_device_stats_descs);
}
-static int efa_fill_port_stats(struct efa_dev *dev, struct rdma_hw_stats *stats,
- u32 port_num)
+static int _fill_port_stats(struct efa_dev *dev, struct rdma_hw_stats *stats, u32 port_num)
{
struct efa_com_get_stats_params params = {};
union efa_com_get_stats_result result;
@@ -2171,9 +2142,9 @@ int efa_get_hw_stats(struct ib_device *ibdev, struct rdma_hw_stats *stats,
u32 port_num, int index)
{
if (port_num)
- return efa_fill_port_stats(to_edev(ibdev), stats, port_num);
+ return _fill_port_stats(to_edev(ibdev), stats, port_num);
else
- return efa_fill_device_stats(to_edev(ibdev), stats);
+ return _fill_device_stats(to_edev(ibdev), stats);
}
enum rdma_link_layer efa_port_link_layer(struct ib_device *ibdev,
--
2.40.1
* Re: [PATCH for-next 1/5] RDMA/efa: Use offset_in_page() function
2024-06-24 16:09 ` [PATCH for-next 1/5] RDMA/efa: Use offset_in_page() function Michael Margolin
@ 2024-06-25 6:32 ` Gal Pressman
0 siblings, 0 replies; 19+ messages in thread
From: Gal Pressman @ 2024-06-25 6:32 UTC (permalink / raw)
To: Michael Margolin, jgg, leon, linux-rdma
Cc: sleybo, matua, Gal Pressman, Firas Jahjah
On 24/06/2024 19:09, Michael Margolin wrote:
> From: Gal Pressman <galpress@amazon.com>
>
> Use offset_in_page() instead of open-coding it.
>
> Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
> Reviewed-by: Firas Jahjah <firasj@amazon.com>
> Signed-off-by: Gal Pressman <galpress@amazon.com>
> Signed-off-by: Michael Margolin <mrgolin@amazon.com>
Looks good Gal!
* Re: [PATCH for-next 2/5] RDMA/efa: Remove duplicate aenq enable macro
2024-06-24 16:09 ` [PATCH for-next 2/5] RDMA/efa: Remove duplicate aenq enable macro Michael Margolin
@ 2024-06-25 6:32 ` Gal Pressman
0 siblings, 0 replies; 19+ messages in thread
From: Gal Pressman @ 2024-06-25 6:32 UTC (permalink / raw)
To: Michael Margolin, jgg, leon, linux-rdma; +Cc: sleybo, matua, Yonatan Nachum
On 24/06/2024 19:09, Michael Margolin wrote:
> From: Yonatan Nachum <ynachum@amazon.com>
>
> We have the same macro in main and verbs files and we don't use the
> macro in the verbs file, remove it.
>
> Reviewed-by: Yossi Leybovich <sleybo@amazon.com>
> Signed-off-by: Yonatan Nachum <ynachum@amazon.com>
> Signed-off-by: Michael Margolin <mrgolin@amazon.com>
Reviewed-by: Gal Pressman <gal.pressman@linux.dev>
* Re: [PATCH for-next 4/5] RDMA/efa: Move type conversion helpers to efa.h
2024-06-24 16:09 ` [PATCH for-next 4/5] RDMA/efa: Move type conversion helpers to efa.h Michael Margolin
@ 2024-06-25 6:33 ` Gal Pressman
2024-06-25 11:21 ` Margolin, Michael
0 siblings, 1 reply; 19+ messages in thread
From: Gal Pressman @ 2024-06-25 6:33 UTC (permalink / raw)
To: Michael Margolin, jgg, leon, linux-rdma
Cc: sleybo, matua, Firas Jahjah, Yonatan Nachum
On 24/06/2024 19:09, Michael Margolin wrote:
> Move ib_ to efa_ types conversion functions to have them near the types
> definitions and to reduce code in efa_verbs.c.
>
> Reviewed-by: Firas Jahjah <firasj@amazon.com>
> Reviewed-by: Yonatan Nachum <ynachum@amazon.com>
> Signed-off-by: Michael Margolin <mrgolin@amazon.com>
The idea was to not expose these functions in the h file as they are not
used outside of efa_verbs.c.
* Re: [PATCH for-next 3/5] RDMA/efa: Validate EQ array out of bounds reach
2024-06-24 16:09 ` [PATCH for-next 3/5] RDMA/efa: Validate EQ array out of bounds reach Michael Margolin
@ 2024-06-25 6:33 ` Gal Pressman
2024-06-25 11:28 ` Margolin, Michael
0 siblings, 1 reply; 19+ messages in thread
From: Gal Pressman @ 2024-06-25 6:33 UTC (permalink / raw)
To: Michael Margolin, jgg, leon, linux-rdma
Cc: sleybo, matua, Yonatan Nachum, Firas Jahjah
On 24/06/2024 19:09, Michael Margolin wrote:
> From: Yonatan Nachum <ynachum@amazon.com>
>
> When creating a new CQ with interrupts enabled, the caller needs to
> specify an EQ index to which the interrupts will be sent on, we don't
> validate the requested index in the EQ array.
> Validate out of bound reach of the EQ array and return an error.
>
> This is not a bug because IB core validates the requested EQ number when
> creating a CQ.
Then why is this patch needed?
>
> Reviewed-by: Firas Jahjah <firasj@amazon.com>
> Signed-off-by: Yonatan Nachum <ynachum@amazon.com>
> Signed-off-by: Michael Margolin <mrgolin@amazon.com>
* Re: [PATCH for-next 4/5] RDMA/efa: Move type conversion helpers to efa.h
2024-06-25 6:33 ` Gal Pressman
@ 2024-06-25 11:21 ` Margolin, Michael
2024-06-25 14:19 ` Leon Romanovsky
0 siblings, 1 reply; 19+ messages in thread
From: Margolin, Michael @ 2024-06-25 11:21 UTC (permalink / raw)
To: Gal Pressman, jgg, leon, linux-rdma
Cc: sleybo, matua, Firas Jahjah, Yonatan Nachum
On 6/25/2024 9:33 AM, Gal Pressman wrote:
> On 24/06/2024 19:09, Michael Margolin wrote:
>> Move ib_ to efa_ types conversion functions to have them near the types
>> definitions and to reduce code in efa_verbs.c.
>>
>> Reviewed-by: Firas Jahjah <firasj@amazon.com>
>> Reviewed-by: Yonatan Nachum <ynachum@amazon.com>
>> Signed-off-by: Michael Margolin <mrgolin@amazon.com>
> The idea was to not expose these functions in the h file as they are not
> used outside of efa_verbs.c.
I know, but it's a driver-internal header, and it also makes sense to
have the cast from the base type near the type definition.
Michael
* Re: [PATCH for-next 3/5] RDMA/efa: Validate EQ array out of bounds reach
2024-06-25 6:33 ` Gal Pressman
@ 2024-06-25 11:28 ` Margolin, Michael
2024-06-25 14:16 ` Leon Romanovsky
0 siblings, 1 reply; 19+ messages in thread
From: Margolin, Michael @ 2024-06-25 11:28 UTC (permalink / raw)
To: Gal Pressman, jgg, leon, linux-rdma
Cc: sleybo, matua, Yonatan Nachum, Firas Jahjah
On 6/25/2024 9:33 AM, Gal Pressman wrote:
> On 24/06/2024 19:09, Michael Margolin wrote:
>> From: Yonatan Nachum <ynachum@amazon.com>
>>
>> When creating a new CQ with interrupts enabled, the caller needs to
>> specify an EQ index to which the interrupts will be sent on, we don't
>> validate the requested index in the EQ array.
>> Validate out of bound reach of the EQ array and return an error.
>>
>> This is not a bug because IB core validates the requested EQ number when
>> creating a CQ.
> Then why is this patch needed?
Mainly as a best practice of validating the index right before the
array access, to avoid hard-to-catch bugs.
Michael
* Re: [PATCH for-next 3/5] RDMA/efa: Validate EQ array out of bounds reach
2024-06-25 11:28 ` Margolin, Michael
@ 2024-06-25 14:16 ` Leon Romanovsky
0 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2024-06-25 14:16 UTC (permalink / raw)
To: Margolin, Michael
Cc: Gal Pressman, jgg, linux-rdma, sleybo, matua, Yonatan Nachum,
Firas Jahjah
On Tue, Jun 25, 2024 at 02:28:24PM +0300, Margolin, Michael wrote:
>
> On 6/25/2024 9:33 AM, Gal Pressman wrote:
> > On 24/06/2024 19:09, Michael Margolin wrote:
> > > From: Yonatan Nachum <ynachum@amazon.com>
> > >
> > > When creating a new CQ with interrupts enabled, the caller needs to
> > > specify an EQ index to which the interrupts will be sent on, we don't
> > > validate the requested index in the EQ array.
> > > Validate out of bound reach of the EQ array and return an error.
> > >
> > > This is not a bug because IB core validates the requested EQ number when
> > > creating a CQ.
> > Then why is this patch needed?
>
> Mainly as best practice of validating the index right before array access to
> avoid hard to catch bugs.
If you know about a bug, please fix it. There is no need to add code
"to protect" the low level from an incorrect upper level.
Thanks
>
> Michael
>
>
* Re: [PATCH for-next 4/5] RDMA/efa: Move type conversion helpers to efa.h
2024-06-25 11:21 ` Margolin, Michael
@ 2024-06-25 14:19 ` Leon Romanovsky
0 siblings, 0 replies; 19+ messages in thread
From: Leon Romanovsky @ 2024-06-25 14:19 UTC (permalink / raw)
To: Margolin, Michael
Cc: Gal Pressman, jgg, linux-rdma, sleybo, matua, Firas Jahjah,
Yonatan Nachum
On Tue, Jun 25, 2024 at 02:21:39PM +0300, Margolin, Michael wrote:
> On 6/25/2024 9:33 AM, Gal Pressman wrote
>
> > On 24/06/2024 19:09, Michael Margolin wrote:
> > > Move ib_ to efa_ types conversion functions to have them near the types
> > > definitions and to reduce code in efa_verbs.c.
> > >
> > > Reviewed-by: Firas Jahjah <firasj@amazon.com>
> > > Reviewed-by: Yonatan Nachum <ynachum@amazon.com>
> > > Signed-off-by: Michael Margolin <mrgolin@amazon.com>
> > The idea was to not expose these functions in the h file as they are not
> > used outside of efa_verbs.c.
>
> I know but it's a driver internal header and it also makes sense to have the
> cast from the base type near the definition.
We put things in a header file when they are used in multiple files,
which is not the case here. Let's try to avoid code churn without a real
gain behind it.
Thanks
>
> Michael
>
* Re: [PATCH for-next 5/5] RDMA/efa: Align private func names to a single convention
2024-06-24 16:09 ` [PATCH for-next 5/5] RDMA/efa: Align private func names to a single convention Michael Margolin
@ 2024-06-26 15:38 ` Jason Gunthorpe
2024-06-27 16:57 ` Margolin, Michael
0 siblings, 1 reply; 19+ messages in thread
From: Jason Gunthorpe @ 2024-06-26 15:38 UTC (permalink / raw)
To: Michael Margolin
Cc: leon, linux-rdma, sleybo, matua, gal.pressman, Firas Jahjah,
Yonatan Nachum
On Mon, Jun 24, 2024 at 04:09:18PM +0000, Michael Margolin wrote:
> Name functions that aren't exposed outside of a file with '_' prefix and
> make sure that all verbs ops' implementations are named according to the
> op name with an additional 'efa_' prefix.
That isn't the kernel convention; please don't use _ like this.
Jason
* Re: [PATCH for-next 0/5] RDMA/efa: Cleanups and minor improvements
2024-06-24 16:09 [PATCH for-next 0/5] RDMA/efa: Cleanups and minor improvements Michael Margolin
` (4 preceding siblings ...)
2024-06-24 16:09 ` [PATCH for-next 5/5] RDMA/efa: Align private func names to a single convention Michael Margolin
@ 2024-06-26 17:55 ` Jason Gunthorpe
5 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2024-06-26 17:55 UTC (permalink / raw)
To: Michael Margolin; +Cc: leon, linux-rdma, sleybo, matua, gal.pressman
On Mon, Jun 24, 2024 at 04:09:13PM +0000, Michael Margolin wrote:
> RDMA/efa: Use offset_in_page() function
> RDMA/efa: Remove duplicate aenq enable macro
I took these two into for-next. It seems the others have comments.
Thanks,
Jason
* Re: [PATCH for-next 5/5] RDMA/efa: Align private func names to a single convention
2024-06-26 15:38 ` Jason Gunthorpe
@ 2024-06-27 16:57 ` Margolin, Michael
2024-06-27 18:43 ` Leon Romanovsky
0 siblings, 1 reply; 19+ messages in thread
From: Margolin, Michael @ 2024-06-27 16:57 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: leon, linux-rdma, sleybo, matua, gal.pressman, Firas Jahjah,
Yonatan Nachum
On 6/26/2024 6:38 PM, Jason Gunthorpe wrote:
> On Mon, Jun 24, 2024 at 04:09:18PM +0000, Michael Margolin wrote:
>> Name functions that aren't exposed outside of a file with '_' prefix and
>> make sure that all verbs ops' implementations are named according to the
>> op name with an additional 'efa_' prefix.
> That isn't the kernel convention, please don't use _ like this
>
> Jason
AFAIK there is no single kernel convention for naming static functions,
and the underscore prefix doesn't seem to be rare in the subsystem
either. Would you suggest a preferred style?
Michael
* Re: [PATCH for-next 5/5] RDMA/efa: Align private func names to a single convention
2024-06-27 16:57 ` Margolin, Michael
@ 2024-06-27 18:43 ` Leon Romanovsky
2024-06-28 14:27 ` Jason Gunthorpe
0 siblings, 1 reply; 19+ messages in thread
From: Leon Romanovsky @ 2024-06-27 18:43 UTC (permalink / raw)
To: Margolin, Michael
Cc: Jason Gunthorpe, linux-rdma, sleybo, matua, gal.pressman,
Firas Jahjah, Yonatan Nachum
On Thu, Jun 27, 2024 at 07:57:17PM +0300, Margolin, Michael wrote:
>
> On 6/26/2024 6:38 PM, Jason Gunthorpe wrote:
> > On Mon, Jun 24, 2024 at 04:09:18PM +0000, Michael Margolin wrote:
> > > Name functions that aren't exposed outside of a file with '_' prefix and
> > > make sure that all verbs ops' implementations are named according to the
> > > op name with an additional 'efa_' prefix.
> > That isn't the kernel convention, please don't use _ like this
> >
> > Jason
>
> AFAIK there is no single kernel convention for static functions naming and
> it also seems that the underscore prefix isn't rare in the subsystem.
An underscore at the beginning of a function name is a common practice
to mark that function as the locked variant.
Thanks
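For context on the convention being referenced: an underscore-prefixed helper
conventionally expects the caller to already hold the relevant lock, while the
unprefixed wrapper takes and releases it. A minimal sketch with made-up names:

#include <linux/mutex.h>
#include <linux/lockdep.h>

static DEFINE_MUTEX(demo_lock);
static int demo_count;

/* Locked variant: caller must hold demo_lock. */
static int __demo_add(int n)
{
        lockdep_assert_held(&demo_lock);
        demo_count += n;
        return demo_count;
}

/* Unlocked wrapper: takes the lock and calls the locked variant. */
static int demo_add(int n)
{
        int ret;

        mutex_lock(&demo_lock);
        ret = __demo_add(n);
        mutex_unlock(&demo_lock);
        return ret;
}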
* Re: [PATCH for-next 5/5] RDMA/efa: Align private func names to a single convention
2024-06-27 18:43 ` Leon Romanovsky
@ 2024-06-28 14:27 ` Jason Gunthorpe
0 siblings, 0 replies; 19+ messages in thread
From: Jason Gunthorpe @ 2024-06-28 14:27 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Margolin, Michael, linux-rdma, sleybo, matua, gal.pressman,
Firas Jahjah, Yonatan Nachum
On Thu, Jun 27, 2024 at 09:43:20PM +0300, Leon Romanovsky wrote:
> On Thu, Jun 27, 2024 at 07:57:17PM +0300, Margolin, Michael wrote:
> >
> > On 6/26/2024 6:38 PM, Jason Gunthorpe wrote:
> > > On Mon, Jun 24, 2024 at 04:09:18PM +0000, Michael Margolin wrote:
> > > > Name functions that aren't exposed outside of a file with '_' prefix and
> > > > make sure that all verbs ops' implementations are named according to the
> > > > op name with an additional 'efa_' prefix.
> > > That isn't the kernel convention, please don't use _ like this
> > >
> > > Jason
> >
> > AFAIK there is no single kernel convention for static functions naming and
> > it also seems that the underscore prefix isn't rare in the subsystem.
>
> Underscore at the beginning of the function name is a common practice
> to mark that function as locked variant.
Yeah, it is not typical to mark static functions at all.
You might drop the symbol prefix (like efa_), but even that is not
universal.
Jason