stable.vger.kernel.org archive mirror
* [PATCH v2] nvmet-rdma: fix null dereference under heavy load
@ 2019-01-03 10:33 Raju Rangoju
  2019-01-03 16:15 ` Max Gurtovoy
  0 siblings, 1 reply; 3+ messages in thread
From: Raju Rangoju @ 2019-01-03 10:33 UTC
  To: sagi, linux-nvme; +Cc: maxg, swise, rajur, stable

Under heavy load, if we have no pre-allocated rsps left, we
dynamically allocate a rsp, but we do not actually allocate memory
for its nvme_completion (rsp->req.rsp). In that case, accessing the
pointer fields (req->rsp->status) in nvmet_req_init() results in a crash.

To fix this, allocate the memory for the nvme_completion by calling
nvmet_rdma_alloc_rsp() on the dynamically allocated rsp (and release
it again in nvmet_rdma_put_rsp()).

Fixes: 8407879c ("nvmet-rdma: fix possible bogus dereference under heavy load")

Cc: <stable@vger.kernel.org>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Raju Rangoju <rajur@chelsio.com>

---
Changes from v1:
- Moved integer to 'if' block
- Used 'unlikely' in datapath flow condition
---
 drivers/nvme/target/rdma.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index a8d23eb80192..8f9e6645fcf9 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -139,6 +139,10 @@ static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc);
 static void nvmet_rdma_read_data_done(struct ib_cq *cq, struct ib_wc *wc);
 static void nvmet_rdma_qp_event(struct ib_event *event, void *priv);
 static void nvmet_rdma_queue_disconnect(struct nvmet_rdma_queue *queue);
+static void nvmet_rdma_free_rsp(struct nvmet_rdma_device *ndev,
+				struct nvmet_rdma_rsp *r);
+static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
+				struct nvmet_rdma_rsp *r);
 
 static const struct nvmet_fabrics_ops nvmet_rdma_ops;
 
@@ -182,9 +186,17 @@ nvmet_rdma_get_rsp(struct nvmet_rdma_queue *queue)
 	spin_unlock_irqrestore(&queue->rsps_lock, flags);
 
 	if (unlikely(!rsp)) {
-		rsp = kmalloc(sizeof(*rsp), GFP_KERNEL);
+		int ret = -EINVAL;
+
+		rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
 		if (unlikely(!rsp))
 			return NULL;
+		ret = nvmet_rdma_alloc_rsp(queue->dev, rsp);
+		if (unlikely(ret)) {
+			kfree(rsp);
+			return NULL;
+		}
+
 		rsp->allocated = true;
 	}
 
@@ -197,6 +209,7 @@ nvmet_rdma_put_rsp(struct nvmet_rdma_rsp *rsp)
 	unsigned long flags;
 
 	if (unlikely(rsp->allocated)) {
+		nvmet_rdma_free_rsp(rsp->queue->dev, rsp);
 		kfree(rsp);
 		return;
 	}
-- 
2.12.0
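The shape of the fix can be modeled as a standalone userspace sketch. All names, fields, and helpers below are simplified stand-ins for the driver's structures, not the kernel code: the point is only that the fallback path must allocate the completion buffer before handing the rsp out, and the put path must free it again.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Simplified stand-in for struct nvmet_rdma_rsp. */
struct rsp {
	char *cqe;      /* stands in for the nvme_completion buffer */
	bool allocated; /* true when it came from the fallback path */
};

/* Models nvmet_rdma_alloc_rsp(): allocate the completion buffer too. */
static int alloc_rsp_resources(struct rsp *r)
{
	r->cqe = malloc(16);
	return r->cqe ? 0 : -1;
}

/* Models nvmet_rdma_free_rsp(). */
static void free_rsp_resources(struct rsp *r)
{
	free(r->cqe);
}

/*
 * Fallback path of nvmet_rdma_get_rsp(): with the fix, a dynamically
 * allocated rsp gets its completion buffer before it is handed out;
 * before the fix, r->cqe was left NULL and later dereferenced.
 */
static struct rsp *get_rsp_fallback(void)
{
	struct rsp *r = calloc(1, sizeof(*r));

	if (!r)
		return NULL;
	if (alloc_rsp_resources(r)) {
		free(r);
		return NULL;
	}
	r->allocated = true;
	return r;
}

/* Matching put path: release the extra resources before freeing. */
static void put_rsp(struct rsp *r)
{
	if (r->allocated) {
		free_rsp_resources(r);
		free(r);
	}
}
```

The paired allocation in the fallback and free in the put path closes both the crash (the buffer the pre-fix code left NULL) and the leak the extra allocation would otherwise introduce.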



* Re: [PATCH v2] nvmet-rdma: fix null dereference under heavy load
  2019-01-03 10:33 [PATCH v2] nvmet-rdma: fix null dereference under heavy load Raju Rangoju
@ 2019-01-03 16:15 ` Max Gurtovoy
  2019-01-03 17:29   ` Raju Rangoju
  0 siblings, 1 reply; 3+ messages in thread
From: Max Gurtovoy @ 2019-01-03 16:15 UTC
  To: Raju Rangoju, sagi, linux-nvme; +Cc: swise, stable


On 1/3/2019 12:33 PM, Raju Rangoju wrote:
> [...]
> @@ -182,9 +186,17 @@ nvmet_rdma_get_rsp(struct nvmet_rdma_queue *queue)
>   	spin_unlock_irqrestore(&queue->rsps_lock, flags);
>   
>   	if (unlikely(!rsp)) {
> -		rsp = kmalloc(sizeof(*rsp), GFP_KERNEL);
> +		int ret = -EINVAL;
> +

No real need to initialize the ret variable (sorry, I didn't see it
in the first review).


> +		rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
>   		if (unlikely(!rsp))
>   			return NULL;
> +		ret = nvmet_rdma_alloc_rsp(queue->dev, rsp);
> +		if (unlikely(ret)) {
> +			kfree(rsp);
> +			return NULL;
> +		}
> +
>   		rsp->allocated = true;
>   	}
>   
> @@ -197,6 +209,7 @@ nvmet_rdma_put_rsp(struct nvmet_rdma_rsp *rsp)
>   	unsigned long flags;
>   
>   	if (unlikely(rsp->allocated)) {
> +		nvmet_rdma_free_rsp(rsp->queue->dev, rsp);
>   		kfree(rsp);
>   		return;
>   	}


* Re: [PATCH v2] nvmet-rdma: fix null dereference under heavy load
  2019-01-03 16:15 ` Max Gurtovoy
@ 2019-01-03 17:29   ` Raju Rangoju
  0 siblings, 0 replies; 3+ messages in thread
From: Raju Rangoju @ 2019-01-03 17:29 UTC
  To: Max Gurtovoy; +Cc: sagi, linux-nvme, swise, stable

On Thursday, January 03, 2019 at 18:15:55 +0200, Max Gurtovoy wrote:
> 
> >  	spin_unlock_irqrestore(&queue->rsps_lock, flags);
> >  	if (unlikely(!rsp)) {
> >-		rsp = kmalloc(sizeof(*rsp), GFP_KERNEL);
> >+		int ret = -EINVAL;
> >+
> 
> no real need to initialize ret variable (sorry I didn't see it in
> first review).
> 
>

No problem. I'll post v3.


