From: Hou Pu <houpu.main@gmail.com>
To: houpu.main@gmail.com
Cc: elad.grupi@dell.com, linux-nvme@lists.infradead.org, sagi@grimberg.me
Subject: Re: [PATCH v3] nvmet-tcp: fix a segmentation fault during io parsing error
Date: Tue, 30 Mar 2021 13:48:49 +0800
Message-ID: <20210330054849.5596-1-houpu.main@gmail.com>
In-Reply-To: <20210330041219.3069-1-houpu.main@gmail.com>
On Tue, 30 Mar 2021 12:12:19 +0800, Hou Pu wrote:
> On Mon, 29 Mar 2021 21:01:25 +0300, Elad Grupi wrote:
> > diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> > index 70cc507d1565..41102fc09595 100644
> > --- a/drivers/nvme/target/tcp.c
> > +++ b/drivers/nvme/target/tcp.c
> > @@ -525,11 +525,34 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
> >  	struct nvmet_tcp_cmd *cmd =
> >  		container_of(req, struct nvmet_tcp_cmd, req);
> >  	struct nvmet_tcp_queue *queue = cmd->queue;
> > +	struct nvme_sgl_desc *sgl;
> > +	u32 len;
> > +
> > +	if (unlikely(cmd == queue->cmd)) {
> > +		sgl = &cmd->req.cmd->common.dptr.sgl;
> > +		len = le32_to_cpu(sgl->length);
> > +
> > +		/*
> > +		 * Wait for inline data before processing the response.
> > +		 * Avoid using helpers, this might happen before
> > +		 * nvmet_req_init is completed.
> > +		 */
> > +		if (len && cmd->rcv_state == NVMET_TCP_RECV_PDU)
> > +			return;
>
> Is it queue->rcv_state?
> I tried this patch; the identify command could get here, and nvme connect could hang.
> We need to figure out a way to tell whether we need to abort or queue the request,
> or maybe we could use the v2 version.
Adding nvme_is_write() would solve the problem.
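(For reference, nvme_is_write() just tests the data-direction bit
of the opcode, so a read-direction command like identify no longer
takes the early return. Roughly, paraphrased from
include/linux/nvme.h, not a verbatim copy:)

	static inline bool nvme_is_write(struct nvme_command *cmd)
	{
		/* fabrics commands carry the direction in fctype */
		if (unlikely(nvme_is_fabrics(cmd)))
			return cmd->fabrics.fctype & 1;
		/* odd opcodes are host-to-controller transfers */
		return cmd->common.opcode & 1;
	}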
Also, since we skip queueing queue->io_work, we should return
0 instead of -EAGAIN, as below, so that the inline data is
consumed in nvmet_tcp_try_recv_one(). Otherwise io_work might
not get a chance to run.
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index a10a3bd59..f3d117771 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -537,7 +537,8 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
 		 * Avoid using helpers, this might happen before
 		 * nvmet_req_init is completed.
 		 */
-		if (len && cmd->rcv_state == NVMET_TCP_RECV_PDU)
+		if (len && queue->rcv_state == NVMET_TCP_RECV_PDU &&
+		    nvme_is_write(cmd->req.cmd))
 			return;
 	}
 
@@ -984,7 +985,7 @@ static int nvmet_tcp_done_recv_pdu(struct nvmet_tcp_queue *queue)
 			le32_to_cpu(req->cmd->common.dptr.sgl.length));
 
 		nvmet_tcp_handle_req_failure(queue, queue->cmd, req);
-		return -EAGAIN;
+		return 0;
 	}
 
 	ret = nvmet_tcp_map_data(queue->cmd);
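To see why returning 0 matters: nvmet_tcp_try_recv_one() only
falls through to the data stage when the PDU stage returns 0, so
-EAGAIN would end the receive loop before the inline data is read.
A simplified sketch of that control flow (paraphrased, not a
verbatim copy of tcp.c):

	static int nvmet_tcp_try_recv_one(struct nvmet_tcp_queue *queue)
	{
		int result = 0;

		if (queue->rcv_state == NVMET_TCP_RECV_PDU) {
			result = nvmet_tcp_try_recv_pdu(queue);
			if (result != 0)
				goto done_recv;	/* -EAGAIN stops here */
		}

		if (queue->rcv_state == NVMET_TCP_RECV_DATA) {
			result = nvmet_tcp_try_recv_data(queue);
			if (result != 0)
				goto done_recv;
		}
		...
	}

Since nvmet_tcp_queue_response() skipped queueing io_work, nothing
would re-trigger the read later; with 0 the same invocation goes on
to consume the inline data.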
Thanks,
Hou