Linux-NVME Archive on lore.kernel.org
From: Sagi Grimberg <sagi@grimberg.me>
To: Keith Busch <kbusch@kernel.org>
Cc: linux-nvme@lists.infradead.org, hch@lst.de
Subject: Re: nvme tcp receive errors
Date: Wed, 28 Apr 2021 10:42:13 -0700	[thread overview]
Message-ID: <00a3b480-2885-b32b-139a-ef68ff21bad5@grimberg.me> (raw)
In-Reply-To: <20210428155847.GA21854@redsun51.ssa.fujisawa.hgst.com>


>>> This one took a bit to go through. The traces all only have a single r2t
>>> pdu with 0 offset for the full length of the requested transfer. I had
>>> to add trace events to see what the heck the driver is thinking, and
>>> the result is even more confusing.
>>>
>>> The kernel message error:
>>>
>>>       nvme5: req 5 op 1 r2t len 16384 exceeded data len 16384 (4096 sent)
>>>
>>> And all the new trace events for this request are:
>>>
>>>       fio-25086   [011] ....  9630.542669: nvme_tcp_queue_rq: nvme5: qid=4 tag=5 op=1 data_len=16384
>>>       fio-25093   [007] ....  9630.542854: nvme_tcp_cmd_pdu: nvme5: qid=4 tag=5 op=1 page_offset=3664 send_len=72
>>>       <...>-22670 [003] ....  9630.544377: nvme_tcp_r2t: nvme5: qid=4 tag=5 op=1 r2t_len=16384 r2t_offset=0 data_sent=4096 data_len=16384
>>>
>>> The fact "data_sent" is non-zero on the very first r2t makes no sense to me.
>>
>> Yep, not supposed to happen... Like the traces though! very useful and
>> absolutely worth having.
>>
>>> So far I cannot find any sequence where that could happen.
>>
>> The only way that data_sent can increment is either by getting an r2t
>> solicitation or by sending in-capsule data.
>>
>> What is the ioccsz the controller is exposing?
> 
> In-capsule data is not supported by this target, so ioccsz is 4.
> 
>> in nvme_tcp_send_cmd_pdu we have:
>> --
>>                  if (inline_data) {
>>                          req->state = NVME_TCP_SEND_DATA;
>>                          if (queue->data_digest)
>>                                  crypto_ahash_init(queue->snd_hash);
>>                  } else {
>>                          nvme_tcp_done_send_req(queue);
>>                  }
>> --
>>
>> Where inline_data flag is:
>> 	bool inline_data = nvme_tcp_has_inline_data(req);
>> which essentially boils down to:
>> 	req->data_len <= queue->cmnd_capsule_len - sizeof(struct nvme_command);
>>
>> I wonder what the nvme command sgl looks like? Is it an offset
>> sgl?
> 
> Just a single data block descriptor SGL. The target supports only 1 and
> reports that through ID_CTRL.MSDBD.
> 
> What do you mean by "offset" SGL?

In tcp.c:
--
static void nvme_tcp_set_sg_inline(struct nvme_tcp_queue *queue,
                 struct nvme_command *c, u32 data_len)
{
         struct nvme_sgl_desc *sg = &c->common.dptr.sgl;

         sg->addr = cpu_to_le64(queue->ctrl->ctrl.icdoff);
         sg->length = cpu_to_le32(data_len);
         sg->type = (NVME_SGL_FMT_DATA_DESC << 4) | NVME_SGL_FMT_OFFSET;
}

static void nvme_tcp_set_sg_host_data(struct nvme_command *c,
                 u32 data_len)
{
         struct nvme_sgl_desc *sg = &c->common.dptr.sgl;

         sg->addr = 0;
         sg->length = cpu_to_le32(data_len);
         sg->type = (NVME_TRANSPORT_SGL_DATA_DESC << 4) |
                         NVME_SGL_FMT_TRANSPORT_A;
}
--

What is the sgl type you see in the traces? Transport-specific sgl
(host-data, i.e. non-in-capsule) or inline?


Thread overview: 50+ messages
2021-03-31 16:18 nvme tcp receive errors Keith Busch
2021-03-31 19:10 ` Sagi Grimberg
2021-03-31 20:49   ` Keith Busch
2021-03-31 22:16     ` Sagi Grimberg
2021-03-31 22:26       ` Keith Busch
2021-03-31 22:45         ` Sagi Grimberg
2021-04-02 17:11     ` Keith Busch
2021-04-02 17:27       ` Sagi Grimberg
2021-04-05 14:37         ` Keith Busch
2021-04-07 19:53           ` Keith Busch
2021-04-09 21:38             ` Sagi Grimberg
2021-04-27 23:39               ` Keith Busch
2021-04-27 23:55                 ` Sagi Grimberg
2021-04-28 15:58                   ` Keith Busch
2021-04-28 17:42                     ` Sagi Grimberg [this message]
2021-04-28 18:01                       ` Keith Busch
2021-04-28 23:06                         ` Sagi Grimberg
2021-04-29  3:33                           ` Keith Busch
2021-04-29  4:52                             ` Sagi Grimberg
2021-05-03 18:51                               ` Keith Busch
2021-05-03 19:58                                 ` Sagi Grimberg
2021-05-03 20:25                                   ` Keith Busch
2021-05-04 19:29                                     ` Sagi Grimberg
2021-04-09 18:04           ` Sagi Grimberg
2021-04-14  0:29             ` Keith Busch
2021-04-21  5:33               ` Sagi Grimberg
2021-04-21 14:28                 ` Keith Busch
2021-04-21 16:59                   ` Sagi Grimberg
2021-04-26 15:31                 ` Keith Busch
2021-04-27  3:10                   ` Sagi Grimberg
2021-04-27 18:12                     ` Keith Busch
2021-04-27 23:58                       ` Sagi Grimberg
2021-04-30 23:42                         ` Sagi Grimberg
2021-05-03 14:28                           ` Keith Busch
2021-05-03 19:36                             ` Sagi Grimberg
2021-05-03 19:38                               ` Sagi Grimberg
2021-05-03 19:44                                 ` Keith Busch
2021-05-03 20:00                                   ` Sagi Grimberg
2021-05-04 14:36                                     ` Keith Busch
2021-05-04 18:15                                       ` Sagi Grimberg
2021-05-04 19:14                                         ` Keith Busch
2021-05-10 18:06                                           ` Keith Busch
2021-05-10 18:18                                             ` Sagi Grimberg
2021-05-10 18:30                                               ` Keith Busch
2021-05-10 21:07                                                 ` Sagi Grimberg
2021-05-11  3:00                                                   ` Keith Busch
2021-05-11 17:17                                                     ` Sagi Grimberg
2021-05-13 15:48                                                       ` Keith Busch
2021-05-13 19:53                                                         ` Sagi Grimberg
2021-05-17 20:48                                                           ` Keith Busch
