From: Hannes Reinecke <hare@suse.de>
To: Sagi Grimberg <sagi@grimberg.me>, Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <kbusch@kernel.org>, linux-nvme@lists.infradead.org
Subject: Re: [PATCH 0/3] nvme-tcp: start error recovery after KATO
Date: Mon, 4 Dec 2023 12:37:31 +0100 [thread overview]
Message-ID: <765a21bd-e876-4e0f-a5ff-dd25fa7265fa@suse.de> (raw)
In-Reply-To: <b7b72df8-917c-4d4e-8440-47f85c50973d@grimberg.me>
On 12/4/23 11:30, Sagi Grimberg wrote:
>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> there have been some very insistent reports of data corruption
>>>>>>> with certain target implementations due to command retries.
>>>>>>
>>>>>> None of which were reported on this list...
>>>>>>
>>>>> Correct. I can ask them to post their finding here if that would
>>>>> make a difference.
>>>>>
>>>>>>> Problem here is that for TCP we're starting error recovery
>>>>>>> immediately after either a command timeout or a (local) link loss.
>>>>>>
>>>>>> It does so only on one occasion: when the user triggers a
>>>>>> reset_controller. A command timeout is greater than the default
>>>>>> kato (6 times larger, in fact); was this the case where the issue
>>>>>> was observed? If so, the timeout handler should probably just
>>>>>> wait out the remaining kato time.
>>>>>>
>>>>> Nothing to do with reset_controller.
>>>>> The problem really is in nvme_tcp_state_change(), which will
>>>>> blindly start error recovery (and a subsequent command retry)
>>>>> whenever the local link drops.
>>>>
>>>> That is incorrect. A link drop does not trigger a socket state change.
>>>> A state change is triggered for specific socket states
>>>> (see net/ipv4/tcp.c for a description of the socket states).
>>>>
>>>> These are a set of states that describes local and remote sides
>>>> shutting down (or a TIME_WAIT). The cases where the host
>>>> triggers error recovery without the controller acknowledging it
>>>> (in case the link is down) are FIN_WAIT1/2, and these happen
>>>> when the user triggers a controller reset/disconnect.
>>>>
>>>> In the other circumstances, we'll see a TCP_CLOSE/CLOSE_WAIT/CLOSING
>>>> which should indicate the controller acknowledged the shutdown.
>>>>
>>>
>>> Ah, no. It's really neither.
>>> The complaint from the customer was that we are retrying commands
>>> without waiting for KATO. The actual claim was that we're retrying
>>> more-or-less immediately.
>>>
>>> So, the problem here is not the first command running into a timeout;
>>> the problem really is with the _subsequent_ commands running into a
>>> timeout (cf my comments to patch 1/3).
>>> The first command will set the status to RESETTING and start error
>>> recovery, all fine. But all _other_ commands will be aborted directly,
>>> causing them to be retried immediately without any delay.
>>
>> This is still unclear to me.
>> The default io timeout is 30s, which is 6 times larger than the
>> default kato. So effectively we are hitting a case where we waited
>> 6 kato periods, and with your patch we are waiting 7 kato periods?
>>
>> There is a disconnect for me here.
>>
>> Can you please clarify:
>> 1. what is your kato?
>> 2. what is your io timeout?
>> 3. was the controller RESETTING state triggered from kato? or
>> from io timeout?
>> if the latter, it indicates that kato is higher than io timeout,
>> otherwise the transport is perfectly fine, and this is essentially
>> a controller io that is taking longer than the host io timeout?
>>
>> Do you have some trace that points out how a command is sent down to
>> the controller (successfully arrives to the controller) the network goes
>> down, and the io is retried before kato expires?
>>
>> What is unclear to me here, is how kato expires, but the controller
>> continues to process commands.
>>
The argument here is that there is a race window between the command
timeout firing (ie the call to nvme_tcp_timeout()) and the scheduling
of the 'err_work' workqueue function. If you have a burst of commands
all expiring at roughly the same time, the first command will set the
controller state to RESETTING and schedule the 'err_work' workqueue
function.
The second command will time out _before_ err_work has run, will see
the RESETTING state in nvme_tcp_timeout(), and will directly complete
and retry (via nvme_tcp_complete_timed_out()) without waiting for
KATO.
That is the issue the customer has been seeing, and that is what is
causing data corruption on their end.
This patchset is an attempt to fix that issue.
We'd be happy to pick up this thread; it just had been pushed back
internally as we put a band-aid in place for that customer in our
kernels...
Cheers,
Hannes
Thread overview: 20+ messages
2023-09-08 10:00 [PATCH 0/3] nvme-tcp: start error recovery after KATO Hannes Reinecke
2023-09-08 10:00 ` [PATCH 1/3] nvme-tcp: Do not terminate commands when in RESETTING Hannes Reinecke
2023-09-12 11:54 ` Sagi Grimberg
2023-09-12 12:06 ` Hannes Reinecke
2023-09-12 12:15 ` Sagi Grimberg
2023-09-12 12:59 ` Hannes Reinecke
2023-09-12 13:02 ` Sagi Grimberg
2023-09-08 10:00 ` [PATCH 2/3] nvme-tcp: make 'err_work' a delayed work Hannes Reinecke
2023-09-08 10:00 ` [PATCH 3/3] nvme-tcp: delay error recovery until the next KATO interval Hannes Reinecke
2023-09-12 12:17 ` Sagi Grimberg
2023-09-12 11:51 ` [PATCH 0/3] nvme-tcp: start error recovery after KATO Sagi Grimberg
2023-09-12 11:56 ` Hannes Reinecke
2023-09-12 12:12 ` Sagi Grimberg
2023-09-12 13:25 ` Hannes Reinecke
2023-09-12 13:43 ` Sagi Grimberg
2023-12-04 10:30 ` Sagi Grimberg
2023-12-04 11:37 ` Hannes Reinecke [this message]
2023-12-04 13:27 ` Sagi Grimberg
2023-12-04 14:15 ` Hannes Reinecke
2023-12-04 16:11 ` Sagi Grimberg