public inbox for linux-nfs@vger.kernel.org
From: Chuck Lever III <chuck.lever@oracle.com>
To: Timo Rothenpieler <timo@rothenpieler.org>
Cc: Linux NFS Mailing List <linux-nfs@vger.kernel.org>,
	linux-rdma <linux-rdma@vger.kernel.org>
Subject: Re: Spurious instability with NFSoRDMA under moderate load
Date: Tue, 10 Aug 2021 17:17:38 +0000	[thread overview]
Message-ID: <4BA2A532-9063-4893-AF53-E1DAB06095CC@oracle.com> (raw)
In-Reply-To: <a28b403e-42cf-3189-a4db-86d20da1b7aa@rothenpieler.org>



> On Aug 10, 2021, at 10:06 AM, Timo Rothenpieler <timo@rothenpieler.org> wrote:
> 
> On 10.08.2021 14:49, Timo Rothenpieler wrote:
>> Ok, so this just happened again for the first time since upgrading to 5.12.
> Exact same thing, except this time no error CQE was dumped simultaneously (it still appeared in dmesg, but over a week before the issue showed up). So I'll assume it's unrelated to this issue.
>> I had no issues while running 5.12.12 and below. Recently (14 days ago or so) updated to 5.12.19, and now it's happening again.
>> Unfortunately, with how rarely this issue happens, this can either be a regression between those two versions, or it was still present all along and just never triggered for several months.
>> Makes me wonder if this is somehow related to the problem described in "NFS server regression in kernel 5.13 (tested w/ 5.13.9)". But the pattern of the issue does not look all that similar, given that for me, the hangs never recover, and I have RDMA in the mix.
> 
> Here's some trace and debug data I collected while it was in a state where everything using copy_range would get stuck.
> The files in question are named "tmp-copy-*", with a bunch of numbers at the end.
> 
> This also seems to be entirely a client side issue, since other NFS clients on the same server work just fine the whole time, and a reboot of that one affected client makes it work normally again, without touching the server.
> 
> In this instance right now, I was even unable to cleanly reboot the client machine, since it got completely stuck, seemingly unable to unmount the NFS shares, getting stuck the same way other operations would.
> 
> What confuses me the most is that while it is in this messed-up state, it generally still works fine, and only a few very specific operations (the only one I know of for sure is copy_range) get stuck.
> The processes also aren't unkillable: a SIGTERM or SIGKILL will end them, and also flushes NFS. I captured that in dmesg_output2.

What I see in this data is that the server is reporting

   SEQ4_STATUS_CB_PATH_DOWN

and the client is attempting to recover (repeatedly) using
BIND_CONN_TO_SESSION. But apparently the recovery didn't
actually work, because the server continues to report a
callback path problem.

[1712389.125641] nfs41_handle_sequence_flag_errors: "10.110.10.200" (client ID 6765f8600a675814) flags=0x00000001
[1712389.129264] nfs4_bind_conn_to_session: bind_conn_to_session was successful for server 10.110.10.200!

[1712389.171953] nfs41_handle_sequence_flag_errors: "10.110.10.200" (client ID 6765f8600a675814) flags=0x00000001
[1712389.178361] nfs4_bind_conn_to_session: bind_conn_to_session was successful for server 10.110.10.200!

[1712389.195606] nfs41_handle_sequence_flag_errors: "10.110.10.200" (client ID 6765f8600a675814) flags=0x00000001
[1712389.203891] nfs4_bind_conn_to_session: bind_conn_to_session was successful for server 10.110.10.200!

I guess it's time to switch to tracing on the server side
to see if you can nail down why the server's callback
requests are failing. On your NFS server, run:

 # trace-cmd record -e nfsd -e sunrpc -e rpcgss -e rpcrdma

at roughly the same point during your test that you captured
the previous client-side trace.
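The resulting capture can be large. One way to narrow it down is to run `trace-cmd report` on the capture and filter the text output for backchannel-related events; a rough sketch (the event-name patterns here are illustrative guesses, not an exhaustive or authoritative list):

```python
import re

# Pull out lines that look related to the NFSv4 callback (backchannel) path
# from a text report produced by "trace-cmd report > server-trace.txt".
# The patterns are illustrative assumptions, not a definitive event list.
CB_PATTERN = re.compile(r"nfsd_cb|backchannel|bc_|cb_", re.IGNORECASE)

def callback_lines(report_text: str) -> list[str]:
    """Return only the report lines matching callback-related patterns."""
    return [line for line in report_text.splitlines() if CB_PATTERN.search(line)]

# Hypothetical sample of what two report lines might look like:
sample = """\
nfsd-1234  [000]  100.000001: nfsd_cb_work: addr=10.110.10.200 ...
nfsd-1234  [000]  100.000002: svc_xprt_dequeue: ...
"""
for line in callback_lines(sample):
    print(line)
```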

--
Chuck Lever




Thread overview: 33+ messages
2021-05-16 17:29 Spurious instability with NFSoRDMA under moderate load Timo Rothenpieler
2021-05-17 16:27 ` Chuck Lever III
2021-05-17 17:37   ` Timo Rothenpieler
2021-06-21 16:06     ` Timo Rothenpieler
2021-06-21 16:28       ` Chuck Lever III
2021-08-10 12:49       ` Timo Rothenpieler
     [not found]         ` <a28b403e-42cf-3189-a4db-86d20da1b7aa@rothenpieler.org>
2021-08-10 17:17           ` Chuck Lever III [this message]
2021-08-10 21:40             ` Timo Rothenpieler
     [not found]               ` <141fdf51-2aa1-6614-fe4e-96f168cbe6cf@rothenpieler.org>
2021-08-11  0:19                 ` Chuck Lever III
     [not found]                   ` <64F9A492-44B9-4057-ABA5-C8202828A8DD@oracle.com>
     [not found]                     ` <1b8a24a9-5dba-3faf-8b0a-16e728a6051c@rothenpieler.org>
     [not found]                       ` <5DD80ADC-0A4B-4D95-8CF7-29096439DE9D@oracle.com>
     [not found]                         ` <0444ca5c-e8b6-1d80-d8a5-8469daa74970@rothenpieler.org>
     [not found]                           ` <cc2f55cd-57d4-d7c3-ed83-8b81ea60d821@rothenpieler.org>
2021-08-11 17:30                             ` Chuck Lever III
2021-08-11 18:38                               ` Olga Kornievskaia
2021-08-11 18:51                                 ` Chuck Lever III
2021-08-11 19:46                                   ` Olga Kornievskaia
2021-08-11 20:01                                     ` Chuck Lever III
2021-08-11 20:14                                       ` J. Bruce Fields
2021-08-11 20:40                                         ` Olga Kornievskaia
2021-08-12 15:40                                           ` J. Bruce Fields
2021-08-11 20:51                                       ` J. Bruce Fields
2021-08-11 20:51                                       ` Olga Kornievskaia
2021-08-12 18:13                               ` Timo Rothenpieler
2021-08-16 13:26                                 ` Chuck Lever III
2021-08-20 15:12                                   ` Chuck Lever III
2021-08-20 16:21                                     ` Timo Rothenpieler
     [not found]                                     ` <60273c2e-e946-25fb-68af-975f793e73d2@rothenpieler.org>
2021-10-29 15:14                                       ` Chuck Lever III
2021-10-29 18:17                                         ` Timo Rothenpieler
2021-10-29 19:06                                           ` Chuck Lever III
2021-08-17 21:08                                 ` Chuck Lever III
2021-08-17 21:51                                   ` Timo Rothenpieler
2021-08-17 22:55                                     ` dai.ngo
2021-08-17 23:05                                       ` dai.ngo
2021-08-18 16:55                                         ` Chuck Lever III
2021-08-18  0:03                                     ` Timo Rothenpieler
2021-05-19 15:20   ` Leon Romanovsky
