* [Bug ?] Permanent FIN_WAIT_2 state on NFS client with bad NFS server
From: Manjunath Patil @ 2017-09-20 22:17 UTC
To: linux-nfs

Hi,

With autoclose trying to close the connection after the idle timeout on
NFSv3 mounts, a bad NFS server may not send the final FIN, leaving the
client stuck in the FIN_WAIT_2 state forever. This is easily
reproducible by simulating the bad server behavior. I used
'netstat -an | grep 2049' to observe the socket state.

This will also stall other RPC requests from connecting and proceeding,
as the XPRT_CLOSING flag is already set.

This can be observed on 4.14-rc1 as well. The behavior was introduced
by the following commit:
caf4ccd SUNRPC: Make xs_tcp_close() do a socket shutdown rather than a
sock_release

Once we revert this commit, the FIN_WAIT_2 state lasts only 60 seconds.

Any thoughts on correcting this behavior? Or is this behavior expected?

-Thanks,
Manjunath
* Re: [Bug ?] Permanent FIN_WAIT_2 state on NFS client with bad NFS server
From: David Wysochanski @ 2017-09-21 17:05 UTC
To: Manjunath Patil; +Cc: linux-nfs

On Wed, 2017-09-20 at 15:17 -0700, Manjunath Patil wrote:
> Hi,
>
> With autoclose trying to close the connection after the idle timeout
> on NFSv3 mounts, a bad NFS server may not send the final FIN, leaving
> the client stuck in the FIN_WAIT_2 state forever.
> This is easily reproducible by simulating the bad server behavior. I
> used 'netstat -an | grep 2049' to observe the socket state.
>
How long did you wait, and how did you simulate the failure? I am very
interested in your test case.

I am not sure which kernels you are testing, but in my tests
(simulating a dropped FIN from the NFS server, but not blocking the ACK
or further packets) I've seen that the sunrpc TCP keepalive commit
7f260e8575bf53b93b77978c1e39f8e67612759c causes a RST after around 4
minutes, so the socket won't get stuck forever. The only way I could
get an indefinite FIN_WAIT_2 hang was to block all traffic from the
server port; arguably, if that happens you'll get a hang anyway, just a
bit later, so I concluded such a test is invalid.

> This will also stall other RPC requests from connecting and
> proceeding, as the XPRT_CLOSING flag is already set.
>
> This can be observed on 4.14-rc1 as well. The behavior was introduced
> by the following commit:
> caf4ccd SUNRPC: Make xs_tcp_close() do a socket shutdown rather than
> a sock_release
>
> Once we revert this commit, the FIN_WAIT_2 state lasts only 60
> seconds.
>
Interesting; maybe the problem is back on some upstream kernels (I
mostly test RHEL6, RHEL7, and some Fedora). Do you know what is
actually firing to get the TCP connection out of FIN_WAIT_2? Have you
tried to trace this?

I first saw FIN_WAIT_2 hangs after commit
9cbc94fb06f98de0e8d393eaff09c790f4c3ba46 (which removed
xs_tcp_schedule_linger_timeout) was backported to RHEL6. Later we added
the TCP keepalive commit, which seems to have resolved these hangs as
far as I know.

> Any thoughts on correcting this behavior?
> Or is this behavior expected?
>
Depending on your test, it may be expected behavior, but it sounds like
not, if you truly are stuck in FIN_WAIT_2 indefinitely and you've not
got some permanent firewall rule blocking traffic, etc.

> -Thanks,
> Manjunath
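[For reference, the keepalive mechanism David credits with firing the
RST can be exercised from userspace with the standard Linux socket
options. The sketch below is a minimal illustration under that
assumption; the interval values are arbitrary and are not the ones the
sunrpc commit chose.]

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int on = 1;
	int idle = 60;   /* seconds of idle time before the first probe */
	int intvl = 60;  /* seconds between probes                      */
	int cnt = 3;     /* unanswered probes before the kernel resets  */

	if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0 ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0 ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0 ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) < 0)
		perror("setsockopt");

	/* ... connect and use the socket; if the peer goes silent, the
	 * connection is torn down after roughly idle + cnt * intvl
	 * seconds, which matches the ~4 minutes David observed. */
	close(fd);
	return 0;
}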
* Re: [Bug ?] Permanent FIN_WAIT_2 state on NFS client with bad NFS server
From: Manjunath Patil @ 2017-09-22 19:21 UTC
To: David Wysochanski; +Cc: linux-nfs, manjunath.b.patil

Hi David,

On Thu, Sep 21, 2017 at 10:05 AM, David Wysochanski <dwysocha@redhat.com> wrote:
> How long did you wait, and how did you simulate the failure? I am
> very interested in your test case.
I observed this in a customer environment. In that case the FIN_WAIT_2
state stayed forever; the customer had to restart the node to get out.

We tried to simulate this behavior on a Linux NFS server by dropping
the incoming FIN for port 2049 inside the kernel. This prevented the
server from sending the final FIN for some time.

The Linux server eventually sent a FIN after some delay. Though I am
not sure, I think this is due to:

/* apparently the "standard" is that clients close
 * idle connections after 5 minutes, servers after
 * 6 minutes
 *   http://www.connectathon.org/talks96/nfstcp.pdf
 */
static int svc_conn_age_period = 6*60;

> I am not sure which kernels you are testing, but in my tests
> (simulating a dropped FIN from the NFS server, but not blocking the
> ACK or further packets) I've seen that the sunrpc TCP keepalive
> commit 7f260e8575bf53b93b77978c1e39f8e67612759c causes a RST after
> around 4 minutes, so the socket won't get stuck forever. The only way
> I could get an indefinite FIN_WAIT_2 hang was to block all traffic
> from the server port; arguably, if that happens you'll get a hang
> anyway, just a bit later, so I concluded such a test is invalid.
I have observed this behavior with OL6 and upstream 4.14-rc1 kernels.
I do not see TCP keepalive causing a RST; rather, the FIN_WAIT_2 state
stays until it gets the final FIN from the server.

> Interesting; maybe the problem is back on some upstream kernels (I
> mostly test RHEL6, RHEL7, and some Fedora). Do you know what is
> actually firing to get the TCP connection out of FIN_WAIT_2? Have you
> tried to trace this?
I think this is because caf4ccd introduces the half-close behavior to
xprt_autoclose(). In this case, the client is expected to wait for the
final FIN from the server. However, if a bad server chooses not to send
the final FIN, I think we do not have a backup plan on the client side.

In the earlier behavior of a full close, TCP clears the FIN_WAIT_2
state after /proc/sys/net/ipv4/tcp_fin_timeout, which defaults to 60
seconds.

> I first saw FIN_WAIT_2 hangs after commit
> 9cbc94fb06f98de0e8d393eaff09c790f4c3ba46 (which removed
> xs_tcp_schedule_linger_timeout) was backported to RHEL6. Later we
> added the TCP keepalive commit, which seems to have resolved these
> hangs as far as I know.
>
> Depending on your test, it may be expected behavior, but it sounds
> like not, if you truly are stuck in FIN_WAIT_2 indefinitely and
> you've not got some permanent firewall rule blocking traffic, etc.

-Thanks,
Manjunath
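[The full-close/half-close distinction Manjunath draws here can be
sketched from userspace; on Linux, tcp_fin_timeout bounds FIN_WAIT_2
only for orphaned sockets, i.e. after close(). The snippet below is a
minimal illustration of that, not the xprtsock code path; the server
address 192.0.2.1 is a placeholder.]

#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
	struct sockaddr_in srv;
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	memset(&srv, 0, sizeof(srv));
	srv.sin_family = AF_INET;
	srv.sin_port = htons(2049);
	inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr); /* placeholder */

	if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
		perror("connect");
		return 1;
	}

	shutdown(fd, SHUT_RDWR); /* FIN sent; FIN_WAIT_1 -> FIN_WAIT_2  */
	sleep(300);              /* fd still open: the socket is not    */
	                         /* orphaned, so FIN_WAIT_2 persists    */
	                         /* until the peer's FIN arrives        */
	close(fd);               /* socket orphaned: tcp_fin_timeout    */
	sleep(120);              /* now bounds the FIN_WAIT_2 lifetime  */
	return 0;
}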
* Re: [Bug ?] Permanent FIN_WAIT_2 state on NFS client with bad NFS server
From: Manjunath Patil @ 2017-10-06 19:13 UTC
To: David Wysochanski; +Cc: linux-nfs, manjunath.b.patil

Hi David,

On Fri, Sep 22, 2017 at 12:21 PM, Manjunath Patil <mbpatil.linux@gmail.com> wrote:
> The Linux server eventually sent a FIN after some delay. Though I am
> not sure, I think this is due to:
>
> /* apparently the "standard" is that clients close
>  * idle connections after 5 minutes, servers after
>  * 6 minutes
>  *   http://www.connectathon.org/talks96/nfstcp.pdf
>  */
> static int svc_conn_age_period = 6*60;
I tried to increase this value. After setting it to a large value
[60*60], I could see the client staying in the FIN_WAIT_2 state
forever.

To repeat, my test case is:
1. Take an NFS server and make it not send the FIN on port 2049 (a
   minimal userspace sketch of one way to do this follows this
   message).
2. Use any upstream kernel [I used 4.14-rc1] as the NFS client.
3. Let the mount be idle for 5 minutes so that autoclose gets
   triggered.
4. After this, the client stays in the FIN_WAIT_2 state [we can
   observe it with 'netstat -an | grep 2049'].
5. At this point no new NFS connection is allowed on this port, so the
   mount hangs for applications.

-Thanks,
Manjunath
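[For step 1, a kernel change is not strictly required: a userspace
listener that never closes its accepted socket shows the same
behavior, since the kernel ACKs the peer's FIN but sends no FIN of its
own while the process still holds the descriptor. A minimal sketch
under that assumption; port 12049 is an arbitrary stand-in, since
binding 2049 would require the real NFS server to be out of the way.]

#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
	struct sockaddr_in addr;
	int one = 1;
	int lfd = socket(AF_INET, SOCK_STREAM, 0);

	setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(12049); /* stand-in for 2049 */

	if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(lfd, 1) < 0) {
		perror("bind/listen");
		return 1;
	}
	for (;;) {
		int cfd = accept(lfd, NULL, NULL);
		if (cfd < 0)
			continue;
		/* Never shutdown()/close(cfd): the client's FIN is
		 * ACKed by the kernel, but our FIN is never sent, so
		 * the client parks in FIN_WAIT_2. */
		pause();
	}
}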
* Re: [Bug ?] Permanent FIN_WAIT_2 state on NFS client with bad NFS server
From: Trond Myklebust @ 2017-10-06 20:51 UTC
To: mbpatil.linux@gmail.com, dwysocha@redhat.com
Cc: linux-nfs@vger.kernel.org, manjunath.b.patil@oracle.com

On Fri, 2017-10-06 at 12:13 -0700, Manjunath Patil wrote:
> I tried to increase this value. After setting it to a large value
> [60*60], I could see the client staying in the FIN_WAIT_2 state
> forever.
>
> To repeat, my test case is:
> 1. Take an NFS server and make it not send the FIN on port 2049.
> 2. Use any upstream kernel [I used 4.14-rc1] as the NFS client.
> 3. Let the mount be idle for 5 minutes so that autoclose gets
>    triggered.
> 4. After this, the client stays in the FIN_WAIT_2 state [we can
>    observe it with 'netstat -an | grep 2049'].
> 5. At this point no new NFS connection is allowed on this port, so
>    the mount hangs for applications.

What do you mean when you say "make it not send FIN"? Are you just
filtering all packets with a FIN flag set? Normally, a FIN is expected
to be ACKed by the recipient so that it can be retransmitted if lost.

However, even if it does not receive the FIN from the server, then the
FIN_WAIT2 state should automatically time out after
/proc/sys/net/ipv4/tcp_fin_timeout seconds (see the description in the
SO_LINGER2 socket option). Isn't this working?

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com
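[On the SO_LINGER2 point: the per-socket knob on Linux is the
TCP_LINGER2 socket option, which overrides net.ipv4.tcp_fin_timeout
for that socket. A minimal sketch; the 30-second value is arbitrary,
and the timeout only bounds FIN_WAIT_2 for orphaned sockets, i.e.
after close().]

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int secs = 30; /* cap FIN_WAIT_2 at 30s for this socket */

	if (setsockopt(fd, IPPROTO_TCP, TCP_LINGER2, &secs, sizeof(secs)) < 0)
		perror("setsockopt(TCP_LINGER2)");

	/* ... connect, use, then close(fd); once orphaned, FIN_WAIT_2
	 * expires after at most `secs` even without the peer's FIN. */
	close(fd);
	return 0;
}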
* Re: [Bug ?] Permanent FIN_WAIT_2 state on NFS client with bad NFS server
From: Manjunath Patil @ 2017-10-06 22:12 UTC
To: Trond Myklebust
Cc: dwysocha@redhat.com, linux-nfs@vger.kernel.org, manjunath.b.patil@oracle.com

On Fri, Oct 6, 2017 at 1:51 PM, Trond Myklebust <trondmy@primarydata.com> wrote:
> What do you mean when you say "make it not send FIN"? Are you just
> filtering all packets with a FIN flag set? Normally, a FIN is
> expected to be ACKed by the recipient so that it can be retransmitted
> if lost.
In my test case I prevented the TCP layer itself [by a code change]
from sending the FIN packet on port 2049.

The client sends a FIN and gets an ACK; then the client expects the
final FIN, which the server never sends.

> However, even if it does not receive the FIN from the server, then
> the FIN_WAIT2 state should automatically time out after
> /proc/sys/net/ipv4/tcp_fin_timeout seconds (see the description in
> the SO_LINGER2 socket option). Isn't this working?
I think this behavior is true only for a full close of the socket. The
present issue happens only with autoclose. The autoclose behavior was
changed from a full close to a half close by the following commit:
caf4ccd SUNRPC: Make xs_tcp_close() do a socket shutdown rather than a
sock_release

The following commit might be related too:
9cbc94f SUNRPC: Remove TCP socket linger code

-Thanks,
Manjunath
* Re: [Bug ?] Permanent FIN_WAIT_2 state on NFS client with bad NFS server
From: Trond Myklebust @ 2017-10-06 22:38 UTC
To: mbpatil.linux@gmail.com
Cc: linux-nfs@vger.kernel.org, dwysocha@redhat.com, manjunath.b.patil@oracle.com

On Fri, 2017-10-06 at 15:12 -0700, Manjunath Patil wrote:
> I think this behavior is true only for a full close of the socket.
> The present issue happens only with autoclose. The autoclose behavior
> was changed from a full close to a half close by the following
> commit:
> caf4ccd SUNRPC: Make xs_tcp_close() do a socket shutdown rather than
> a sock_release
>
> The following commit might be related too:
> 9cbc94f SUNRPC: Remove TCP socket linger code

[trondmy@leira linux]$ grep sock_shutdown net/sunrpc/xprtsock.c
	kernel_sock_shutdown(sock, SHUT_RDWR);
		kernel_sock_shutdown(sock, SHUT_RDWR);

SHUT_RDWR is a full close AFAIK...

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com
* Re: [Bug ?] Permanent FIN_WAIT_2 state on NFS client with bad NFS server
From: Trond Myklebust @ 2017-10-08 17:58 UTC
To: mbpatil.linux@gmail.com
Cc: linux-nfs@vger.kernel.org, dwysocha@redhat.com, manjunath.b.patil@oracle.com

On Fri, 2017-10-06 at 15:12 -0700, Manjunath Patil wrote:
> I think this behavior is true only for a full close of the socket.
> The present issue happens only with autoclose. The autoclose behavior
> was changed from a full close to a half close by the following
> commit:
> caf4ccd SUNRPC: Make xs_tcp_close() do a socket shutdown rather than
> a sock_release

I thought you said in your very first email, that you were talking
about 4.14-rc1? For that kernel, autoclose calls:

/**
 * xs_tcp_shutdown - gracefully shut down a TCP socket
 * @xprt: transport
 *
 * Initiates a graceful shutdown of the TCP socket by calling the
 * equivalent of shutdown(SHUT_RDWR);
 */
static void xs_tcp_shutdown(struct rpc_xprt *xprt)
{
        struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
        struct socket *sock = transport->sock;

        if (sock == NULL)
                return;
        if (xprt_connected(xprt)) {
                kernel_sock_shutdown(sock, SHUT_RDWR);
                trace_rpc_socket_shutdown(xprt, sock);
        } else
                xs_reset_transport(transport);
}

which, as I said above, does a _full close_.

-- 
Trond Myklebust
Linux NFS client maintainer, PrimaryData
trond.myklebust@primarydata.com
Thread overview: 8 messages
2017-09-20 22:17 [Bug ?] Permanent FIN_WAIT_2 state on NFS client with bad NFS server Manjunath Patil
2017-09-21 17:05 ` David Wysochanski
2017-09-22 19:21 ` Manjunath Patil
2017-10-06 19:13 ` Manjunath Patil
2017-10-06 20:51 ` Trond Myklebust
2017-10-06 22:12 ` Manjunath Patil
2017-10-06 22:38 ` Trond Myklebust
2017-10-08 17:58 ` Trond Myklebust