From: "Mkrtchyan, Tigran" <tigran.mkrtchyan@desy.de>
To: Weston Andros Adamson <dros@netapp.com>
Cc: linux-nfs <linux-nfs@vger.kernel.org>,
Andy Adamson <William.Adamson@netapp.com>,
Steve Dickson <steved@redhat.com>
Subject: Re: DoS with NFSv4.1 client
Date: Thu, 10 Oct 2013 17:11:52 +0200 (CEST) [thread overview]
Message-ID: <1777495627.591249.1381417912773.JavaMail.zimbra@desy.de> (raw)
In-Reply-To: <1732606875.591017.1381416532236.JavaMail.zimbra@desy.de>
This is probably a question for the IETF working group, but anyway:
if my layout has the 'return-on-close' flag set and the open stateid
is no longer valid, should the client expect the layout to still be valid?
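
To make the question concrete, here is a minimal sketch of the decision
I have in mind (purely illustrative; the types and names are invented
and this is not the Linux client code):

/* Illustrative only: hypothetical types and names, not the Linux client code. */
#include <stdbool.h>
#include <stdio.h>

struct stateid { unsigned int seqid; unsigned char other[12]; };

struct layout {
    bool return_on_close;        /* logr_return_on_close from LAYOUTGET */
    struct stateid open_stateid; /* open stateid the layout was obtained under */
    bool valid;
};

/* Called when the open stateid behind the layout has just been declared
 * bad and a new OPEN was done.  The question is which branch is correct. */
static void open_stateid_invalidated(struct layout *lo)
{
    if (lo->return_on_close) {
        /* Reading 1: the layout's lifetime is tied to the open, so drop
         * it and do a fresh LAYOUTGET under the new open stateid. */
        lo->valid = false;
    }
    /* Reading 2: the layout stateid has its own lifetime and the layout
     * stays usable until LAYOUTRETURN or CB_LAYOUTRECALL. */
}

int main(void)
{
    struct layout lo = { .return_on_close = true, .valid = true };
    open_stateid_invalidated(&lo);
    printf("layout still valid: %s\n", lo.valid ? "yes" : "no");
    return 0;
}
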
Tigran.
----- Original Message -----
> From: "Tigran Mkrtchyan" <tigran.mkrtchyan@desy.de>
> To: "Weston Andros Adamson" <dros@netapp.com>
> Cc: "linux-nfs" <linux-nfs@vger.kernel.org>, "Andy Adamson" <William.Adamson@netapp.com>, "Steve Dickson"
> <steved@redhat.com>
> Sent: Thursday, October 10, 2013 4:48:52 PM
> Subject: Re: DoS with NFSv4.1 client
>
>
>
> ----- Original Message -----
> > From: "Weston Andros Adamson" <dros@netapp.com>
> > To: "Tigran Mkrtchyan" <tigran.mkrtchyan@desy.de>
> > Cc: "<linux-nfs@vger.kernel.org>" <linux-nfs@vger.kernel.org>, "Andy
> > Adamson" <William.Adamson@netapp.com>, "Steve
> > Dickson" <steved@redhat.com>
> > Sent: Thursday, October 10, 2013 4:35:25 PM
> > Subject: Re: DoS with NFSv4.1 client
> >
> > Well, it'd be nice not to loop forever, but my question remains: is this
> > due to a server bug (the DS not knowing about the new stateid from the MDS)?
> >
>
> Up to now, we have pushed the open stateid to the DS only on LAYOUTGET.
> This has to be changed, as the behaviour is not spec-compliant.
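
To illustrate what I mean by 'pushing the open stateid to the DS', here
is a rough sketch of the change on the MDS side. All names are invented;
this is not dCache code:

/* Sketch only: all names here are invented; this is not dCache code.
 * Today the DS learns an open stateid only as a side effect of
 * LAYOUTGET.  The idea of the change: whenever OPEN creates or updates
 * an open stateid for a file that still has layouts outstanding, the
 * MDS also forwards the fresh stateid to every DS holding a segment,
 * so a READ carrying the new stateid is not rejected with BAD_STATEID. */
#include <stdio.h>

struct stateid { unsigned int seqid; unsigned char other[12]; };

/* Hypothetical MDS->DS control-path call (stubbed out for the sketch). */
static void ds_register_stateid(const char *ds_addr, const struct stateid *sid)
{
    printf("push stateid seq=%u to DS %s\n", sid->seqid, ds_addr);
}

/* Hypothetical MDS hook, invoked from the OPEN processing path. */
static void mds_on_open(const struct stateid *new_open_sid,
                        const char **ds_list, int nds)
{
    for (int i = 0; i < nds; i++)
        ds_register_stateid(ds_list[i], new_open_sid);
}

int main(void)
{
    const char *ds[] = { "ds1.example.org", "ds2.example.org" };
    struct stateid sid = { .seqid = 2 };
    mds_on_open(&sid, ds, 2);
    return 0;
}

Whether this happens on the LAYOUTGET path or via a separate control-path
call is an implementation detail; the point is that the DS has to learn
about the new open stateid.
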
>
> Tigran.
>
> > -dros
> >
> > On Oct 10, 2013, at 10:14 AM, Weston Andros Adamson <dros@netapp.com>
> > wrote:
> >
> > > So is this a server bug? It seems like the client is behaving
> > > correctly...
> > >
> > > -dros
> > >
> > > On Oct 10, 2013, at 5:56 AM, "Mkrtchyan, Tigran"
> > > <tigran.mkrtchyan@desy.de>
> > > wrote:
> > >
> > >>
> > >>
> > >> Today we were 'lucky' enough to hit this situation during the day.
> > >> Here is what happens:
> > >>
> > >> The client sends an OPEN and gets an open stateid.
> > >> This is followed by LAYOUTGET ... and a READ to the DS.
> > >> At some point, the server returns BAD_STATEID.
> > >> This triggers the client to issue a new OPEN and use the
> > >> new open stateid in the READ request to the DS. As the new
> > >> stateid is not known to the DS, the DS keeps returning
> > >> BAD_STATEID and this becomes an infinite loop.
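
For illustration, roughly the cycle described above, and the kind of
upper bound that would keep it from turning into a DoS (invented names,
not the actual client code):

/* Sketch only (invented names, not the actual client code): the cycle
 * above, with an explicit bound.  ds_read() always fails here, exactly
 * as in our incident, because the DS never learns the new stateid. */
#include <stdio.h>

enum nfsstat { NFS4_OK = 0, NFS4ERR_BAD_STATEID = 10025 };

static enum nfsstat ds_read(void)         /* hypothetical DS READ */
{
    return NFS4ERR_BAD_STATEID;
}

static void reopen_at_mds(int attempt)    /* hypothetical recovery step */
{
    printf("OPEN #%d sent to MDS\n", attempt);
}

#define MAX_STATEID_RETRIES 5             /* arbitrary bound, for illustration */

static int do_pnfs_read(void)
{
    for (int attempt = 1; attempt <= MAX_STATEID_RETRIES; attempt++) {
        if (ds_read() == NFS4_OK)
            return 0;
        /* Without a bound like this (or a fallback to READ through the
         * MDS), every BAD_STATEID from the DS triggers another OPEN:
         * that is the ~17 million OPENs we measured. */
        reopen_at_mds(attempt);
    }
    return -1;   /* give up instead of looping forever */
}

int main(void)
{
    return do_pnfs_read() ? 1 : 0;
}
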
> > >>
> > >> Regards,
> > >> Tigran.
> > >>
> > >>
> > >>
> > >> ----- Original Message -----
> > >>> From: "Tigran Mkrtchyan" <tigran.mkrtchyan@desy.de>
> > >>> To: linux-nfs@vger.kernel.org
> > >>> Cc: "Andy Adamson" <william.adamson@netapp.com>, "Steve Dickson"
> > >>> <steved@redhat.com>
> > >>> Sent: Wednesday, October 9, 2013 10:48:32 PM
> > >>> Subject: DoS with NFSv4.1 client
> > >>>
> > >>>
> > >>> Hi,
> > >>>
> > >>> last night we got a DoS attack from one of the NFS clients.
> > >>> The farm node, which was accessing data with pNFS,
> > >>> went mad and tried to kill the dCache NFS server. As usual,
> > >>> this happened overnight and we were not able to
> > >>> capture network traffic or bump the debug level.
> > >>>
> > >>> The symptoms are:
> > >>>
> > >>> The client starts to bombard the MDS with OPEN requests. As we see
> > >>> state created on the server side, the requests were processed by the
> > >>> server. Nevertheless, for some reason, the client did not like the
> > >>> result. Here is the output of mountstats:
> > >>>
> > >>> OPEN:
> > >>>   17087065 ops (99%)  1 retrans (0%)  0 major timeouts
> > >>>   avg bytes sent per op: 356  avg bytes received per op: 455
> > >>>   backlog wait: 0.014707  RTT: 4.535704  total execute time: 4.574094 (milliseconds)
> > >>> CLOSE:
> > >>>   290 ops (0%)  0 retrans (0%)  0 major timeouts
> > >>>   avg bytes sent per op: 247  avg bytes received per op: 173
> > >>>   backlog wait: 308.827586  RTT: 1748.479310  total execute time: 2057.365517 (milliseconds)
> > >>>
> > >>>
> > >>> As you can see, there is quite a big difference between the number of
> > >>> OPEN and CLOSE requests.
> > >>> We can see the same picture on the server side as well:
> > >>>
> > >>> NFSServerV41 Stats:    average±stderr(ns)      min(ns)        max(ns)    Samples
> > >>> DESTROY_SESSION            26056±4511.89         13000          97000         17
> > >>> OPEN                     1197297±0.00           816000    31924558000   54398533
> > >>> RESTOREFH                      0±0.00                0    25018778000   54398533
> > >>> SEQUENCE                    1000±0.00             1000    26066722000   55601046
> > >>> LOOKUP                   4607959±0.00           375000    26977455000      32118
> > >>> GETDEVICEINFO              13158±100.88           4000         655000      11378
> > >>> CLOSE                   16236211±0.00             5000    21021819000      20420
> > >>> LAYOUTGET              271736361±0.00         10003000    68414723000      21095
> > >>>
> > >>> The last column is the number of requests.
> > >>>
> > >>> This is with RHEL6.4 as the client. By looking at the code,
> > >>> I can see a loop at nfs4proc.c#nfs4_do_open() which could be
> > >>> the cause of the problem. Nevertheless, I can't
> > >>> find any reason why this loop turned into an 'infinite' one.
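
For what it is worth, the shape of the loop I suspect looks roughly like
the sketch below (heavily simplified, invented names, not the actual
nfs4_do_open() code); the counter is there only so the example terminates:

/* Heavily simplified sketch (invented names, NOT the actual nfs4_do_open()
 * code) of the loop shape I suspect: the OPEN itself succeeds at the MDS,
 * but whatever made us retry is never repaired, so the retry condition
 * stays true.  The counter below exists only so this example terminates;
 * the point is that the real loop appears to have no such bound. */
#include <stdio.h>
#include <stdbool.h>

#define NFS4ERR_BAD_STATEID (-10025)

static int send_open_and_use_state(void)   /* hypothetical */
{
    /* Models our incident: state is created on the server, but using
     * the resulting stateid still fails. */
    return NFS4ERR_BAD_STATEID;
}

static bool should_retry(int status)        /* hypothetical */
{
    return status == NFS4ERR_BAD_STATEID;
}

int main(void)
{
    int status, attempts = 0;

    do {
        status = send_open_and_use_state();
        attempts++;
    } while (should_retry(status) && attempts < 20 /* demo bound only */);

    printf("gave up after %d OPEN attempts, status %d\n", attempts, status);
    return status ? 1 : 0;
}
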
> > >>>
> > >>> In the end our server ran out of memory and returned
> > >>> NFSERR_SERVERFAULT to the client. This triggered the client to
> > >>> re-establish the session, and all open stateids were
> > >>> invalidated and cleaned up.
> > >>>
> > >>> I am still trying to reproduce this behavior (on client
> > >>> and server) and any hint is welcome.
> > >>>
> > >>> Tigran.
Thread overview: 8+ messages
[not found] <1201078747.580554.1381350008792.JavaMail.zimbra@desy.de>
2013-10-09 20:48 ` DoS with NFSv4.1 client Mkrtchyan, Tigran
2013-10-10 9:56 ` Mkrtchyan, Tigran
2013-10-10 14:14 ` Weston Andros Adamson
2013-10-10 14:35 ` Weston Andros Adamson
2013-10-10 14:48 ` Mkrtchyan, Tigran
2013-10-10 15:11 ` Mkrtchyan, Tigran [this message]
2013-10-10 15:39 ` Adamson, Andy
[not found] ` <24F8F3E5-578D-43C1-83B8-F3310526D4AE@netapp.com>
[not found] ` <22165921.590399.1381413800631.JavaMail.zimbra@desy.de>
[not found] ` <194A4163-11C3-453B-9E3C-6B1460A7E57A@netapp.com>
2013-10-10 14:42 ` Adamson, Andy