From: David Miller <davem@davemloft.net>
To: dhowells@redhat.com
Cc: linux-afs@lists.infradead.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 01/11] rxrpc: Add a common object cache
Date: Tue, 08 Mar 2016 22:00:04 -0500 (EST)
Message-ID: <20160308.220004.1907883099752626015.davem@davemloft.net>
In-Reply-To: <24950.1457471469@warthog.procyon.org.uk>
From: David Howells <dhowells@redhat.com>
Date: Tue, 08 Mar 2016 21:11:09 +0000
> I can put in a limit per peer, where a 'peer' is either a particular remote
> UDP port or particular remote host. TCP has this by virtue of having a
> limited number of ports available per IP address. But if I have 10 IP
> addresses available, I can attempt to set up half a million TCP connections to
> a server simultaneously. If I have access to a box that has an NFS mount on
> it, I can potentially open sufficient TCP ports that the NFS mount can't make
> a connection if it's not allowed to use privileged ports.
You must hold onto, and commit local resources to, state for each and
every one of those remote TCP connections you create and actually move
to the established state.
It's completely different: both sides have to make a non-trivial
resource commitment.
For this RXRPC stuff, you don't.
That's the critical difference.
My core argument still stands that RXRPC is fundamentally DoS'able, in
a way that is not matched by TCP, our routing code, or similar
subsystems.
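
To make the per-peer limit idea discussed above concrete, here is a
minimal, hypothetical sketch in plain C (using <stdatomic.h>; the
struct and function names are illustrative only and are not the
objcache API from this patch series) of capping how many cached
objects a single remote peer may pin before further setup packets are
simply dropped:

	/*
	 * Hypothetical sketch only -- not the rxrpc objcache API from this
	 * series.  Charge each cached object to the remote peer it came from
	 * and refuse to create more once that peer hits an illustrative cap.
	 */
	#include <stdatomic.h>
	#include <stdbool.h>

	#define PEER_OBJ_LIMIT	64		/* illustrative per-peer cap */

	struct peer {
		atomic_int nr_objects;		/* objects charged to this peer */
	};

	/* Try to charge one more object to @peer; false means "drop the packet". */
	static bool peer_charge_object(struct peer *peer)
	{
		int old = atomic_load(&peer->nr_objects);

		do {
			if (old >= PEER_OBJ_LIMIT)
				return false;
		} while (!atomic_compare_exchange_weak(&peer->nr_objects,
						       &old, old + 1));
		return true;
	}

	/* Release the charge when the object is reaped from the cache. */
	static void peer_uncharge_object(struct peer *peer)
	{
		atomic_fetch_sub(&peer->nr_objects, 1);
	}

Such a cap bounds what any one peer can pin locally, but it does not
change the asymmetry argued above: the remote side still commits no
comparable state of its own when spraying setup packets from many
source addresses.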
Thread overview: 21+ messages
2016-03-07 14:37 [PATCH 00/11] RxRPC: Rewrite part 2 David Howells
2016-03-07 14:38 ` [PATCH 01/11] rxrpc: Add a common object cache David Howells
2016-03-07 18:42 ` David Miller
2016-03-07 22:45 ` David Howells
2016-03-08 4:07 ` David Miller
2016-03-08 11:39 ` David Howells
2016-03-08 20:13 ` David Miller
2016-03-08 21:11 ` David Howells
2016-03-09 3:00 ` David Miller [this message]
2016-03-08 13:02 ` David Howells
2016-03-08 20:15 ` David Miller
2016-03-07 14:38 ` [PATCH 02/11] rxrpc: Do procfs lists through objcache David Howells
2016-03-07 14:38 ` [PATCH 03/11] rxrpc: Separate local endpoint object handling out into its own file David Howells
2016-03-07 14:38 ` [PATCH 04/11] rxrpc: Implement local endpoint cache David Howells
2016-03-07 14:38 ` [PATCH 05/11] rxrpc: procfs file to list local endpoints David Howells
2016-03-07 14:38 ` [PATCH 06/11] rxrpc: Rename ar-local.c to local-event.c David Howells
2016-03-07 14:38 ` [PATCH 07/11] rxrpc: Rename ar-peer.c to peer-object.c David Howells
2016-03-07 14:38 ` [PATCH 08/11] rxrpc: Implement peer endpoint cache David Howells
2016-03-07 14:39 ` [PATCH 09/11] rxrpc: Add /proc/net/rxrpc_peers to display the known remote endpoints David Howells
2016-03-07 14:39 ` [PATCH 10/11] rxrpc: Rename ar-error.c to peer-event.c David Howells
2016-03-07 14:39 ` [PATCH 11/11] rxrpc: Rename rxrpc_UDP_error_report() to rxrpc_error_report() David Howells