From: Jon Maloy <jon.maloy@ericsson.com>
To: Neil Horman <nhorman@tuxdriver.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>,
David Miller <davem@davemloft.net>, <netdev@vger.kernel.org>,
Ying Xue <ying.xue@windriver.com>
Subject: Re: [PATCH net-next 2/3] tipc: byte-based overload control on socket receive queue
Date: Tue, 19 Feb 2013 18:54:14 +0100 [thread overview]
Message-ID: <5123BC46.40909@ericsson.com> (raw)
In-Reply-To: <20130219142629.GA31871@hmsreliant.think-freely.org>
On 02/19/2013 03:26 PM, Neil Horman wrote:
> On Tue, Feb 19, 2013 at 09:07:54AM +0100, Jon Maloy wrote:
>> On 02/18/2013 09:47 AM, Neil Horman wrote:
>>> On Fri, Feb 15, 2013 at 05:57:46PM -0500, Paul Gortmaker wrote:
>>>> From: Ying Xue <ying.xue@windriver.com>
>>>>
>>>> Change overload control to be purely byte-based, using
>>>> sk->sk_rmem_alloc as byte counter, and compare it to a calculated
>>>> upper limit for the socket receive queue.
>>
>> [...]
>>
>>>> + *
>>>> + * For all connectionless messages, by default new queue limits are
>>>> + * as belows:
>>>> + *
>>>> + * TIPC_LOW_IMPORTANCE (5MB)
>>>> + * TIPC_MEDIUM_IMPORTANCE (10MB)
>>>> + * TIPC_HIGH_IMPORTANCE (20MB)
>>>> + * TIPC_CRITICAL_IMPORTANCE (40MB)
>>>> + *
>>>> + * Returns overload limit according to corresponding message importance
>>>> + */
>>>> +static unsigned int rcvbuf_limit(struct sock *sk, struct sk_buff *buf)
>>>> +{
>>>> + struct tipc_msg *msg = buf_msg(buf);
>>>> + unsigned int limit;
>>>> +
>>>> + if (msg_connected(msg))
>>>> + limit = CONN_OVERLOAD_LIMIT;
>>> This still strikes me as a bit wierd. If you really can't tolerate the default
>>> rmem settings in proc, have you considered separating the rmem and wmem values
>>> out into their own sysctls?
>>
>> Initially we tried to set this value as default for sk_rcvbuf, and then use
>> fractions of it as limits, as you suggest below. The problem we found was that
>> if we want to change this via setsockopt(SOL_SOCKET, SO_RCVBUF) the value range
>> we can use is very limited, and doesn't fit our purposes.
>>
> Can you elaborate on this please? The above doesn't really explain why you
> can't do what I suggested. Not asserting that what you say is untrue, mind you,
> I'm just trying to understand what it is about TIPC that requires such a
> specific reception buffer envelope,
There are two reasons for this.
The first one is due to the message-oriented nature of the flow control for
connections. The max message size is 65k, and the max number of unacked messages
(at socket level, that is) before the sending process takes a break
is 1024.
So, simple maths shows that we must allow for 64 MB + sk_overhead to guarantee
that a connection is never broken because of receiver overload. Unlike TCP,
we don't have the luxury of just dropping packets and expecting them to be
retransmitted, because in TIPC they have already passed through the
retransmission and defragmentation layers, and are committed for delivery once
they reach the receiving socket.
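The worst-case figure above follows directly from the two limits; a minimal sketch of the arithmetic (the function name is mine, and "65k" is taken as 64 KB per message, which is what makes the 64 MB total come out):

```c
/* Worst-case per-connection backlog from the figures above:
 * up to 1024 unacked messages of up to 64 KB each may be queued
 * on one receiving socket, before counting sk_overhead. */
unsigned long tipc_worst_case_backlog(void)
{
	unsigned long max_msg_bytes = 64UL * 1024;	/* max message size */
	unsigned long max_unacked   = 1024;		/* socket-level window */

	return max_msg_bytes * max_unacked;		/* 64 MB */
}
```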
You may question the wisdom of having a message-oriented flow control algorithm,
but this is the way it is now. We do actually have a working prototype where we
introduce purely byte-based flow control, but it is not ready for delivery yet,
and compatibility constraints will require that we still keep this high limit
in some cases.
The second reason is that TIPC provides SOCK_RDM instead of SOCK_DGRAM as its
basic datagram service. (It is the only transport service doing this, afaik.)
So the risk of having datagram messages rejected (not dropped), for any reason,
must be extremely low. ("Rejected" here means that we notify the sender when a
message cannot be delivered; we don't just silently drop it.)
Given that we have also often seen designs with many clients sending towards
one server, we have empirical evidence that we must have a high threshold here.
One can discuss whether 2 MB or 5 MB is the adequate limit for the lowest level;
we don't really know, since this is a new design, but we have every reason to
believe that the upper limit permitted by setsockopt(SOL_SOCKET) (425,984 bytes
according to a quick test I made) is not enough for us.
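The quick test referred to above can be reproduced in userspace: the kernel clamps an unprivileged SO_RCVBUF request to twice net.core.rmem_max (the doubling accounts for sk_buff overhead), so with the common rmem_max default of 212992 the effective ceiling is 2 * 212992 = 425,984 bytes. A sketch, with the helper name my own:

```c
#include <sys/socket.h>
#include <unistd.h>

/* Request a receive buffer of 'requested' bytes and return what the
 * kernel actually granted. Without CAP_NET_ADMIN the result is capped
 * at 2 * net.core.rmem_max, regardless of how large the request is. */
int effective_rcvbuf(int requested)
{
	int sock = socket(AF_INET, SOCK_DGRAM, 0);
	int effective = 0;
	socklen_t len = sizeof(effective);

	setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
		   &requested, sizeof(requested));
	getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &effective, &len);
	close(sock);
	return effective;
}
```

Asking for, say, 32 MB and reading the value back shows the clamped result, which is why SO_RCVBUF alone cannot express the multi-megabyte limits discussed here.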
> and how enforcing queue limits here is so
> important when packets could just as easily be dropped at the ip layer (with
> ostensibly no fatal failure).
There is no IP layer. TIPC and its retransmission layer sit directly on top of
L2/Ethernet. Nothing can be dropped below that layer, for obvious reasons,
and dropping anything above it *has* fatal consequences, because TIPC
guarantees delivery both for connection-oriented and (as far as at all
possible) connectionless services.
Anyway, only the receiving socket contains the info that makes it possible to
select which messages to reject, in the rare cases where that becomes
unavoidable.
>
>> We did consider to introduce a separate setsockopt at TIPC level for this,
>> but thought it had a value in itself to use the mechanism that is already there.
>> Hence the "re-interpretation" of sk_rcvbuf as we do below.
>> Considering the weird doubling of this parameter that is done elsewhere in the
>> code we thought that having our own interpretation might be acceptable.
> Thats quite different IMHO. The comments in sock_setsockopt make it pretty
> clear that the doubling of the rcvbuf value is done to account for the sk_buff
> overhead of packet reception, and thats documented in the socket(7) man page.
> What you have here is the ability to set sk_rcvbuf, and then have that setting
> be ignored, but only to within several different limits, depending on various
> conditions, all of which are not visible to user space.
>
>> We did of course see the potential issue with this, that is why we cc-ed
>> you for comments.
> I appreciate that.
>
>> Now I see that David already pulled the series, so I am a little uncertain
>> about how to proceed.
> I saw that too, and asked him about this. A follow-on patch (if we wind up
> deciding one is warranted) is the way to go here.
Good. The best solution I see now, if you think the times-32 scaling is
unacceptable, would be to introduce a setsockopt() at SOL_TIPC level, and allow
for much wider limits than SOL_SOCKET permits now. That we need these wider
limits is beyond doubt, as I see it.
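For concreteness, such a TIPC-level option might look something like the sketch below. To be clear, this is purely illustrative: the TIPC_RCV_LIMIT option name, its number, and its semantics are all invented here; only SOL_TIPC itself (protocol-level option 271 from linux/socket.h) is real.

```c
#include <sys/socket.h>

#ifndef SOL_TIPC
#define SOL_TIPC 271		/* from linux/socket.h */
#endif

/* Hypothetical TIPC-level receive-limit option; the name and the
 * option number are made up for illustration only. */
#define TIPC_RCV_LIMIT 130

/* A dedicated SOL_TIPC option would bypass the SOL_SOCKET/SO_RCVBUF
 * clamp entirely, so limits in the tens of MB become expressible. */
int set_tipc_rcv_limit(int sock, unsigned int bytes)
{
	return setsockopt(sock, SOL_TIPC, TIPC_RCV_LIMIT,
			  &bytes, sizeof(bytes));
}
```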
Regards
///jon
>
> Regards
> Neil
>
Thread overview: 21+ messages
2013-02-15 22:57 [PATCH net-next 0/3] tipc: two cleanups, plus overload respin Paul Gortmaker
2013-02-15 22:57 ` [PATCH net-next 1/3] tipc: eliminate duplicated discard_rx_queue routine Paul Gortmaker
2013-02-15 22:57 ` [PATCH net-next 2/3] tipc: byte-based overload control on socket receive queue Paul Gortmaker
2013-02-18 14:47 ` Neil Horman
2013-02-19 8:07 ` Jon Maloy
2013-02-19 14:26 ` Neil Horman
2013-02-19 17:54 ` Jon Maloy [this message]
2013-02-19 19:18 ` Neil Horman
2013-02-19 20:16 ` Jon Maloy
2013-02-19 21:44 ` Neil Horman
2013-02-21 10:24 ` Jon Maloy
2013-02-21 15:07 ` Neil Horman
2013-02-21 16:54 ` Jon Maloy
2013-02-21 18:16 ` Neil Horman
2013-02-21 21:05 ` Jon Maloy
2013-02-21 21:35 ` Neil Horman
2013-02-22 11:18 ` Jon Maloy
2013-02-22 11:54 ` David Laight
2013-02-22 12:08 ` Neil Horman
2013-02-15 22:57 ` [PATCH net-next 3/3] tipc: remove redundant checking for the number of iovecs in a send request Paul Gortmaker
2013-02-18 17:22 ` [PATCH net-next 0/3] tipc: two cleanups, plus overload respin David Miller