From: Rick Jones <rick.jones2@hpe.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Eric Dumazet <edumazet@google.com>,
"David S . Miller" <davem@davemloft.net>,
netdev <netdev@vger.kernel.org>,
Tom Herbert <tom@herbertland.com>
Subject: Re: [RFC net-next 2/2] udp: No longer use SLAB_DESTROY_BY_RCU
Date: Mon, 28 Mar 2016 12:11:19 -0700
Message-ID: <56F981D7.2050801@hpe.com>
In-Reply-To: <1459191346.6473.111.camel@edumazet-glaptop3.roam.corp.google.com>
On 03/28/2016 11:55 AM, Eric Dumazet wrote:
> On Mon, 2016-03-28 at 11:44 -0700, Rick Jones wrote:
>> On 03/28/2016 10:00 AM, Eric Dumazet wrote:
>>> If you mean that a busy DNS resolver spends _most_ of its time doing :
>>>
>>> fd = socket()
>>> bind(fd port=0)
>>> < send and receive one frame >
>>> close(fd)
>>
>> Yes.  Although it has been a long time, I thought that, say, the
>> likes of a caching named sitting between hosts and the rest of the
>> DNS would behave that way as it looked up names on behalf of those
>> who asked it.
>
> I really doubt a modern program would dynamically allocate one UDP
> port for every in-flight request, as it would limit them to as many
> concurrent requests as there are ephemeral ports (~30000, assuming
> the process can get them all on the host)
I was under the impression that individual DNS queries were supposed
not only to have random DNS query IDs but also to originate from
random UDP source ports.  Section 4.5 of
https://tools.ietf.org/html/rfc5452 at least touches on the topic,
though I don't see it making that a hard-and-fast requirement.
Section 10, however, is more explicit:
    This document recommends the use of UDP source port number
    randomization to extend the effective DNS transaction ID beyond the
    available 16 bits.
That being the case, if there were indeed 30000-odd concurrent
requests outstanding "upstream" from that location, there would have
to be 30000 ephemeral ports in play.
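
For what it's worth, here is a minimal sketch of the per-query flow
described above (purely my own illustration, not code from any actual
resolver; the function name, addresses and buffers are made up):

/*
 * Hypothetical per-query flow: one random ephemeral UDP port per
 * request (socket / bind to port 0 / one exchange / close).
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

static ssize_t do_one_query(const struct sockaddr_in *resolver_addr,
                            const void *query, size_t qlen,
                            void *answer, size_t alen)
{
        struct sockaddr_in local = { .sin_family = AF_INET }; /* sin_port == 0 */
        ssize_t n = -1;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0)
                return -1;

        /* Port 0 asks the kernel for a random ephemeral port; that is
         * the source-port entropy RFC 5452 wants on top of the 16-bit
         * DNS transaction ID. */
        if (bind(fd, (struct sockaddr *)&local, sizeof(local)) == 0 &&
            sendto(fd, query, qlen, 0,
                   (const struct sockaddr *)resolver_addr,
                   sizeof(*resolver_addr)) >= 0)
                n = recv(fd, answer, alen, 0);

        close(fd);      /* the ephemeral port is released here */
        return n;
}

Written that way, each outstanding query pins one ephemeral port for
its lifetime, which is where the ~30000 ceiling comes from.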
rick
>
> Managing a pool would be more efficient (the 1.3 usec penalty becomes
> more like 4 usec in multi-threaded programs)
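
(Just as a sketch of what I take "managing a pool" to mean here, and
not anything actually proposed in this thread: pre-bind a fixed set of
UDP sockets, each on its own ephemeral port, and borrow/return them
per request instead of paying socket()+bind()+close() every time.
POOL_SIZE and the mutex are my own assumptions.)

#include <pthread.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

#define POOL_SIZE 256                   /* arbitrary */

static int pool_fd[POOL_SIZE];
static int pool_busy[POOL_SIZE];
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Bind every socket once, up front, each to a random ephemeral port. */
static void pool_init(void)
{
        struct sockaddr_in local = { .sin_family = AF_INET }; /* port 0 */
        int i;

        for (i = 0; i < POOL_SIZE; i++) {
                pool_fd[i] = socket(AF_INET, SOCK_DGRAM, 0);
                bind(pool_fd[i], (struct sockaddr *)&local, sizeof(local));
        }
}

/* Borrow a pre-bound socket; -1 means the pool is exhausted. */
static int pool_get(void)
{
        int i, fd = -1;

        pthread_mutex_lock(&pool_lock);
        for (i = 0; i < POOL_SIZE; i++) {
                if (!pool_busy[i]) {
                        pool_busy[i] = 1;
                        fd = pool_fd[i];
                        break;
                }
        }
        pthread_mutex_unlock(&pool_lock);
        return fd;
}

/* Hand a socket back without closing it, so its port is kept. */
static void pool_put(int fd)
{
        int i;

        pthread_mutex_lock(&pool_lock);
        for (i = 0; i < POOL_SIZE; i++)
                if (pool_fd[i] == fd)
                        pool_busy[i] = 0;
        pthread_mutex_unlock(&pool_lock);
}

The obvious trade-off is that the ports are then long-lived rather
than fresh per query, which pulls against the RFC 5452 randomization
point above.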
>
> Sure, you can always find badly written programs, but they already
> hit scalability issues anyway.
>
> UDP refcounting costs about 2 cache line misses per packet in stress
> situations; this really has to go, so that well-written programs can
> get full speed.
>
>
Thread overview: 15+ messages
2016-03-25 22:29 [RFC net-next 0/2] udp: use standard RCU rules Eric Dumazet
2016-03-25 22:29 ` [RFC net-next 1/2] net: add SOCK_RCU_FREE socket flag Eric Dumazet
2016-03-26 0:08 ` Tom Herbert
2016-03-25 22:29 ` [RFC net-next 2/2] udp: No longer use SLAB_DESTROY_BY_RCU Eric Dumazet
2016-03-26 0:08 ` Tom Herbert
2016-03-28 21:02 ` Eric Dumazet
2016-03-26 1:55 ` Alexei Starovoitov
2016-03-28 16:15 ` Rick Jones
2016-03-28 16:54 ` Tom Herbert
2016-03-28 17:00 ` Eric Dumazet
2016-03-28 18:44 ` Rick Jones
2016-03-28 18:55 ` Eric Dumazet
2016-03-28 19:11 ` Rick Jones [this message]
2016-03-28 20:01 ` Eric Dumazet
2016-03-28 20:15 ` Rick Jones