* Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-12 23:20 UTC, Olaf van der Spek (17+ messages in thread)
From: Olaf van der Spek
To: Linux Kernel Mailing List

Hi,

Quite often in discussions, I see people claiming Unix sockets are
faster than TCP sockets on a connection that stays inside localhost.
Let's say from app A to app B.
Is this indeed the case and if so, how much and why?
My assumption is that the kernel can optimize the 'connection' and let
any performance differences disappear.

Greetings,

Olaf
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-13 11:24 UTC
From: Arnaldo Carvalho de Melo
To: Olaf van der Spek; +Cc: Linux Kernel Mailing List

On Thu, Nov 13, 2008 at 12:20:44AM +0100, Olaf van der Spek wrote:
> Quite often in discussions, I see people claiming Unix sockets are
> faster than TCP sockets on a connection that stays inside localhost.
> Let's say from app A to app B.
> Is this indeed the case and if so, how much and why?
> My assumption is that the kernel can optimize the 'connection' and let
> any performance differences disappear.

How much? Please measure.

Faster? Not necessarily; Nagle comes to mind, among others. What kind
of traffic? That matters too.

Start here: http://en.wikipedia.org/wiki/Nagle's_algorithm

- Arnaldo
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-13 19:06 UTC
From: Olaf van der Spek
To: Arnaldo Carvalho de Melo; +Cc: Linux Kernel Mailing List

On Thu, Nov 13, 2008 at 12:24 PM, Arnaldo Carvalho de Melo
<acme@redhat.com> wrote:
> How much? Please measure.

I can't be the first one to wonder about this. Has nobody done this
kind of benchmark before?

> Faster? Not necessarily; Nagle comes to mind,

Eh, wouldn't that make TCP slower instead of faster?

> among others. What kind of traffic? That matters too.
>
> Start here: http://en.wikipedia.org/wiki/Nagle's_algorithm

Eh, I assume that algorithm is disabled on localhost.
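Contrary to the assumption above, Nagle is a per-socket setting, not
something the kernel disables for loopback: it is on by default and an
application has to turn it off explicitly with TCP_NODELAY. A minimal
sketch (Python used for illustration; behavior described is the Linux
default):

```python
import socket

# Nagle's algorithm is enabled by default on every TCP socket, loopback
# included; TCP_NODELAY disables it per socket.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())

before = cli.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)  # 0: Nagle on
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
after = cli.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)   # nonzero: Nagle off
print(before, after)

cli.close()
srv.close()
```

Latency-sensitive request/response applications on localhost typically
set TCP_NODELAY for exactly the reason Arnaldo hints at: without it,
small writes can be delayed waiting for an ACK of prior data.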
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-13 23:04 UTC
From: Chris Friesen
To: Olaf van der Spek; +Cc: Arnaldo Carvalho de Melo, Linux Kernel Mailing List

Olaf van der Spek wrote:
> On Thu, Nov 13, 2008 at 12:24 PM, Arnaldo Carvalho de Melo wrote:
>> How much? Please measure.

Lmbench shows local TCP as noticeably slower than Unix sockets on a
Mac G5 running 2.6.27.

*Local* Communication latencies in microseconds - smaller is better
---------------------------------------------------------------
Host      OS      2p/0K  Pipe  AF    UDP   RPC/  TCP   RPC/  TCP
                  ctxsw        UNIX        UDP         TCP   conn
--------- ------- -----  ----- ----- ----- ----- ----- ----- ----
localhost 2.6.27  2.270  10.5  12.6  19.9  31.5  22.7  35.5  68.

*Local* Communication bandwidths in MB/s - bigger is better
-----------------------------------------------------------------------
Host      OS      Pipe  AF    TCP   File   Mmap   Bcopy  Bcopy  Mem   Mem
                        UNIX        reread reread (libc) (hand) read  write
--------- ------- ----- ----- ----- ------ ------ ------ ------ ----- -----
localhost 2.6.27  1368  1564  334.  1111.8 2068.8 930.3  947.0  2072  1269.

Chris
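The lmbench latency numbers above come from a one-byte ping-pong between
two processes. The same kind of measurement can be sketched in a few
lines; absolute numbers will differ wildly by machine, kernel, and the
interpreter overhead of this single-process sketch, so treat it only as
a way to compare the two transports against each other:

```python
import socket
import time

def pingpong_latency(a, b, rounds=2000):
    """Bounce one byte back and forth between two connected sockets;
    return the average microseconds per round trip."""
    t0 = time.perf_counter()
    for _ in range(rounds):
        a.sendall(b"x")
        b.recv(1)
        b.sendall(b"x")
        a.recv(1)
    return (time.perf_counter() - t0) / rounds * 1e6

# AF_UNIX: a connected pair, no network stack involved.
u1, u2 = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# TCP over loopback, with Nagle disabled so it doesn't skew the RTT.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
t1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
t1.connect(srv.getsockname())
t2, _ = srv.accept()
for s in (t1, t2):
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

unix_us = pingpong_latency(u1, u2)
tcp_us = pingpong_latency(t1, t2)
print(f"AF_UNIX: {unix_us:.1f} us/RTT, TCP loopback: {tcp_us:.1f} us/RTT")

for s in (u1, u2, t1, t2, srv):
    s.close()
```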
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 0:19 UTC
From: J.R. Mauro
To: Olaf van der Spek; +Cc: Linux Kernel Mailing List

On Wed, Nov 12, 2008 at 6:20 PM, Olaf van der Spek
<olafvdspek@gmail.com> wrote:
> Quite often in discussions, I see people claiming Unix sockets are
> faster than TCP sockets on a connection that stays inside localhost.

Unix domain sockets should be faster because they're not subject to
windowing, ACKs, flow control, encapsulation, etc.
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 0:22 UTC
From: David Miller
To: jrm8005; +Cc: olafvdspek, linux-kernel

From: "J.R. Mauro" <jrm8005@gmail.com>
Date: Thu, 13 Nov 2008 19:19:36 -0500

> Unix domain sockets should be faster because they're not subject to
> windowing, ACKs, flow control, encapsulation, etc.

And the wakeup rate is higher for TCP, and our process scheduler has
been getting gradually slower over time since 2.6.22.
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 0:27 UTC
From: J.R. Mauro
To: David Miller; +Cc: olafvdspek, linux-kernel

On Thu, Nov 13, 2008 at 7:22 PM, David Miller <davem@davemloft.net> wrote:
> And the wakeup rate is higher for TCP, and our process scheduler has
> been getting gradually slower over time since 2.6.22.

I'm not sure about this, but I now also remember reading somewhere
that TCP has an extra context switch. I don't know if what I read was
about Linux or a totally different OS, though.
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 8:51 UTC
From: Olaf van der Spek
To: J.R. Mauro; +Cc: Linux Kernel Mailing List

On Fri, Nov 14, 2008 at 1:19 AM, J.R. Mauro <jrm8005@gmail.com> wrote:
> Unix domain sockets should be faster because they're not subject to
> windowing, ACKs, flow control, encapsulation, etc.

Why would you use windowing, ACKs, flow control and encapsulation on
localhost?

I expected the kernel to copy data directly from user-space of the
sending process to a kernel buffer of the receiving process, much like
UNIX sockets.
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 8:54 UTC
From: Eric Dumazet
To: Olaf van der Spek; +Cc: J.R. Mauro, Linux Kernel Mailing List

Olaf van der Spek wrote:
> I expected the kernel to copy data directly from user-space of the
> sending process to a kernel buffer of the receiving process, much like
> UNIX sockets.

localhost uses a standard network device, and the whole network stack
is used, with no special kludges. You can add iptables rules, you can
do traffic shaping and traffic sniffing (tcpdump), and you can limit
the memory used by all sockets (controlling memory pressure on the
machine).

Doing what you suggest would slow down the AF_INET stack.

You probably can expect AF_UNIX to be faster, since this one is really
special and uses shortcuts.

Then, you probably can use shared memory instead of AF_UNIX, or
pipes (and splice()), or ...

Then you probably can use threads and do zero-copy ;)
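The alternatives Eric lists all skip the network stack entirely. The
simplest one, a pipe, is just a kernel buffer shared between two file
descriptors; a minimal sketch (splice() itself is a Linux-only syscall
for moving such buffers without copies, not shown here):

```python
import os

# A pipe: data travels through a kernel buffer with no network stack,
# no checksums, no windowing -- the same kind of shortcut AF_UNIX takes.
r, w = os.pipe()
os.write(w, b"hello over a pipe")
data = os.read(r, 64)
print(data)  # b'hello over a pipe'
os.close(r)
os.close(w)
```

Shared memory goes one step further and removes even the kernel buffer
from the data path, at the cost of the two processes having to do their
own synchronization.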
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 9:06 UTC
From: Olaf van der Spek
To: Eric Dumazet; +Cc: J.R. Mauro, Linux Kernel Mailing List

On Fri, Nov 14, 2008 at 9:54 AM, Eric Dumazet <dada1@cosmosbay.com> wrote:
> Doing what you suggest would slow down the AF_INET stack.

Why?

> Then you probably can use threads and do zero-copy ;)

Hmm, I'd like to avoid running my web server inside of my database
server process. ;)
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 13:14 UTC
From: J.R. Mauro
To: Olaf van der Spek; +Cc: Eric Dumazet, Linux Kernel Mailing List

On Fri, Nov 14, 2008 at 4:06 AM, Olaf van der Spek
<olafvdspek@gmail.com> wrote:
>> Doing what you suggest would slow down the AF_INET stack.
>
> Why?

Because then the AF_INET stack would have to check *every* time
something went through it to see if it's bound for localhost. You'd be
adding more complexity to the stack just to speed up one case, while
slowing down every single other case.
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 8:56 UTC
From: David Miller
To: olafvdspek; +Cc: jrm8005, linux-kernel

From: "Olaf van der Spek" <olafvdspek@gmail.com>
Date: Fri, 14 Nov 2008 09:51:44 +0100

> Why would you use windowing, ACKs, flow control and encapsulation on
> localhost?

So that you could firewall, shape, redirect, and make other
modifications to the traffic, as well as see it in tcpdumps. That's
the power of Linux, and yes people do this stuff and yes people do
want these features to work over loopback.

> I expected the kernel to copy data directly from user-space of the
> sending process to a kernel buffer of the receiving process, much like
> UNIX sockets.

Then all of the above features and debugging facilities go away.
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 9:09 UTC
From: Olaf van der Spek
To: David Miller; +Cc: jrm8005, linux-kernel

On Fri, Nov 14, 2008 at 9:56 AM, David Miller <davem@davemloft.net> wrote:
> So that you could firewall, shape, redirect, and make other
> modifications to the traffic, as well as see it in tcpdumps. That's
> the power of Linux, and yes people do this stuff and yes people do
> want these features to work over loopback.

So instead the recommendation is for all apps to support both TCP and
Unix sockets? If you then use Unix sockets, you still lose all of
those facilities, and as a bonus your apps are more complex.

I'd prefer a switch that could be enabled to use such a shortcut for
TCP. Firewalls would still work mostly (on connect), and redirect
would still work.
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 10:37 UTC
From: Bernd Petrovitsch
To: Olaf van der Spek; +Cc: David Miller, jrm8005, linux-kernel

On Fri, 2008-11-14 at 10:09 +0100, Olaf van der Spek wrote:
> On Fri, Nov 14, 2008 at 9:56 AM, David Miller <davem@davemloft.net> wrote:
>> So that you could firewall, shape, redirect, and make other
>> modifications to the traffic, as well as see it in tcpdumps. That's
>> the power of Linux, and yes people do this stuff and yes people do
>> want these features to work over loopback.

ACK. Some people even patch their distribution's init.d scripts so
that you have aliases (127.0.0.2 and so on) on the loopback interface.
And I'm not even in the simulation world, where this probably comes in
really handy. So as long as "lo" behaves as a normal network interface
(with iptables and whatever is available for eth, br, ...) with an IP
address, any tuning can be done.

> So instead the recommendation is for all apps to support both TCP and
> Unix sockets?

Of course. That is simple (actually trivial, IMO) and standard practice.

> If you then use Unix sockets, you still lose all of those facilities
> and as a bonus, your apps are more complex.

If your (network) app has config abilities for the IP address, the
complexity to either add two new fields (select AF_INET or AF_UNIX,
plus a path for AF_UNIX) or simply parse the "IP address" and use
AF_UNIX if it's a path is negligible.

So please *show* the complexity in adding
- very few config parameters,
- an if() somewhere, and
- the proper initialization of a "struct sockaddr_un".

Actually I see (and use) AF_UNIX as "I have a network app and it's
simply faster for local clients than AF_INET".

And the IMNSHO added complexity - which you fail to mention - in the
whole system (including the user) is to deal with "bug reports" like
"if I change the IP address from 127.0.0.1 to $SOMETHING_ELSE, it
behaves quite differently" or "if I use eth0 it works, if I use 'lo'
it doesn't" *and* vice versa (because the user usually doesn't know
what's really happening down below, just above the TCP/IP stack), and
similar "weird" problems. Or do you like apps where the documentation
says (and the app checks it if you are lucky - or some feature just
doesn't work) that it can't be used on "lo" because $FEATURE is not
available there? You are plainly violating the rule of avoiding the
(most) unexpected.

> I'd prefer a switch that could be enabled to use such a shortcut for TCP.
> Firewalls should still work mostly (on connect), redirect would still work.

Show numbers and propose a (tested) patch.

	Bernd
--
Firmix Software GmbH      http://www.firmix.at/
mobil: +43 664 4416156    fax: +43 1 7890849-55
Embedded Linux Development and Services
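Bernd's "an if() somewhere" amounts to the following; shown in Python
rather than C for brevity, and the leading-slash convention for telling
a socket path apart from host:port is an illustrative assumption, not
something prescribed by the thread:

```python
import socket

def make_client_socket(address):
    """One branch decides the address family. Convention (assumed here):
    a leading '/' is a filesystem path, hence AF_UNIX; anything else is
    treated as host:port, hence AF_INET."""
    if address.startswith("/"):
        # The Python equivalent of filling in a struct sockaddr_un.
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        target = address
    else:
        host, _, port = address.rpartition(":")
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        target = (host, int(port))
    return sock, target

tcp_sock, tcp_target = make_client_socket("127.0.0.1:5432")
unix_sock, unix_target = make_client_socket("/var/run/example.sock")
print(tcp_sock.family, tcp_target, unix_sock.family, unix_target)
```

The caller then does `sock.connect(target)` either way; everything past
the connect is identical for both families.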
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 13:17 UTC
From: J.R. Mauro
To: Olaf van der Spek; +Cc: David Miller, linux-kernel

On Fri, Nov 14, 2008 at 4:09 AM, Olaf van der Spek
<olafvdspek@gmail.com> wrote:
> So instead the recommendation is for all apps to support both TCP and
> Unix sockets?
> If you then use Unix sockets, you still lose all of those facilities
> and as a bonus, your apps are more complex.

Your application will not be much more complex. That's the point of
having generic sockets: you can wrap whatever protocol you want
underneath them. This isn't much harder than being able to read from a
file, or switching around to read stdin if the filename the user gives
is "-".

> I'd prefer a switch that could be enabled to use such a shortcut for TCP.
> Firewalls should still work mostly (on connect), redirect would still work.

I think this is unnecessary, but if you can make a patch and prove
that it speeds up local connections while not slowing down everything
else, have at it.
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 21:07 UTC
From: Willy Tarreau
To: Olaf van der Spek; +Cc: David Miller, jrm8005, linux-kernel

On Fri, Nov 14, 2008 at 10:09:25AM +0100, Olaf van der Spek wrote:
> So instead the recommendation is for all apps to support both TCP and
> Unix sockets?
> If you then use Unix sockets, you still lose all of those facilities
> and as a bonus, your apps are more complex.
>
> I'd prefer a switch that could be enabled to use such a shortcut for TCP.
> Firewalls should still work mostly (on connect), redirect would still work.

I'm already wondering what problems you encounter with TCP performance
on the loopback. I'm used to stress-testing network proxies on the
loopback for quick tests when I don't want to boot 3 machines, and
seeing that it's easy to connect/accept 100k sessions/s and forward
about 20-30 Gbps between two processes on consumer-grade machines, I
really doubt that your application needs that much out of your
database.

If you're really so sensitive to local traffic tuning, you can already
set a very large MTU on your loopback, and you can have very large
windows between your applications so that very few ACKs are sent, etc.
And BTW, checksums are already not even computed. Loopback *is* fast;
there's no need to crapify the whole stack with your "switch" to gain
5% more out of it.

Anyway, if you can come up with patches which prove all of us wrong
without weakening the code, I'm sure they could be accepted.

Willy
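On the loopback MTU point: Linux already defaults lo to a large MTU
(65536 on modern kernels), which is one reason bulk transfers over
loopback are fast without any tuning. A Linux-only sketch of checking
it (raising it further would be something like `ip link set dev lo mtu
<bytes>`, run as root; treat the exact default as kernel-dependent):

```python
# Linux exposes per-interface settings under /sys/class/net/<iface>/.
# This is Linux-specific; other systems use different mechanisms.
with open("/sys/class/net/lo/mtu") as f:
    mtu = int(f.read())
print("loopback MTU:", mtu)  # typically 65536 on recent kernels
```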
* Re: Unix sockets via TCP on localhost: is TCP slower?
@ 2008-11-14 22:40 UTC
From: Olaf van der Spek
To: Willy Tarreau; +Cc: David Miller, jrm8005, linux-kernel

On Fri, Nov 14, 2008 at 10:07 PM, Willy Tarreau <w@1wt.eu> wrote:
> I'm already wondering what problems you encounter with TCP performance
> on the loopback.

None. It's just a theoretical question.

> seeing that it's easy to connect/accept 100k sessions/s and forward
> about 20-30 Gbps between two processes on consumer-grade machines, I
> really doubt that your application needs that much out of your
> database.

Hmm, those numbers look a lot better than the ones Chris Friesen
posted. He posted 334 MB/s for TCP and 1564 MB/s for Unix sockets.
That's a 4.7x difference.

> Loopback *is* fast; there's no need to crapify the whole stack with
> your "switch" to gain 5% more out of it.

That was my initial question. If the performance difference is
insignificant, that's fine with me.

> Anyway, if you can come up with patches which prove all of us wrong
> without weakening the code, I'm sure they could be accepted.

I'm sure too, but I won't.
End of thread (newest: 2008-11-14 22:40 UTC). Thread overview: 17+ messages

2008-11-12 23:20 Unix sockets via TCP on localhost: is TCP slower? Olaf van der Spek
2008-11-13 11:24 ` Arnaldo Carvalho de Melo
2008-11-13 19:06   ` Olaf van der Spek
2008-11-13 23:04     ` Chris Friesen
2008-11-14  0:19 ` J.R. Mauro
2008-11-14  0:22   ` David Miller
2008-11-14  0:27     ` J.R. Mauro
2008-11-14  8:51   ` Olaf van der Spek
2008-11-14  8:54     ` Eric Dumazet
2008-11-14  9:06       ` Olaf van der Spek
2008-11-14 13:14         ` J.R. Mauro
2008-11-14  8:56     ` David Miller
2008-11-14  9:09       ` Olaf van der Spek
2008-11-14 10:37         ` Bernd Petrovitsch
2008-11-14 13:17         ` J.R. Mauro
2008-11-14 21:07         ` Willy Tarreau
2008-11-14 22:40           ` Olaf van der Spek