* Fix potential hang in https handshake (v3)
From: szager @ 2012-10-19 21:04 UTC
To: git; +Cc: peff, gitster, sop, daniel
From 32e06128dbc97ceb0d060c88ec8db204fa51be5c Mon Sep 17 00:00:00 2001
From: Stefan Zager <szager@google.com>
Date: Thu, 18 Oct 2012 16:23:53 -0700
Subject: [PATCH] Fix potential hang in https handshake.
It has been observed that curl_multi_timeout may return a very long
timeout value (e.g., 294 seconds and some usec) just before
curl_multi_fdset returns no file descriptors for reading. The
upshot is that select() will hang for a long time -- long enough for
an https handshake to be dropped. The observed behavior is that
the git command will hang at the terminal and never transfer any
data.
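For illustration, a minimal sketch of the failure sequence, using the
names from the diff context below (the timeout value is the one
observed; the exact timeout-to-timeval conversion in http.c may
differ):

    long curl_timeout;
    struct timeval select_timeout;
    fd_set readfds, writefds, excfds;
    int max_fd = -1;

    curl_multi_timeout(curlm, &curl_timeout);     /* e.g. ~294000 ms */
    select_timeout.tv_sec = curl_timeout / 1000;
    select_timeout.tv_usec = (curl_timeout % 1000) * 1000;

    FD_ZERO(&readfds);
    FD_ZERO(&writefds);
    FD_ZERO(&excfds);
    curl_multi_fdset(curlm, &readfds, &writefds, &excfds, &max_fd);

    /* max_fd comes back as -1: there is nothing to watch, so
     * select() simply sleeps for the full timeout -- long enough
     * for the https handshake to be dropped. */
    select(max_fd + 1, &readfds, &writefds, &excfds, &select_timeout);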
This patch is a workaround for a probable bug in libcurl. The bug
only seems to manifest around a very specific set of circumstances:
- curl version (from curl/curlver.h):
#define LIBCURL_VERSION_NUM 0x071307
- git-remote-https running on an ubuntu-lucid VM.
- Connecting through squid proxy running on another VM.
Interestingly, the problem doesn't manifest if a host connects
through squid proxy running on localhost; only if the proxy is on
a separate VM (not sure if the squid host needs to be on a separate
physical machine). That would seem to suggest that this issue
is timing-sensitive.
This patch is more or less in line with a recommendation in the
curl docs about how to behave when curl_multi_fdset doesn't return
any file descriptors:
http://curl.haxx.se/libcurl/c/curl_multi_fdset.html
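For reference, a minimal sketch of that recommendation (the 100ms
figure is the docs' suggestion; the patch below instead clamps the
select timeout to 50ms):

    if (max_fd == -1) {
            /* libcurl has nothing we can monitor with select();
             * wait briefly, then drive the transfer again. */
            struct timeval wait = { 0, 100 * 1000 };    /* 100ms */
            select(0, NULL, NULL, NULL, &wait);
    } else {
            select(max_fd + 1, &readfds, &writefds, &excfds,
                   &select_timeout);
    }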
Signed-off-by: Stefan Zager <szager@google.com>
---
http.c | 11 +++++++++++
1 files changed, 11 insertions(+), 0 deletions(-)
diff --git a/http.c b/http.c
index df9bb71..51eef02 100644
--- a/http.c
+++ b/http.c
@@ -631,6 +631,17 @@ void run_active_slot(struct active_request_slot *slot)
FD_ZERO(&excfds);
curl_multi_fdset(curlm, &readfds, &writefds, &excfds, &max_fd);
+ /* It can happen that curl_multi_timeout returns a pathologically
+ * long timeout when curl_multi_fdset returns no file descriptors
+ * to read. See commit message for more details.
+ */
+ if (max_fd < 0 &&
+ select_timeout.tv_sec > 0 ||
+ select_timeout.tv_usec > 50000) {
+ select_timeout.tv_sec = 0;
+ select_timeout.tv_usec = 50000;
+ }
+
select(max_fd+1, &readfds, &writefds, &excfds, &select_timeout);
}
}
--
1.7.7.3
* Re: Fix potential hang in https handshake (v3)
From: Jeff King @ 2012-10-19 21:08 UTC
To: szager; +Cc: git, gitster, sop, daniel
On Fri, Oct 19, 2012 at 02:04:20PM -0700, szager@google.com wrote:
> From 32e06128dbc97ceb0d060c88ec8db204fa51be5c Mon Sep 17 00:00:00 2001
> From: Stefan Zager <szager@google.com>
> Date: Thu, 18 Oct 2012 16:23:53 -0700
Drop these lines.
> Subject: [PATCH] Fix potential hang in https handshake.
And make this one your actual email subject.
> It has been observed that curl_multi_timeout may return a very long
> timeout value (e.g., 294 seconds and some usec) just before
> curl_multi_fdset returns no file descriptors for reading. The
> upshot is that select() will hang for a long time -- long enough for
> an https handshake to be dropped. The observed behavior is that
> the git command will hang at the terminal and never transfer any
> data.
>
> This patch is a workaround for a probable bug in libcurl. The bug
> only seems to manifest around a very specific set of circumstances:
>
> - curl version (from curl/curlver.h):
>
> #define LIBCURL_VERSION_NUM 0x071307
>
> - git-remote-https running on an ubuntu-lucid VM.
> - Connecting through squid proxy running on another VM.
>
> Interestingly, the problem doesn't manifest if a host connects
> through squid proxy running on localhost; only if the proxy is on
> a separate VM (not sure if the squid host needs to be on a separate
> physical machine). That would seem to suggest that this issue
> is timing-sensitive.
Thanks, that explanation makes much more sense.
> diff --git a/http.c b/http.c
> index df9bb71..51eef02 100644
> --- a/http.c
> +++ b/http.c
> @@ -631,6 +631,17 @@ void run_active_slot(struct active_request_slot *slot)
> FD_ZERO(&excfds);
> curl_multi_fdset(curlm, &readfds, &writefds, &excfds, &max_fd);
>
> + /* It can happen that curl_multi_timeout returns a pathologically
> + * long timeout when curl_multi_fdset returns no file descriptors
> + * to read. See commit message for more details.
> + */
Minor nit, but our multi-line comment style is:
/*
* blah blah blah
*/
> + if (max_fd < 0 &&
> + select_timeout.tv_sec > 0 ||
> + select_timeout.tv_usec > 50000) {
> + select_timeout.tv_sec = 0;
> + select_timeout.tv_usec = 50000;
> + }
Should there be parentheses separating the || bit from the &&?
-Peff
* Re: Fix potential hang in https handshake (v3)
From: Junio C Hamano @ 2012-10-19 21:21 UTC
To: Jeff King; +Cc: szager, git, sop, daniel
Jeff King <peff@peff.net> writes:
>> + if (max_fd < 0 &&
>> + select_timeout.tv_sec > 0 ||
>> + select_timeout.tv_usec > 50000) {
>> + select_timeout.tv_sec = 0;
>> + select_timeout.tv_usec = 50000;
>> + }
>
> Should there be parentheses separating the || bit from the &&?
Yeah, I think there should be. Thanks for the sharp eyes.
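In C, && binds tighter than ||, so the unparenthesized condition
parses as (max_fd < 0 && select_timeout.tv_sec > 0) ||
select_timeout.tv_usec > 50000, which clamps the timeout whenever
tv_usec exceeds 50000 even when there are descriptors to watch.
Presumably the intended condition is:

    if (max_fd < 0 &&
        (select_timeout.tv_sec > 0 ||
         select_timeout.tv_usec > 50000)) {
            select_timeout.tv_sec = 0;
            select_timeout.tv_usec = 50000;
    }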