* Segfaults with USE_CURL_MULTI
From: Florian Weimer @ 2006-05-20 18:47 UTC
To: git
Is anybody else seeing segfaults on dumb HTTP pull with
USE_CURL_MULTI? For example, this crashes for me:
$ git clone http://git.enyo.de/fw/debian/debfoster.git upstream
GDB shows that this happens inside the call to curl_multi_perform.
* [PATCH] Remove possible segfault in http-fetch.
From: Sean @ 2006-05-20 22:46 UTC
To: Florian Weimer; +Cc: git
Free the curl string lists after running http_cleanup to
avoid an occasional segfault in the curl library. The segfault
seems to occur only when the web server returns a 405 error.
Signed-off-by: Sean Estabrooks <seanlkml@sympatico.ca>
---
On Sat, 20 May 2006 20:47:54 +0200
Florian Weimer <fw@deneb.enyo.de> wrote:
> Is anybody else seeing segfaults on dumb HTTP pull with
> USE_CURL_MULTI? For example, this crashes for me:
>
> $ git clone http://git.enyo.de/fw/debian/debfoster.git upstream
>
> GDB shows that this happens inside the call to curl_multi_perform.
>
Florian, could you please test this patch?
It comes with a big disclaimer because I don't really know the
code in here all that well. However, gdb reports that the segfault
happens in a strncasecmp call, and since we free a bunch of
strings just before that call, a use-after-free seemed the likely
culprit.
Testing seems to confirm that the segfault is removed by this patch.
Why the website returns a 405 error in the first place is still
a mystery to me.
Sean
http-fetch.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/http-fetch.c b/http-fetch.c
index 861644b..178f1ee 100644
--- a/http-fetch.c
+++ b/http-fetch.c
@@ -1269,10 +1269,10 @@ int main(int argc, char **argv)
if (pull(commit_id))
rc = 1;
- curl_slist_free_all(no_pragma_header);
-
http_cleanup();
+ curl_slist_free_all(no_pragma_header);
+
if (corrupt_object_found) {
fprintf(stderr,
"Some loose object were found to be corrupt, but they might be just\n"
--
1.3.GIT
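For readers not steeped in the curl API, here is a minimal
standalone sketch of the lifetime rule the patch restores. The URL
and header value are illustrative, not taken from http-fetch.c, and
error checking is omitted for brevity:

    #include <curl/curl.h>

    int main(void)
    {
        CURL *curl;
        struct curl_slist *headers = NULL;

        curl_global_init(CURL_GLOBAL_ALL);
        curl = curl_easy_init();

        /* Build a header list and hand it to the easy handle. */
        headers = curl_slist_append(headers, "Pragma: no-cache");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_perform(curl);

        /* The handle may still refer to the list, so tear the
         * handle down first and free the list afterwards -- the
         * ordering the patch above establishes. */
        curl_easy_cleanup(curl);
        curl_slist_free_all(headers);

        curl_global_cleanup();
        return 0;
    }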
* Re: [PATCH] Remove possible segfault in http-fetch.
From: Florian Weimer @ 2006-05-20 23:00 UTC
To: Sean; +Cc: git
* Sean:
> Testing seems to confirm that the segfault is removed by this patch.
It seems to fix it for me, too.
> As to why the website returns a 405 error in the first place is still
> a mystery to me.
The web server does not support PROPFIND.
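For context, git's DAV probe and the server's refusal would look
roughly like this illustrative exchange (not an actual capture; the
request path is a guess):

    PROPFIND /fw/debian/debfoster.git/objects/ HTTP/1.1
    Host: git.enyo.de
    Depth: 1

    HTTP/1.1 405 Method Not Allowed
    Allow: GET, HEAD, POST, OPTIONS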
* Re: [PATCH] Remove possible segfault in http-fetch.
From: Junio C Hamano @ 2006-05-21 7:49 UTC
To: Sean; +Cc: git, Nick Hengeveld
Sean <seanlkml@sympatico.ca> writes:
> Free the curl string lists after running http_cleanup to
> avoid an occasional segfault in the curl library. The segfault
> seems to occur only when the web server returns a 405 error.
>...
> It comes with a big disclaimer because I don't really know the
> code in here all that well. However, gdb reports that the segfault
> happens in a strncasecmp call, and since we free a bunch of
> strings just before that call, a use-after-free seemed the likely
> culprit.
>
> http-fetch.c | 4 ++--
> 1 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/http-fetch.c b/http-fetch.c
> index 861644b..178f1ee 100644
> --- a/http-fetch.c
> +++ b/http-fetch.c
> @@ -1269,10 +1269,10 @@ int main(int argc, char **argv)
> if (pull(commit_id))
> rc = 1;
>
> - curl_slist_free_all(no_pragma_header);
> -
> http_cleanup();
>
> + curl_slist_free_all(no_pragma_header);
> +
> if (corrupt_object_found) {
> fprintf(stderr,
> "Some loose object were found to be corrupt, but they might be just\n"
The documentation for curl_easy_cleanup(), which is called from
http_cleanup(), says it is safe to free the strings only _after_
you call that function, so I think the change makes sense -- it
was apparently unsafe to free them before calling cleanup.
Knowing nothing about quirks in the curl libraries, one thing that
is a mystery to me is that we slist_append() to two other lists
(pragma_header and range_header) but never seem to free them.
Another slist, dav_headers, is allocated and then freed inside a
single function, so that call pattern seems well-formed.
Nick, care to help us out?
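To make the distinction concrete, here is a minimal sketch of the
self-contained pattern described above. The function body and header
values are illustrative, not the actual http-fetch.c code:

    #include <curl/curl.h>

    /* Illustrative only: the slist is allocated, used, and freed
     * within one function, so nothing outlives the request. */
    static int do_dav_request(CURL *curl, const char *url)
    {
        struct curl_slist *dav_headers = NULL;
        CURLcode res;

        dav_headers = curl_slist_append(dav_headers, "Depth: 1");
        dav_headers = curl_slist_append(dav_headers,
                                        "Content-Type: text/xml");

        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, dav_headers);

        res = curl_easy_perform(curl);

        /* Detach the list before freeing it; the handle is reused
         * for later requests and must not keep a dangling pointer
         * to freed headers. */
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, NULL);
        curl_slist_free_all(dav_headers);

        return (res == CURLE_OK) ? 0 : -1;
    }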
* Re: Segfaults with USE_CURL_MULTI
From: Pavel Roskin @ 2006-05-22 15:56 UTC
To: Florian Weimer; +Cc: git
On Sat, 2006-05-20 at 20:47 +0200, Florian Weimer wrote:
> Is anybody else seeing segfaults on dumb HTTP pull with
> USE_CURL_MULTI?
_Everybody_ is seeing them! Just look for "segfaults" in the archives:
http://marc.theaimsgroup.com/?l=git&w=2&r=1&s=segfaults&q=b
This patch looks promising, but I have yet to test it:
http://marc.theaimsgroup.com/?l=git&m=114816558325617&w=2
> For example, this crashes for me:
>
> $ git clone http://git.enyo.de/fw/debian/debfoster.git upstream
>
> GDB shows that this happens inside the call to curl_multi_perform.
Same for me:
http://marc.theaimsgroup.com/?l=git&m=114790994726570&w=2
--
Regards,
Pavel Roskin
* Re: [PATCH] Remove possible segfault in http-fetch.
From: Nick Hengeveld @ 2006-05-31 16:07 UTC
To: Junio C Hamano; +Cc: Sean, git
On Sun, May 21, 2006 at 12:49:19AM -0700, Junio C Hamano wrote:
> The documentation for curl_easy_cleanup(), which is called from
> http_cleanup(), says it is safe to free the strings only _after_
> you call that function, so I think the change makes sense -- it
> was apparently unsafe to free them before calling cleanup.
>
> Knowing nothing about quirks in the curl libraries, one thing that
> is a mystery to me is that we slist_append() to two other lists
> (pragma_header and range_header) but never seem to free them.
> Another slist, dav_headers, is allocated and then freed inside a
> single function, so that call pattern seems well-formed.
>
> Nick, care to help us out?
I just got back from a trip to the Midwest and am still getting
caught up. I was only gone for 10 days; you've all been quite busy...
You're correct wrt the other slists; I'll get to work on a patch
for that once I've caught up.
I'm also doing additional testing to see whether this fixes the
DAV/405 segfault, as I think there may be something else going on
there.
--
For a successful technology, reality must take precedence over public
relations, for nature cannot be fooled.