From mboxrd@z Thu Jan 1 00:00:00 1970
From: <rsbecker@nexbridge.com>
To: "'ellie'", <git@vger.kernel.org>
References: <0be201dab933$17c02530$47406f90$@nexbridge.com>
 <0beb01dab93b$c01dfa10$4059ee30$@nexbridge.com>
 <200c3bd2-6aa9-4bb2-8eda-881bb62cd064@horse64.org>
Subject: RE: With big repos and slower connections, git clone can be hard to work with
Date: Sun, 7 Jul 2024 21:27:52 -0400
Message-ID: <0a7401dad0d6$10d27e20$32777a60$@nexbridge.com>

On Sunday, July 7, 2024 7:42 PM, ellie wrote:
>I have now encountered a repository where even --deepen=1 is bound to
>fail, because it pulls in something fairly large that takes a few
>minutes. (Possibly the server proxy has a faulty timeout setting that
>punishes slow connections, but for connections that are unreliable on
>the client side the problem would be the same.)
>
>So this workaround sadly doesn't seem to cover all cases of resume.
>
>Regards,
>
>Ellie
>
>On 6/8/24 2:46 AM, ellie wrote:
>> The deepening worked perfectly, thank you so much! I hope a resume
>> will still be considered, however, if even just to help out newcomers.
>>
>> Regards,
>>
>> Ellie
>>
>> On 6/8/24 2:35 AM, rsbecker@nexbridge.com wrote:
>>> On Friday, June 7, 2024 8:03 PM, ellie wrote:
>>>> Subject: Re: With big repos and slower connections, git clone can
>>>> be hard to work with
>>>>
>>>> Thanks, this is very helpful as an emergency workaround!
>>>>
>>>> Nevertheless, I usually want the entire history, especially since I
>>>> wouldn't mind waiting half an hour. But without resume, I've
>>>> regularly encountered cases where it just won't complete even if I
>>>> give it the time, while much longer downloads in the browser would.
>>>> The key problem here seems to be the lack of any resume.
>>>>
>>>> I hope this helps to understand why I made the suggestion.
>>>>
>>>> Regards,
>>>>
>>>> Ellie
>>>>
>>>> On 6/8/24 1:33 AM, rsbecker@nexbridge.com wrote:
>>>>> On Friday, June 7, 2024 7:28 PM, ellie wrote:
>>>>>> I'm terribly sorry if this is the wrong place, but I'd like to
>>>>>> report a potential issue with "git clone".
>>>>>>
>>>>>> The problem is that any sort of interruption or connection issue,
>>>>>> no matter how brief, causes the clone to stop and leave nothing
>>>>>> behind:
>>>>>>
>>>>>> $ git clone https://github.com/Nheko-Reborn/nheko
>>>>>> Cloning into 'nheko'...
>>>>>> remote: Enumerating objects: 43991, done.
>>>>>> remote: Counting objects: 100% (6535/6535), done.
>>>>>> remote: Compressing objects: 100% (1449/1449), done.
>>>>>> error: RPC failed; curl 92 HTTP/2 stream 5 was not closed cleanly:
>>>>>> CANCEL (err 8)
>>>>>> error: 2771 bytes of body are still expected
>>>>>> fetch-pack: unexpected disconnect while reading sideband packet
>>>>>> fatal: early EOF
>>>>>> fatal: fetch-pack: invalid index-pack output
>>>>>> $ cd nheko
>>>>>> bash: cd: nheko: No such file or directory
>>>>>>
>>>>>> In my experience, this can be really impactful with 1. big
>>>>>> repositories and 2. unreliable internet - which I would argue
>>>>>> isn't unheard of! E.g. a developer may work via a mobile
>>>>>> connection on a business trip. The result can even be that a
>>>>>> repository is uncloneable for some users!
>>>>>>
>>>>>> This has left me in the absurd situation where I was able to
>>>>>> download a tarball via HTTPS from the git hoster just fine, even
>>>>>> way larger binary release items, thanks to the browser's HTTPS
>>>>>> resume. And yet a simple git clone of the same project failed
>>>>>> repeatedly.
>>>>>>
>>>>>> My deepest apologies if I missed an option to fix or address this.
>>>>>> But summed up, please consider making git clone recover from
>>>>>> hiccups.
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Ellie
>>>>>>
>>>>>> PS: I've seen git hosters have apparent proxy bugs, like timing
>>>>>> out slower git clone connections from the server side even if the
>>>>>> transfer is ongoing. A git auto-resume would reduce the impact of
>>>>>> that, too.
>>>>>
>>>>> I suggest that you look into two git topics: --depth, which
>>>>> controls how much history is obtained in a clone, and
>>>>> sparse-checkout, which describes the part of the repository you
>>>>> will retrieve. You can prune the contents of the repository so
>>>>> that the clone is faster, if you do not need all of the history or
>>>>> all of the files. This is typically done in complex large
>>>>> repositories, particularly those used for production support as
>>>>> release repositories.
>>>
>>> Consider doing the clone with --depth=1 then using git fetch
>>> --depth=n as the resume. There are other options that effectively
>>> give you a resume, including --deepen=n.
>>>
>>> Build automation, like Jenkins, uses this to speed up the
>>> clone/checkout.

Can you please provide more details on this? It is difficult to
understand your issue without knowing what situation is failing. What
size is the file? Is this a large single pack file? Can you reproduce
this with a script we can try? In the meantime, a rough sketch of the
clone-then-deepen sequence is below.
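
This is only a minimal sketch of the workaround discussed above,
assuming a POSIX shell and a server that permits deepening fetches; the
100-commit increment and the 5-second retry pause are arbitrary
placeholders to tune, and the URL is the one from your transcript:

  # Shallow clone first; --no-single-branch keeps all branch heads so
  # the deepening loop can eventually recover the full history.
  git clone --depth=1 --no-single-branch https://github.com/Nheko-Reborn/nheko
  cd nheko
  # Deepen in small increments. Each fetch is a separate transfer, so a
  # dropped connection only loses the current increment, not the clone.
  while [ "$(git rev-parse --is-shallow-repository)" = "true" ]; do
      git fetch --deepen=100 || sleep 5   # brief pause, then retry
  done

Once the loop finishes, the repository is no longer shallow and behaves
like a full clone.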