From: Marc Branchaud <marcnarc@xiplink.com>
To: Junio C Hamano <gitster@pobox.com>
Cc: Jeff King <peff@peff.net>, Git Mailing List <git@vger.kernel.org>
Subject: Re: Concurrent pushes updating the same ref
Date: Thu, 06 Jan 2011 16:51:53 -0500	[thread overview]
Message-ID: <4D263979.1080403@xiplink.com> (raw)
In-Reply-To: <7v1v4pbz6y.fsf@alter.siamese.dyndns.org>

On 11-01-06 02:37 PM, Junio C Hamano wrote:
> Jeff King <peff@peff.net> writes:
> 
>> On Thu, Jan 06, 2011 at 10:46:38AM -0500, Marc Branchaud wrote:
>>
>>> fatal: Unable to create
>>> '/usr/xiplink/git/public/Main.git/refs/builds/3.3.0-3.lock': File exists.
>>> If no other git process is currently running, this probably means a
>>> git process crashed in this repository earlier. Make sure no other git
>>> process is running and remove the file manually to continue.
>>> fatal: The remote end hung up unexpectedly
>>>
>>> I think the cause is pretty obvious, and in a normal interactive situation
>>> the solution would be to simply try again.  But in a script trying again
>>> isn't so straightforward.
>>>
>>> So I'm wondering if there's any sense or desire to make git a little more
>>> flexible here.  Maybe teach it to wait and try again once or twice when it
>>> sees a lock file.  I presume that normally a ref lock file should disappear
>>> pretty quickly, so there shouldn't be a need to wait very long.
>>
>> Yeah, we probably should try again. The simplest possible (and untested)
>> patch is below. However, a few caveats:
>>
>>   1. This patch unconditionally retries for all lock files. Do all
>>      callers want that?
> 
> I actually have to say that _no_ caller should want this.  If somebody
> earlier crashed, we would want to know about it (and how).  If somebody
> else alive is actively holding a lock, why not make it the responsibility
> of a calling script to decide if it wants to retry itself or perhaps
> decide to do something else?

I'm not sure I follow this.

How would retrying a few times prevent us from finding out about an earlier
crash?  Retrying doesn't override the lock.  A retry won't remove a lock left
behind by a crashed process, so that failure (and the stale lock file) will
still surface, right?

And if an active process holds the lock long enough that the low-level
retries run out, the caller can still decide what to do.  I don't see how the
retries would even affect that decision -- if the caller wants to try again
itself, the low-level code can still retry a few times underneath each of the
caller's attempts.  That seems fine to me.
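
Concretely, the kind of low-level retry I have in mind is roughly the
following.  This is just an illustrative sketch, not the actual lock-file
code in git (or Peff's patch); the function name, the retry count and the
delay are made-up values:

	#include <errno.h>
	#include <fcntl.h>
	#include <unistd.h>

	/*
	 * Hypothetical helper: try to create <ref>.lock a few times
	 * before giving up, instead of failing on the first EEXIST.
	 */
	static int open_lock_file_with_retry(const char *lock_path)
	{
		int attempts = 3;		/* arbitrary */

		for (;;) {
			int fd = open(lock_path,
				      O_WRONLY | O_CREAT | O_EXCL, 0666);
			if (fd >= 0)
				return fd;	/* got the lock */
			if (errno != EEXIST || --attempts <= 0)
				return -1;	/* caller reports the usual error */
			usleep(100 * 1000);	/* wait ~100ms for the other writer */
		}
	}

A script driving the push could still wrap its own retry around the whole
operation; the two layers wouldn't conflict.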

		M.


Thread overview: 8+ messages
2011-01-06 15:46 Concurrent pushes updating the same ref Marc Branchaud
2011-01-06 16:30 ` Jeff King
2011-01-06 16:48   ` Shawn Pearce
2011-01-06 17:28     ` Ilari Liusvaara
2011-01-06 17:12   ` Marc Branchaud
2011-01-10 22:14     ` Marc Branchaud
2011-01-06 19:37   ` Junio C Hamano
2011-01-06 21:51     ` Marc Branchaud [this message]
