From: Michael Haggerty <mhagger@alum.mit.edu>
To: Jeff King <peff@peff.net>
Cc: Junio C Hamano <gitster@pobox.com>,
git discussion list <git@vger.kernel.org>
Subject: Re: Tag peeling peculiarities
Date: Sat, 16 Mar 2013 09:48:42 +0100
Message-ID: <514431EA.2050402@alum.mit.edu>
In-Reply-To: <20130314134032.GA9222@sigill.intra.peff.net>

On 03/14/2013 02:40 PM, Jeff King wrote:
> Hmph. I coincidentally ran across another problem with 435c833 today.
> Try this:
> [...]
>
> But that's somewhat off-topic for this discussion. I'll look into it
> further and try to make a patch later today or tomorrow.
>
> On Thu, Mar 14, 2013 at 12:28:53PM +0100, Michael Haggerty wrote:
>> Your patch looks about right aside from its lack of comments :-). I was
>> going to implement approximately the same thing in a patch series that I
>> am working on, but if you prefer then go ahead and submit your patch and
>> I will rebase my branch on top of it.
>
> I just meant it as a quick sketch, since you said you were working in
> the area. I'm not sure what you are working on. If we don't consider it
> an urgent bugfix, I'm just as happy for you to incorporate the idea into
> what you are doing.
>
> But if we want to do it as a maint-track fix, I can clean up my patch.
> Junio?

My patch series is nearly done. I will need another day or two to
review and make it submission-ready, but I wanted to give you an idea
of what I'm up to, and I could also use your feedback on some points.

The (preliminary) patch series is available on GitHub:

    https://github.com/mhagger/git/commits/pack-refs-unification

Highlights:

* Move pack-refs code into refs.c.

* Implement fully-peeled packed-refs files, as discussed upthread: peel
  references outside of refs/tags, and keep rigorous track of which
  references have REF_KNOWS_PEELED set (there is an example of the
  resulting file right after this list).

* Change how references are iterated over internally (without changing
  the exported API):

  * Provide direct access to the ref_entries.
  * Don't set current_ref if not iterating over all refs.

* Change pack-refs to use the peeled information from ref_entries if it
  is available, rather than looking up the references again.

* repack_without_ref() now writes peeled refs to the packed-refs file,
  using the old peeled values if available and otherwise looking them
  up. (The old version of repack_without_ref() wrote a packed-refs file
  without any peeled values, even if they were present in the old
  packed-refs file.) It also removes the deleted reference from the
  in-RAM packed ref cache in place instead of discarding the whole
  cache.

* Add some tests and lots of comments.
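
To make "fully-peeled" concrete, here is roughly what a rewritten
packed-refs file would look like (the SHA-1s below are made up for
illustration):

    # pack-refs with: peeled fully-peeled
    3d2ae0b1c5f6c4d4b4a9e2f0d8c7b6a5e4f3d2c1 refs/heads/master
    7a6b5c4d3e2f1a0b9c8d7e6f5a4b3c2d1e0f9a8b refs/heads/points-at-tag
    ^3d2ae0b1c5f6c4d4b4a9e2f0d8c7b6a5e4f3d2c1
    7a6b5c4d3e2f1a0b9c8d7e6f5a4b3c2d1e0f9a8b refs/tags/v1.0
    ^3d2ae0b1c5f6c4d4b4a9e2f0d8c7b6a5e4f3d2c1

The new trait in the header line advertises that every reference that
can be peeled is followed by a "^" line giving the peeled value,
whether or not it lives under refs/tags (here refs/heads/points-at-tag
points at the same tag object as refs/tags/v1.0), so a reader can
trust the absence of a "^" line to mean that the reference does not
peel to anything.
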
Here are my questions for you:

1. There are multiple functions for peeling refs: peel_ref() (and
peel_object(), which was extracted from it), and peel_entry() (derived
from pack-refs.c:pack_one_ref()). It would be nice to combine these
into the One True Function, but given the problem that you mentioned
above (which is rooted in parts of the code that I don't know well),
I'm not sure what that unified function should do.
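
Just to illustrate the shape I have in mind, the common core of all of
them reduces to something like the following (a hypothetical sketch
only; the name peel_one() and the 0/-1 return convention are made up,
not taken from the series):

        /*
         * Sketch: peel sha1 to the non-tag object it refers to.
         * Return 0 and fill in "peeled" if sha1 names a tag object
         * that can be peeled; return -1 otherwise.
         */
        static int peel_one(const unsigned char *sha1,
                            unsigned char *peeled)
        {
                struct object *o = parse_object(sha1);

                if (!o || o->type != OBJ_TAG)
                        return -1;
                o = deref_tag(o, NULL, 0);
                if (!o)
                        return -1;
                hashcpy(peeled, o->sha1);
                return 0;
        }

Roughly speaking, the existing functions differ mainly in where their
input comes from (a refname for peel_ref(), a ref_entry for
peel_entry(), a bare object for peel_object()) and in how they report
failure.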

2. repack_without_ref() now writes peeled refs, peeling them if
necessary. It does this *without* referring to the loose refs, to
avoid having to load all of the loose references, which can be
time-consuming. But this means that it might try to look up SHA1s
that are no longer the current value of the corresponding reference
and that might even have been garbage collected. Is the code that I
use to do this, in peel_entry(), safe to call for nonexistent SHA1s?
(I would like it to return 0 if the SHA1 is invalid.)

        o = parse_object(entry->u.value.sha1);
        if (o->type == OBJ_TAG) {
                o = deref_tag(o, entry->name, 0);
                if (o) {
                        hashcpy(entry->u.value.peeled, o->sha1);
                        entry->flag |= REF_KNOWS_PEELED;
                        return 1;
                }
        }
        return 0;

I've tried to test this experimentally and it seems to work, but I
would like to have some theoretical support :-)
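
One more data point for this question: if parse_object() can return
NULL for an object that is missing from the object database (which I
believe is the case, but that is exactly the kind of confirmation I am
looking for), then the snippet above would also need a guard before
looking at o->type, along the lines of:

        o = parse_object(entry->u.value.sha1);
        if (!o)
                return 0; /* missing or unparsable: report "not peeled" */
        if (o->type == OBJ_TAG) {
                ...

so part of what I am asking is whether that guard is necessary.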

3. This same change to repack_without_ref() means that it could become
significantly slower to delete a packed ref if the packed-refs file
is not fully-peeled. On the plus side, once that has happened, the
packed-refs file will have been rewritten in fully-peeled form. But if
two versions of Git are being used in a repository, this cost could
be incurred repeatedly. Does anybody object?

4. Should I change the locking policy as discussed in this thread?:
http://article.gmane.org/gmane.comp.version-control.git/212131

Michael

--
Michael Haggerty
mhagger@alum.mit.edu
http://softwareswirl.blogspot.com/