* Using cvs2git to track an external CVS project
From: Martin Langhoff @ 2005-06-01 12:35 UTC (permalink / raw)
To: Git Mailing List
Following the cvs2git threads, I'm left with a few doubts.
Linus has stated that it can be used incrementally to track a project
that uses CVS -- in which case I assume I would be maintaining two git
repos, one strictly tracking "upstream", pulling changes from CVS on a
crontab, and the 2nd one with my local changes. Or is it meant to work
on the "local" repo as a pull/merge/update?
What'd be the strategy in that case if I am working on patches that I
intend to feed upstream? To what degree will git try to remerge
against the local repo where the patch originates? This kind of
smarts is nice when it works -- but I am interested in exploring
more git-style approaches, if git supports this at all.
In the scenario above, if I push _some_ patches upstream, does git
help me at all in sorting out what is upstream and what is not?
I suspect all this patch-based horsetrading amounts to cherry-picking,
and is therefore not supported. What strategy would work with git to
run local branches with a mix of patches that go upstream and others
that don't (or just may take longer to get there)?
Right now we are using arch, where a long-lived branch tracks
the external cvs repo, and we open short-lived branches where we do a
mix of development -- most of which is merged upstream in several
stages.
cheers,
martin
* Re: Using cvs2git to track an external CVS project
From: Anton Altaparmakov @ 2005-06-01 13:07 UTC (permalink / raw)
To: Martin Langhoff; +Cc: Git Mailing List
On Thu, 2005-06-02 at 00:35 +1200, Martin Langhoff wrote:
> Following the cvs2git threads, I'm left with a few doubts.
>
> Linus has stated that it can be used incrementally to track a project
> that uses CVS -- in which case I assume I would be maintaining two git
> repos, one strictly tracking "upstream", pulling changes from CVS on a
> crontab, and the 2nd one with my local changes. Or is it meant to work
> on the "local" repo as a pull/merge/update?
>
> What'd be the strategy in that case if I am working on patches that I
> intend to feed upstream? To what degree will git try to remerge
> against the local repo where the patch originates? This kind of
> smarts is nice when it works -- but I am interested in exploring
> more git-style approaches, if git supports this at all.
>
> In the scenario above, if I push _some_ patches upstream, does git
> help me at all in sorting out what is upstream and what is not?
>
> I suspect all this patch-based horsetrading amounts to cherry-picking,
> and is therefore not supported. What strategy would work with git to
> run local branches with a mix of patches that go upstream and others
> that don't (or just may take longer to get there)?
Disregarding anything about cvs2git, there is one point you may not be
thinking about but may want to care about: when you send something
upstream to the cvs repository and then get it back via cvs2git, you will
get a completely different commit from the one your local git repository
has. So while the file changes inside those two commits are the same,
the actual commits are not, and you will end up with all those commits
duplicated, plus an automatic merge commit joining the two. If you don't
want that to happen, you would need to do your local changes in throw-away
git trees which you rm -rf after the patch gets applied and you have
picked up your change again via cvs2git. You could of course do your
local work in git branches instead and throw away the branch that got
applied to cvs, keeping only the main trunk in sync, but I personally
prefer separate trees to branches.
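(In branch form, the throw-away approach described above might look like
this -- branch and remote names are made up:)

    git checkout -b local-fix master   # throw-away branch for one change
    # ...hack, commit, mail the patch to the upstream CVS maintainers...
    git checkout master
    git pull tracking master   # the change comes back via the CVS import
    git branch -D local-fix    # discard the branch rather than merging it,
                               # avoiding duplicate commits and a merge commit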
> Right now we are using arch, where a long-lived branch tracks
> the external cvs repo, and we open short-lived branches where we do a
> mix of development -- most of which is merged upstream in several
> stages.
Best regards,
Anton
--
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer / IRC: #ntfs on irc.freenode.net
WWW: http://linux-ntfs.sf.net/ & http://www-stu.christs.cam.ac.uk/~aia21/
* Upstream merging and conflicts (was Re: Using cvs2git to track an external CVS project)
From: Martin Langhoff @ 2005-06-02 20:59 UTC (permalink / raw)
To: Git Mailing List
(reposted with appropriate subject)
On 6/2/05, Anton Altaparmakov <aia21@cam.ac.uk> wrote:
> Disregarding anything about cvs2git, there is one point you may not be
> thinking about but may want to care about: when you send something
> upstream to the cvs repository and then get it back via cvs2git, you will
> get a completely different commit from the one your local git repository
> has.
If upstream hasn't touched the files I'm patching, and cvs2git/cvsps
use -kk, there is some hope that the objects should be the same...
right?
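(Partly: with -kk on both sides the blobs and trees can indeed match,
but a commit also hashes its parents, author and dates, so the commit
ids will still differ. A hypothetical check, in today's commands:)

    git rev-parse local-fix^{tree} tracking/master^{tree}  # may be equal under -kk
    git rev-parse local-fix tracking/master                # these still differ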
> So while the file changes inside those two commits are the same,
> the actual commits are not, and you will end up with all those commits
> duplicated, plus an automatic merge commit joining the two.
So, this is the scenario for a situation where the files I'm patching
have changed upstream, so upstream merges my patch on top of previous
patches, and I get commit objects echoed back that logically contain
my patch but that git sees as different.
Is it any better if upstream is using git as well? Is there any chance
for a private branch or tree to survive anything but a "perfect match"
merge where upstream and the branch are in perfect sync before and
after?
I understand this is architected to _not_ support cherry picking in
the darcs/arch sense, and I think it's a good idea. But it seems that
any non-trivial merge ends up being a completely manual process.
Anyone having to work on a series of patches for Linux that get
accepted in stages is going to find himself forced to a potentially
time-consuming remerge every time Linus does a partial merge. Argh.
So it seems to me that git is well suited for a set of closely related
HEADs that are very aggressive in synching with each other. Synching
work is pushed out to the peripheral branches -- a design decision I
agree with -- but there's very little support to help me keep a
peripheral branch in sync.
The assumption that those peripheral branches must be short-lived and
discardable is valid for a limited set of cases -- very circumscribed
to short-term dev work. As soon as a dev branch has to run a little
longer, it cannot afford to not sync with the HEAD. Particularly, it
cannot skip a _single_ patch coming from HEAD.
And if I'm doing development from my branch and pushing to HEAD, and
the guy doing the merge in HEAD merges my patches in a different order,
I'll have a spurious conflict.
I use peripheral branches to track versions of code in production,
that may have different sets of patches applied. For that purpose,
patch-based SCMs are quite helpful (you can ask what patch is applied
where), but as Linus pointed out, they don't actually help convergence
at all. Git pulls towards convergence like a demon AFAICS -- yet some
primitive patch trading smarts would save a lot of effort at the
borders of the dev network.
cheers,
martin
* Re: Upstream merging and conflicts (was Re: Using cvs2git to track an external CVS project)
From: Linus Torvalds @ 2005-06-08 15:59 UTC (permalink / raw)
To: Martin Langhoff; +Cc: Git Mailing List
On Fri, 3 Jun 2005, Martin Langhoff wrote:
>
> I understand this is architected to _not_ support cherry picking in
> the darcs/arch sense, and I think it's a good idea. But it seems that
> any non-trivial merge ends up being a completely manual process.
Yes, and right now that "manual" part is actually fairly high. I'll fix
that thing up (today, I hope), making nontrivial merges much easier.
> So it seems to me that git is well suited for a set of closely related
> HEADs that are very aggressive in synching with each other. Synching
> work is pushed out to the peripheral branches -- a design decision I
> agree with -- but there's very little support to help me keep a
> peripheral branch in sync.
Well, there's some support for keeping a peripheral branch in sync, but
part of it just ends up being merging into it from the main branch every
once in a while.
It all really boils down to "merge often, merge small". Git encourages
that behaviour. Git concentrates on trivial merges, because the git
philosophy is that if you let two trees diverge too much, your problems
are not with the merge, but with other things.
NOTE! This does _not_ mean that you can't do big changes. You can very
much do a branch that does _huge_ changes, and as long as that branch
keeps merging with the original, it should work out reasonably well. Every
merge "re-bases" the work, so you'll never have to merge old changes. You
may end up with a fair number of manual things to fix up each time (see
my comment above on the current manual fixup problems), but at least the
thing should hopefully be incremental.
But yeah, if you actually do major re-organization, then the system
absolutely needs tons more smarts in the automated parts, since right now
it has no automated merging for renames and the like (things it _could_
detect, but doesn't). There are really several layers of merging you can
automate, and git right now only automates the very, very lowest stuff.
In other words, it should be possible to make git do pretty well even for
complex branches, but no, right now the automated parts don't even try.
And it's still _all_ geared towards development that ends up being merged
reasonably often - you can merge in one direction for a while to keep the
pain incremental, but yes, you do need to merge the other way
_eventually_.
> The assumption that those peripheral branches must be short-lived and
> discardable is valid for a limited set of cases -- very circumscribed
> to short-term dev work. As soon as a dev branch has to run a little
> longer, it cannot afford to not sync with the HEAD. Particularly, it
> cannot skip a _single_ patch coming from HEAD.
It's true that you can't skip patches, since a merge is always
all-or-nothing, and the global history really requires that (in a very
fundamental way). However, you _can_ merge from the HEAD, and then say
"that patch is no longer relevant in this tree", and remove it. Then
you'll never need to worry about that patch any more, because you've
merged it, and in your merged result it no longer exists.
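(In today's commands, that merge-then-drop approach might look like
this -- the commit id is a placeholder:)

    git pull upstream master        # the all-or-nothing merge from HEAD
    git revert <unwanted-commit>    # then record "that patch is no longer
                                    # relevant in this tree"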
That said, I don't claim that this is what you always want to do. In fact,
we have this exact issue in the kernel, and I think it's a great thing to
use a combination of a patch-based system _and_ git. You use git for the
"core", you use the patch-based system for the more fluid stuff. In the
case of the kernel, the patch-based system that seems to be used the most
is "quilt", but that's a detail. The big issue is that git is simply part
of a bigger process:
> I use peripheral branches to track versions of code in production,
> that may have different sets of patches applied. For that purpose,
> patch-based SCMs are quite helpful (you can ask what patch is applied
> where), but as Linus pointed out, they don't actually help convergence
> at all. Git pulls towards convergence like a demon AFAICS -- yet some
> primitive patch trading smarts would save a lot of effort at the
> borders of the dev network.
Yes. I think this is fundamental. "git" needs to converge. It's how git
works. git also "crystallizes" the history and makes it unbreakable.
Both of these things are _wonderful_ for what they mean, but both of these
fundamental issues are _horrible_ for other reasons.
So it's not a one-size-fits-all. git is designed to work around the kind
of stuff I do - my job in many ways is exactly to "crystallize" the work
that happens around me. I'm the ugly speck of dirt around which a
beautiful snowflake forms.
But to get the pieces that crystallize to form, you need all those unruly
patches floating around freely as a gas, and feeding the crystallization
process. And git is not suitable _at_all_ for that phase.
So I don't think that you should necessarily think of git as "the" source
control management in the big picture. It's how you handle _one_ phase of
development, and the constraints it puts on that phase are both wonderful
and horrible, and thus you really should see the git phase as being the
solid core, but no more.
So git (apart from the use of "ephemeral branches for testing") is _not_
very conducive to wild development. I think git is actually a wonderful
way of doing the wild development too, but only in a microscopic sense:
you can use git either for the "big picture" crystal, or you can use git
for the "I want to keep track of what I did" on a small scale, but then in
between those things you'd need to have a patch-manager or something.
In other words, I'd expect that a _lot_ of git usage is of the type:
- clone a throw-away tree from the standard repositories
- do random things in it, merge with the standard repo every once in a
while. This is the "little picture".
- export the result as a patch when you're happy with it, and use
something else to keep track of the patch until it can be merged into
the "big picture".
So I don't think any development effort that is big enough necessarily
wants to use git as the _only_ way of doing development and merging stuff.
The kernel certainly does not. Not now, and not in the long run.
(We know about the long run, because these issues were largely true of BK
too, although BK handled especially metadata merges much much better. But
even with BK, we always ended up having 50% or more of the actual
development end results going around as patches, and BK was really the way
to crystallize the end result. It shouldn't come as a surprise that git
does the same - git was designed not so much as a technical BK
replacement, but as a replacement for that _process_ we had for BK).
Final note: I do believe that the kernel is kind of strange. I doubt
anybody else ended up using BK the way we did, and I suspect _most_ BK
users used BK as a better CVS, where BK was the primary - and only - thing
that kept track of patches.
Whether the kernel model is applicable to anything else, I dunno.
Linus
* Re: Upstream merging and conflicts (was Re: Using cvs2git to track an external CVS project)
From: Martin Langhoff @ 2005-06-08 22:09 UTC (permalink / raw)
To: Git Mailing List
On 6/9/05, Linus Torvalds <torvalds@osdl.org> wrote:
> Yes, and right now that "manual" part is actually fairly high. I'll fix
> that thing up (today, I hope), making nontrivial merges much easier.
Wow -- I'll be tracking your tree closely then ;-)
> It's true that you can't skip patches, since a merge is always
> all-or-nothing, and the global history really requires that (in a very
> fundamental way). However, you _can_ merge from the HEAD, and then say
> "that patch is no longer relevant in this tree", and remove it. Then
> you'll never need to worry about that patch any more, because you've
> merged it, and in your merged result it no longer exists.
I had that strategy in my back-pocket already, but it doesn't sound right.
> That said, I don't claim that this is what you always want to do. In fact,
> we have this exact issue in the kernel, and I think it's a great thing to
> use a combination of a patch-based system _and_ git. You use git for the
> "core", you use the patch-based system for the more fluid stuff. In the
> case of the kernel, the patch-based system that seems to be used the most
> is "quilt", but that's a detail. The big issue is that git is simply part
> of a bigger process:
Sounds like I'll be reading up on quilt then. I guess that's what I
was looking for...
> Yes. I think this is fundamental. "git" needs to converge. It's how git
> works. git also "crystallizes" the history and makes it unbreakable.
> Both of these things are _wonderful_ for what they mean, but both of these
> fundamental issues are _horrible_ for other reasons.
Fair enough -- and actually I'm not convinced it's a horrible thing.
Having worked with forever-diverging tools like Arch, I can appreciate
the value of crystallizing and identifying when you've converged --
and rebasing all steps forward on the fact that you've converged. This
is huge.
A patch-based workflow is needed in the periphery of HEAD -- but
patch-based tools fail to see when they've converged. What I am
salivating about is the idea of some basic patch smarts based on
git/cogito that I can use to track things in simplistic scenarios.
Right now, as soon as I'm one patch "off", all of git's support breaks
down and it's really hard to keep merging forward -- unless I merge &
revert as discussed.
> So I don't think that you should necessarily think of git as "the" source
> control management in the big picture.
Yup. And quilt or other tools in the periphery. Something like the
git-aware darcs (which I haven't looked at yet).
> So I don't think any development effort that is big enough necessarily
> wants to use git as the _only_ way of doing development and merging stuff.
> The kernel certainly does not. Not now, and not in the long run.
Agreed. I'm happy to roll out some custom perl scripts around git (or
extend cogito a bit) if git can expose some stable holding points for
external tools to try and do some lightweight patch tracking.
> Whether the kernel model is applicable to anything else, I dunno.
I don't know either -- but I'm sure the toolset around git can support
a range of dev models. I don't think any other project has such a
large pool and strong convergence dynamics as the kernel. But git and
its tools and practices can be (I'm hoping) quite flexible to support
a range of dev models.
cheers,
martin
* Re: Upstream merging and conflicts (was Re: Using cvs2git to track an external CVS project)
From: Linus Torvalds @ 2005-06-09 2:34 UTC (permalink / raw)
To: Martin Langhoff; +Cc: Git Mailing List
On Thu, 9 Jun 2005, Martin Langhoff wrote:
> On 6/9/05, Linus Torvalds <torvalds@osdl.org> wrote:
> > Yes, and right now that "manual" part is actually fairly high. I'll fix
> > that thing up (today, I hope), making nontrivial merges much easier.
>
> Wow -- I'll be tracking your tree closely then ;-)
Well, it's done, and it's now "much easier" in the sense that anything
that doesn't have metadata changes should be picked up pretty trivially by
the three-way merge thing.
But if you move things around, then you'd need to have a merge that is
aware of movement, ie something much more sophisticated than just 3way
merge.
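(The mechanism at the file level, in today's terms: a three-way merge
combines "ours" and "theirs" relative to their common ancestor, and only
overlapping changes need manual fixup. File names are made up:)

    git merge-file ours.c base.c theirs.c  # writes the merged result into
                                           # ours.c, with conflict markers
                                           # where both sides touched the
                                           # same lines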
> > It's true that you can't skip patches, since a merge is always
> > all-or-nothing, and the global history really requires that (in a very
> > fundamental way). However, you _can_ merge from the HEAD, and then say
> > "that patch is no longer relevant in this tree", and remove it. Then
> > you'll never need to worry about that patch any more, because you've
> > merged it, and in your merged result it no longer exists.
>
> I had that strategy in my back-pocket already, but it doesn't sound right.
With global history, you really don't end up having much choice.
The alternative is to have per-file history, which sucks pretty badly
(you're screwed if you have a file that is touched by two things, so it
just moves the problem a bit, but more importantly you now have all the
CVS crud), or you have to play games like arch or darcs, which are all
about re-ordering patches (and then your history is totally malleable,
with all the problems that entails).
> > Yes. I think this is fundamental. "git" needs to converge. It's how git
> > works. git also "crystallizes" the history and makes it unbreakable.
> > Both of these things are _wonderful_ for what they mean, but both of these
> > fundamental issues are _horrible_ for other reasons.
>
> Fair enough -- and actually I'm not convinced it's a horrible thing.
It's absolutely not horrible, but it limits how you work. _I_ think it
limits you in good ways, but it's definitely a limitation.
> Having worked with forever-diverging tools like Arch, I can appreciate
> the value of crystallizing and identifying when you've converged --
> and rebasing all steps forward on the fact that you've converged. This
> is huge.
My gut feel is that it should be possible to have a hybrid system that
handles both the solid "crystalline" phase (aka git) and the "gas" phase
(aka free-flowing patches) and have them integrate with each other well.
That's kind of the way the kernel works, with people using quilt as a way
to capture the patches in between.
My read is that in this analogy, arch and darcs try to avoid the really
solid crystalline phase entirely and end up being amorphous. You can
probably have that too, but on the other hand it's fairly easy to merge
between two "crystallized" repositories and be totally unambiguous about
what the result is, while if there's a lot of the amorphous stuff going
on, it's not clear any more.
> > Whether the kernel model is applicable to anything else, I dunno.
>
> I don't know either -- but I'm sure the toolset around git can support
> a range of dev models. I don't think any other project has such a
> large pool and strong convergence dynamics as the kernel. But git and
> its tools and practices can be (I'm hoping) quite flexible to support
> a range of dev models.
Hey, I obviously think you're right. Using git gives good ways of
communicating the core infrastructure between two (or more) groups, while
internally each group may use looser patch-tracking systems
that don't have the same kind of convergence requirements (which you
don't need for a "small" set of patches anyway, where "small" can
obviously be hundreds of internal patches).
Linus
* Re: Upstream merging and conflicts (was Re: Using cvs2git to track an external CVS project)
From: Martin Langhoff @ 2005-06-09 11:03 UTC (permalink / raw)
To: Git Mailing List
On 6/9/05, Linus Torvalds <torvalds@osdl.org> wrote:
> Well, it's done, and it's now "much easier" in the sense that anything
> that doesn't have metadata changes should be picked up pretty trivially by
> the three-way merge thing.
>
> But if you move things around, then you'd need to have a merge that is
> aware of movement, ie something much more sophisticated than just 3way
> merge.
I've followed the discussion and it's really good. I'll be playing
with the code on a cvs2git import I'm doing.
Two questions:
- Is there a way to ask "what is the patchlog that you'll merge if I
ask you to merge from X"?
- When an "automatic" merge happens, is there anything that identifies
the resulting commit with the commit that is being merged, if the trees
are not identical? Is there a way to do that?
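(Today's git answers the first question directly; "X" here is whatever
head you would merge from:)

    git log HEAD..X   # the commits, with log messages, that a merge
                      # from X would bring in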
What I am thinking of doing is having a perl (or bash?) script that
looks at the changelog of two branches since they last converged,
looks at each commit and makes an educated guess of what patches are
in both and /facilitates/ extracting a patch from the remote branch and
applying it locally with the same commit msg.
There are no promises on the guess -- it has to be reviewed & checked
-- but I find that is always true when trading patches across
branches. A lightweight, best-effort script to help someone who is
going to backport some patches from HEAD to the stable branch.
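(A minimal sketch of such a script in shell, keying on patch content
rather than commit id; the branch names "stable" and "master" are made
up, and this is roughly what git cherry automates:)

    #!/bin/sh
    # index each side's commits since the branches diverged by patch-id
    base=$(git merge-base stable master)
    git rev-list "$base"..master |
      while read c; do git show "$c" | git patch-id; done | sort >/tmp/in-master
    git rev-list "$base"..stable |
      while read c; do git show "$c" | git patch-id; done | sort >/tmp/in-stable
    # patch-ids on master with no content-equivalent on stable:
    # candidates for backporting, to be reviewed by hand
    join -v 1 /tmp/in-master /tmp/in-stable

(git cherry stable master produces a similar +/- listing in one step.)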
> > > It's true that you can't skip patches, since a merge is always
> > > all-or-nothing, and the global history really requires that (in a very
> > > fundamental way). However, you _can_ merge from the HEAD, and then say
> > > "that patch is no longer relevant in this tree", and remove it. Then
> > > you'll never need to worry about that patch any more, because you've
> > > merged it, and in your merged result it no longer exists.
> >
> > I had that strategy in my back-pocket already, but it doesn't sound right.
>
> With global history, you really don't end up having much choice.
>
> The alternative is to have per-file history, which sucks pretty badly
I don't quite follow you on why the per-file or per-tree worldview
affects this. Not very important though ;)
> My gut feel is that it should be possible to have a hybrid system that
> handles both the solid "crystalline" phase (aka git) and the "gas" phase
> (aka free-flowing patches) and have them integrate with each other well.
> That's kind of the way the kernel works, with people using quilt as a way
> to capture the patches in between.
Absolutely. For most projects I suspect that the gas (fluid?) phase
can be quite simple. In fact, simple is better.
> My read is that in this analogy, arch and darcs try to avoid the really
> solid crystalline phase entirely and end up being amorphous. You can
> probably have that too, but on the other hand it's fairly easy to merge
> between two "crystallized" repositories and be totally unambiguous about
> what the result is, while if there's a lot of the amorphous stuff going
> on, it's not clear any more.
After a while, and a few thousand patches since you "branched",
patch-based SCMs don't help you converge. At no point can they recognize
that two trees have converged and that the patch tracking has become
irrelevant.
> > > Whether the kernel model is applicable to anything else, I dunno.
> >
> > I don't know either -- but I'm sure the toolset around git can support
> > a range of dev models. I don't think any other project has such a
> > large pool and strong convergence dynamics as the kernel. But git and
> > its tools and practices can be (I'm hoping) quite flexible to support
> > a range of dev models.
>
> Hey, I obviously think you're right. Using git gives good ways of
> communicating the core infrastructure between two (or more) groups, while
> internally each group may use looser patch-tracking systems
> that don't have the same kind of convergence requirements (which you
> don't need for a "small" set of patches anyway, where "small" can
> obviously be hundreds of internal patches).
Interesting concept -- git as the head branch where everyone converges,
while perhaps using other tools. Still... looser patch-tracking
strategies can be based on git primitives, like some kind of commit
identity that travels with the merges, perhaps not to be used by git
itself but as a hooking point for naive patch tracking. But I suspect it
may be anathema to the git philosophy ;-)
cheers,
martin