git.vger.kernel.org archive mirror
* Git Vs. Svn for a project which *must* distribute binaries too.
@ 2007-06-04 11:48 Bryan Childs
  2007-06-04 11:56 ` Julian Phillips
                   ` (4 more replies)
  0 siblings, 5 replies; 25+ messages in thread
From: Bryan Childs @ 2007-06-04 11:48 UTC (permalink / raw)
  To: git

Hello git users / maintainers / fans,

My fellow projecteers and I recently watched a presentation by Linus
Torvalds on the advantages of git, given at a Google Q&A session.

Our project, www.rockbox.org, an open source firmware replacement for
digital audio players, currently uses Subversion for its source code
management, but Linus's eloquent (though sometimes rather blunt) talk
has made us question whether git might be a better solution for us.

On the whole we like a lot of the features it offers, but we have a
couple of issues which we've discussed and so far failed to resolve
satisfactorily.

1) Due to the nature of our project, with multiple architectures
supported, we strive to provide a binary build of our software with
every commit to the subversion repository. This is so that we can
provide a working firmware for the majority of our users that don't
have the necessary know-how for cross-compiling and so forth.

2) Unlike the Linux Kernel, which Linus uses as a prime example of
something git is very useful for, the Rockbox project has no central
figurehead for anyone to consider as owning the "master" repository
from which to build the "current" version of the Rockbox firmware for
any given target.

3) With a central repository, for which we have a limited number of
individuals having commit access, it's easy for us to automate a build
based on each commit the repository receives.

Given these three points, we wonder how we'd best achieve the same
using git. As far as we can make out we'd need to appoint someone as a
maintainer for a master repository whose job it is to co-ordinate
pulls from people based on when they've made changes we wish to
include in the latest version of our software. This sounds like a
time-consuming role for a project staffed entirely by volunteers.

Can anyone offer any insights for us here?

Bryan

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 11:48 Bryan Childs
@ 2007-06-04 11:56 ` Julian Phillips
  2007-06-04 13:18 ` Theodore Tso
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 25+ messages in thread
From: Julian Phillips @ 2007-06-04 11:56 UTC (permalink / raw)
  To: Bryan Childs; +Cc: git

On Mon, 4 Jun 2007, Bryan Childs wrote:

> 2) Unlike the Linux Kernel, which Linus uses as a prime example of
> something git is very useful for, the Rockbox project has no central
> figurehead for anyone to consider as owning the "master" repository
> from which to build the "current" version of the Rockbox firmware for
> any given target.
>
> 3) With a central repository, for which we have a limited number of
> individuals having commit access, it's easy for us to automate a build
> based on each commit the repository receives.
>
> Given these three points, we wonder how we'd best achieve the same
> using git. As far as we can make out we'd need to appoint someone as a
> maintainer for a master repository whose job it is to co-ordinate
> pulls from people based on when they've made changes we wish to
> include in the latest version of our software. This sounds like a time
> consuming role for a project which is only staffed by volunteers.

You can set up git to work in a centralised style if you wish.

See http://www.kernel.org/pub/software/scm/git/docs/cvs-migration.html

-- 
Julian

  ---
If reporters don't know that truth is plural, they ought to be lawyers.
 		-- Tom Wicker


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 11:48 Bryan Childs
  2007-06-04 11:56 ` Julian Phillips
@ 2007-06-04 13:18 ` Theodore Tso
  2007-06-04 14:58 ` Johannes Schindelin
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 25+ messages in thread
From: Theodore Tso @ 2007-06-04 13:18 UTC (permalink / raw)
  To: Bryan Childs; +Cc: git

On Mon, Jun 04, 2007 at 12:48:17PM +0100, Bryan Childs wrote:
> 2) Unlike the Linux Kernel, which Linus uses as a prime example of
> something git is very useful for, the Rockbox project has no central
> figurehead for anyone to consider as owning the "master" repository
> from which to build the "current" version of the Rockbox firmware for
> any given target.

> 3) With a central repository, for which we have a limited number of
> individuals having commit access, it's easy for us to automate a build
> based on each commit the repository receives.

You might want to take a look at http://repo.or.cz for an example of
how you can have a limited number of trusted individuals with commit
access.  As has been said before, <SCM> is not a substitute for
communication, and if you have multiple people who can commit into a
repository, you had better make sure those trusted individuals with
commit access are talking to each other.  

There are some folks who have created hooks that implement more
fine-grained access control, if you want to replicate SVN's ability to
control who can commit to which branch.  
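For the curious, such a hook might look roughly like this (a sketch, not any particular published hook; the allow-list, the user names, and the "stable" branch are all illustrative):

```shell
#!/bin/sh
# Sketch of a git "update" hook (assumption: installed as
# hooks/update in the central repository; git calls it with
# <refname> <old-sha1> <new-sha1>, and a non-zero exit rejects
# that single ref update).

ALLOWED="alice bob"              # illustrative: who may push to "stable"

allowed_to_push() {              # $1 = user, $2 = refname
    case "$2" in
    refs/heads/stable)
        for u in $ALLOWED; do
            [ "$u" = "$1" ] && return 0
        done
        return 1                 # not on the allow-list
        ;;
    *)
        return 0                 # every other branch is open
        ;;
    esac
}

# The real hook body would end with something like:
#   allowed_to_push "$USER" "$1" || exit 1
```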

Regards,

						- Ted


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 11:48 Bryan Childs
  2007-06-04 11:56 ` Julian Phillips
  2007-06-04 13:18 ` Theodore Tso
@ 2007-06-04 14:58 ` Johannes Schindelin
  2007-06-04 15:20 ` Linus Torvalds
  2007-06-04 23:46 ` Jakub Narebski
  4 siblings, 0 replies; 25+ messages in thread
From: Johannes Schindelin @ 2007-06-04 14:58 UTC (permalink / raw)
  To: Bryan Childs; +Cc: git

Hi,

On Mon, 4 Jun 2007, Bryan Childs wrote:

> 1) Due to the nature of our project, with multiple architectures
> supported, we strive to provide a binary build of our software with
> every commit to the subversion repository.

Git has no problems with binaries. Actually, one could argue that it has 
fewer problems with binary files than with text files, since it only 
recently acquired the capability (disabled by default) to transcribe 
certain files into the CR/LF line endings some Windows programs still 
insist on.

As for checking in binaries, you could even set up a post-commit hook, 
which builds the binary, and checks it into a separate branch...
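One way to realize that idea with git plumbing (a sketch; the "builds" branch name and file layout are made up, and a hook would call this function after running the actual build):

```shell
#!/bin/sh
# commit_artifact stores one built file on a separate branch without
# ever touching the source working tree, using plumbing commands.

commit_artifact() {              # $1 = path to built binary, $2 = branch
    blob=$(git hash-object -w "$1")
    # a tree holding just the artifact
    tree=$(printf '100644 blob %s\t%s\n' "$blob" "$(basename "$1")" | git mktree)
    parent=$(git rev-parse -q --verify "refs/heads/$2" 2>/dev/null || true)
    if [ -n "$parent" ]; then
        commit=$(echo "build of $(git rev-parse HEAD)" |
                 git commit-tree "$tree" -p "$parent")
    else
        commit=$(echo "build of $(git rev-parse HEAD)" |
                 git commit-tree "$tree")
    fi
    git update-ref "refs/heads/$2" "$commit"
}
```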

Ciao,
Dscho


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 11:48 Bryan Childs
                   ` (2 preceding siblings ...)
  2007-06-04 14:58 ` Johannes Schindelin
@ 2007-06-04 15:20 ` Linus Torvalds
  2007-06-04 15:38   ` Bryan Childs
  2007-06-04 23:46 ` Jakub Narebski
  4 siblings, 1 reply; 25+ messages in thread
From: Linus Torvalds @ 2007-06-04 15:20 UTC (permalink / raw)
  To: Bryan Childs; +Cc: git



On Mon, 4 Jun 2007, Bryan Childs wrote:
> 
> 1) Due to the nature of our project, with multiple architectures
> supported, we strive to provide a binary build of our software with
> every commit to the subversion repository. This is so that we can
> provide a working firmware for the majority of our users that don't
> have the necessary know-how for cross-compiling and so forth.

Git has no problems with binaries, but I _really_ hope that you don't 
actually want to check these binaries into the repository? You could do 
that, and the git delta algorithm might even be able to compress the 
binaries against each other, but it could still be pretty nasty.

And by "pretty nasty" I don't mean that git won't be able to handle it: I 
suspect it's no worse from a disk size perspective than SVN.  But since 
git is distributed, it means that everybody who fetches it will get the 
whole archive with whole history - it means that cloning the result is 
going to be really painful with tons of old binaries that nobody really 
cares about being pushed around.

So I *hope* that you want to just have automated build machinery that 
builds the binaries to a *separate* location? You could use git to archive 
them, and you can obviously (and easily) name the resulting binary blobs 
by the versions in the source tree, but I'm just saying that trying to 
track the binaries from within the same git repository as the source code 
is less than optimal.

> 2) Unlike the Linux Kernel, which Linus uses as a prime example of
> something git is very useful for, the Rockbox project has no central
> figurehead for anyone to consider as owning the "master" repository
> from which to build the "current" version of the Rockbox firmware for
> any given target.

The kernel is really kind of odd in that it has just a single maintainer. 
That's usually the case only for much smaller projects.

And no, git is not at all exclusively *designed* for that situation, 
although it is arguably one situation that git works really well for. 

There is nothing to say that you cannot have shared repositories that are 
writable by multiple users. Anything that works for a single person works 
equally well for a "group of people" that all write to the same central 
git repo. It ends up not being how the kernel does things (not because of 
git, but because it's not how I've ever worked), but the kernel situation 
really _is_ pretty unusual.

So git makes everybody have their own repository in order to commit, but 
you can (and some people do) just view that as your "CVS working tree", 
and every time you commit, you end up pushing to some central repository 
that is writable by the "core group" that has commit access.

In *practice*, I suspect that once you get used to the git model, you'd 
actually end up with a hybrid scheme, where you might have a *smaller* 
core group with commit access to the central repository (in git, it 
wouldn't be "commit access", it would really be "ability to push", but 
that's a technical difference rather than anything conceptually huge), and 
members in that core group end up pulling from others.

But that would literally be once you have gotten used to the git model, 
and you can start out just totally emulating the old CVS/SVN model with a 
single central repository.
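As a concrete starting point, that emulation needs nothing more than one bare, group-writable repository (a sketch; the path, server, and branch names are illustrative):

```shell
# On the server: a bare repository the whole core group can push to.
# (The ${central:-...} default is only so this sketch is runnable.)
central=${central:-$(mktemp -d)/rockbox.git}
git init -q --bare --shared=group "$central"

# Each committer then works like this:
#   git clone ssh://server/path/to/rockbox.git
#   cd rockbox
#   ...edit...; git commit -a        # as often as you like, locally
#   git push origin master           # when the work should go public
```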

> 3) With a central repository, for which we have a limited number of
> individuals having commit access, it's easy for us to automate a build
> based on each commit the repository receives.

.. and that's exactly how you'd do it with git too. You wouldn't have a 
"commit trigger", but you'd have a "receive trigger", which triggers 
whenever somebody pushes to the central repository.
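Such a receive trigger could be sketched like this (assuming it is installed as hooks/post-receive in the central repository; git feeds it one "<old> <new> <ref>" line per updated ref on stdin, and the make step is a stand-in for the real cross-compile machinery):

```shell
#!/bin/sh
# hooks/post-receive sketch: build the freshly pushed tip of master.

run_builds() {                    # reads "<old> <new> <ref>" lines on stdin
    while read old new ref; do
        [ "$ref" = "refs/heads/master" ] || continue
        scratch=$(mktemp -d)
        git archive "$new" | tar -x -C "$scratch"          # export pushed tip
        ( cd "$scratch" && make ) >/dev/null 2>&1 || true  # stand-in build
        echo "built $new from $ref"
    done
}

# The real hook body would simply be:  run_builds
```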

And that does mean that a developer might do a series of _five_ commits 
locally on his own machine, and they are totally invisible to everybody 
until he pushes to the central repository: and then the build will build 
just the top-most end result commit. So you'd not necessarily have a 
binary for _each_ commit, but:

 - you could (if you really wanted to) actually force people to always 
   send just one commit at a time. You could even enforce that in the 
   pre-receive triggers, so that people *cannot* push multiple commits at 
   a time.

   Quite frankly, I really don't think you want to go this way. I think 
   you want to perhaps _encourage_ people to send just one commit at a 
   time, but the much better model is the other choice:

 - realize that the git model tends to encourage many small commits 
   (because you *can* make commits without impacting others), so when you 
   fix something, or add a new feature, with git, you can do it as many 
   small steps, and then only "push" when it's ready.

   IOW, if you encourage people to do small step-wise changes, you 
   probably don't even *want* a build for each commit, you really want a 
   build for the case where "my feature is now ready, I'll push". So you'd 
   effectively get one build not per commit, but per "publication point".
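The one-commit-at-a-time enforcement mentioned above could be sketched as a pre-receive check (illustrative, not a recommendation; git feeds the hook one "<old> <new> <ref>" line per ref on stdin):

```shell
#!/bin/sh
# hooks/pre-receive sketch: refuse pushes of more than one commit.

reject_multi_commit() {           # reads "<old> <new> <ref>" lines on stdin
    while read old new ref; do
        case "$old" in
        *[!0]*) n=$(git rev-list --count "$old..$new") ;;  # existing branch
        *)      n=$(git rev-list --count "$new") ;;        # brand-new branch
        esac
        if [ "$n" -gt 1 ]; then
            echo "pre-receive: push one commit at a time ($n received)" >&2
            return 1
        fi
    done
}

# The real hook body would be:  reject_multi_commit || exit 1
```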

But anyway, it really boils down to: you *can* use a distributed 
development model to emulate a totally centralized situation (put another 
way: "centralized" is just one very trivial special case of 
"distributed"), but I suspect that while you might want to start out 
trying to change as little as possible in your development model, I 
equally strongly suspect that you'll find out that the distributed nature 
makes _some_ changes to the model very natural, and you'll end up with 
more of a hybrid setup: aspects of a centralized model, but with 
distributed elements.

		Linus


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 15:20 ` Linus Torvalds
@ 2007-06-04 15:38   ` Bryan Childs
  2007-06-04 16:23     ` Linus Torvalds
                       ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: Bryan Childs @ 2007-06-04 15:38 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: git

On 6/4/07, Linus Torvalds <torvalds@linux-foundation.org> wrote:
> So I *hope* that you want to just have automated build machinery that
> builds the binaries to a *separate* location? You could use git to archive
> them, and you can obviously (and easily) name the resulting binary blobs
> by the versions in the source tree, but I'm just saying that trying to
> track the binaries from within the same git repository as the source code
> is less than optimal.

Oh lord no - I never meant to imply that we'd be checking those
binaries in, I just meant to highlight that we need a central
repository to build those binaries from - otherwise we'd end up with a
selection of binaries for our users to download which contain a bunch
of different features if they were built from a combination of
repositories. I know you think everyone else is a moron, but we're not
quite dumb enough to think maintaining binaries in a repository is a
good idea :)


> In *practice*, I suspect that once you get used to the git model, you'd
> actually end up with a hybrid scheme, where you might have a *smaller*
> core group with commit access to the central repository (in git, it
> wouldn't be "commit access", it would really be "ability to push", but
> that's a technical difference rather than anything conceptually huge), and
> members in that core group end up pulling from others.

This sounds like what we eventually came up with. I'm not sure how
soon we'll make a switch to a git repository, but when we do, this
seems to be the best model for the conversion in the short term, and
perhaps in the long term too.


> .. and that's exactly how you'd do it with git too. You wouldn't have a
> "commit trigger", but you'd have a "receive trigger", which triggers
> whenever somebody pushes to the central repository.

Yes, after I'd sent my email this morning I found you could do pushes
as well as pulls. That'll teach me to RTFM properly next time.

>  - realize that the git model tends to encourage many small commits
>    (because you *can* make commits without impacting others), so when you
>    fix something, or add a new feature, with git, you can do it as many
>    small steps, and then only "push" when it's ready.

This is what I personally was trying to advocate in our discussion -
but I'm not sure everyone quite understood it. Hopefully your
explanation will do a better job :)

>    IOW, if you encourage people to do small step-wise changes, you
>    probably don't even *want* a build for each commit, you really want a
>    build for the case where "my feature is now ready, I'll push". So you'd
>    effectively get one build not per commit, but per "publication point".

Absolutely.

>                 Linus

Thanks for your time (and everyone else who replied) - it's very much
appreciated!

Bryan


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 15:38   ` Bryan Childs
@ 2007-06-04 16:23     ` Linus Torvalds
  2007-06-04 17:57       ` Thomas Glanzmann
  2007-06-04 22:29     ` Martin Langhoff
  2007-06-04 23:48     ` Daniel Barkalow
  2 siblings, 1 reply; 25+ messages in thread
From: Linus Torvalds @ 2007-06-04 16:23 UTC (permalink / raw)
  To: Bryan Childs; +Cc: git



On Mon, 4 Jun 2007, Bryan Childs wrote:
> 
> Oh lord no - I never meant to imply that we'd be checking those
> binaries in, I just meant to highlight that we need a central
> repository to build those binaries from

Heh. I get worried (and judging from other responses, I wasn't the only 
one) when people start talking about generated binaries and SCM's.

Because people _have_ traditionally done things like commit the generated 
files too. 

But if it's just an automated build server, everything is good. That's 
trivial to do.

> > In *practice*, I suspect that once you get used to the git model, you'd
> > actually end up with a hybrid scheme, where you might have a *smaller*
> > core group with commit access to the central repository (in git, it
> > wouldn't be "commit access", it would really be "ability to push", but
> > that's a technical difference rather than anything conceptually huge), and
> > members in that core group end up pulling from others.
> 
> This sounds like what we eventually came up with. I'm not sure how
> soon we'll make a switch to a git repository, but when we do, this
> seems to be the best model for the conversion in the short term, and
> perhaps in the long term too.

Yes. As mentioned, the kernel model of having just one person push is 
actually fairly rare. 

When you have multiple people pushing, you have issues that I never have, 
but that you've already seen with CVS/SVN, for all the same reasons: you 
may need to merge the changes that others have done while you were working 
on yours.

However, the git "push" model is *different* from the CVS/SVN "commit" 
model.

In CVS/SVN, if you want to commit, and somebody else has done updates to 
the central repository, the "cvs commit" phase will obviously tell you 
that you're not up-to-date, and you cannot commit at all. So you end up 
doing a "cvs update -d" equivalent to first update your tree, then you 
have to resolve any conflicts, and then you can try to commit again.

In git, this is technically very different, yet similar. Since you can 
always commit to your *local* repository, when you do a "git commit", 
you'll never have any conflicts at all, because there is no conflicting 
work!

But the conflicts happen when you then do a "git push" to send out your 
commit(s) to the central repository. If somebody else has pushed changes 
in the meantime, you'll get exactly the same kind of situation as when you 
do a CVS commit, and the server will tell you that you're not up-to-date, 
and will refuse to take your push.

(The message is different: git will tell you that you are trying to push 
a commit that is not a "strict superset" of what the central repository 
has.)

So when that happens with git, you actually have two different options:

 - you can do "git pull" to merge the central changes, and in that case 
   you get the exact same kinds of conflict markers for any conflicting 
   code that you would have gotten for "cvs update"

   This is how most people would probably use it, and it's the simplest 
   one, where you get very traditional commit conflict markers, fix it up, 
   and commit the merge. 

   However, it does end up making the history explicitly show the 
   parallelism that happened, and while that is *correct* and can be very 
   useful, sometimes it means that especially if you've done just trivial 
   changes, you might want to take an alternate approach that "linearizes" 
   the history and makes it appear linear instead of parallel:

 - instead of doing a "git pull" that merges the two branches (your work, 
   and the work that happened by somebody else in the central repo while 
   you did it), you *may* also just want to do a "git fetch" to fetch the 
   changes from the central repo, and then do "git rebase origin" to 
   linearize the work you did on _top_ of those central repo commits (so that 
   it no longer looks like a branch, and looks linear)

   In the "git rebase" case, you'll effectively merge your commits one at 
   a time, and you may thus have to fix up *multiple* conflicts. So it's 
   potentially more work, but it results in a simpler history if you want 
   it.

Regardless of how you ended up sorting out the fact that you had parallel 
development, once you've resolved it, you do a "git push" again, and now 
the stuff you're pushing is a proper superset of what the central 
repository had, so it will happily push it out.

(Of course, the exact same thing that can happen with CVS central 
repositories can happen with git ones too: by the time you've resolved all 
the differences and are ready to push them to the central one, somebody 
else might have pushed *more*, and you may need to do another "update" ;)
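The whole dance can be played through in a throwaway demo (a sketch: "alice" and "bob" stand in for two committers, all paths are temp dirs, and master is assumed as the shared branch):

```shell
#!/bin/sh
set -e
top=$(mktemp -d)
git init -q --bare "$top/central.git"
git --git-dir="$top/central.git" symbolic-ref HEAD refs/heads/master

git clone -q "$top/central.git" "$top/alice" 2>/dev/null
(cd "$top/alice" &&
 git config user.email a@example.com && git config user.name alice &&
 git symbolic-ref HEAD refs/heads/master &&
 echo base > shared && git add shared && git commit -qm base &&
 git push -q origin master)

git clone -q "$top/central.git" "$top/bob"
cd "$top/bob"
git config user.email b@example.com; git config user.name bob

# Alice pushes more work while Bob commits locally...
(cd "$top/alice" && echo more > extra && git add extra &&
 git commit -qm alice2 && git push -q origin master)
echo mine > bobfile; git add bobfile; git commit -qm bob1

# ...so Bob's push is rejected: his tip is not a strict superset
git push -q origin master 2>/dev/null || echo "push rejected"

# fetch, rebase to linearize, and the next push is accepted
git fetch -q origin
git rebase -q origin/master
git push -q origin master
```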

> Yes, after I'd sent my email this morning I found you could do pushes
> as well as pulls. That'll teach me to RTFM properly next time.

I think we talk a lot more about pulls, because we have had more people 
ask about them, and because more people tend to pull than to push.

The pull is also somewhat easier to explain. The pushing thing always has 
to talk about resolving differences when different people have pushed, so 
teaching people to push by necessity involves first teaching them about 
merging (ie pull or rebase).

Also, "push" is a bit more interesting to explain, because a "push" 
won't update the working tree on the other end, so when you explain 
pushing, you should also explain about "bare" repositories (which I didn't 
do), ie about having git repositories without any working tree associated 
with them.
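A quick way to see what "bare" means (throwaway sketch; the path is made up):

```shell
tmp=$(mktemp -d)
git init -q --bare "$tmp/project.git"

# Only the object database and refs live here; there is no checked-out
# working tree, which is exactly why pushing into it is safe.
ls "$tmp/project.git"                      # HEAD, config, objects, refs, ...
git --git-dir="$tmp/project.git" rev-parse --is-bare-repository   # true
```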

So there is a bit of a learning experience involved, but especially if 
some of the developers have seen git used in other environments (perhaps 
not as developers, just as users), it shouldn't be *that* hard to pick up. 
But there does seem to be a pretty big mental leap from the "centralized" 
thing to the "distributed" thing - I just moved over so long ago that I 
even have trouble understanding why people sometimes don't seem to find 
the distributed model the only natural and sane thing to do.

(It really does seem to be one of those "aha!" moments. People think 
distributed just adds a lot of complexity, and it takes an "Oh, *THAT* is 
how it works" kind of enlightenment to just switch your brain over, and I 
guarantee that once that moment of enlightenment hits, you'll never go 
back, but I cannot guarantee that that moment will happen for all 
developers ;)

		Linus


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 16:23     ` Linus Torvalds
@ 2007-06-04 17:57       ` Thomas Glanzmann
  2007-06-04 20:45         ` Linus Torvalds
  0 siblings, 1 reply; 25+ messages in thread
From: Thomas Glanzmann @ 2007-06-04 17:57 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Bryan Childs, git

Hello,

>  - instead of doing a "git pull" that merges the two branches (your work, 
>    and the work that happened by somebody else in the central repo while 
>    you did it), you *may* also just want to do a "git fetch" to fetch the 
>    changes from the central repo, and then do "git rebase origin" to 
>    linearize the work you did on _top_ of those central repo commits (so that 
>    it no longer looks like a branch, and looks linear)

>    In the "git rebase" case, you'll effectively merge your commits one at 
>    a time, and you may thus have to fix up *multiple* conflicts. So it's 
>    potentially more work, but it results in a simpler history if you want 
>    it.

Thank you a lot. I finally understood what "git rebase" is all about!

        Thomas


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 17:57       ` Thomas Glanzmann
@ 2007-06-04 20:45         ` Linus Torvalds
  2007-06-04 21:21           ` Olivier Galibert
  0 siblings, 1 reply; 25+ messages in thread
From: Linus Torvalds @ 2007-06-04 20:45 UTC (permalink / raw)
  To: Thomas Glanzmann; +Cc: Bryan Childs, git



On Mon, 4 Jun 2007, Thomas Glanzmann wrote:
> 
> >  - instead of doing a "git pull" that merges the two branches (your work, 
> >    and the work that happened by somebody else in the central repo while 
> >    you did it), you *may* also just want to do a "git fetch" to fetch the 
> >    changes from the central repo, and then do "git rebase origin" to 
> >    linearize the work you did on _top_ of those central repo commits (so that 
> >    it no longer looks like a branch, and looks linear)
> > 
> >    In the "git rebase" case, you'll effectively merge your commits one at 
> >    a time, and you may thus have to fix up *multiple* conflicts. So it's 
> >    potentially more work, but it results in a simpler history if you want 
> >    it.
> 
> Thank you a lot. I finally understood what "git rebase" is all about!

I'd like to point out some more upsides and downsides of "git rebase".

Downsides:

 - you're rewriting history, so you MUST NOT have made your pre-rebase 
   changes available publicly anywhere else (or you are in a world of pain 
   with duplicate history and tons of confusion)

 - you can only rebase "simple" commits. If you don't just have a linear 
   history of your own commits, but have merged from others, rebasing 
   isn't a sane alternative (yeah, we could make it do something half-way 
   sane, but really, it's not worth even contemplating)

Upsides:

 - while there may be more conflicts you have to sort out, they may be 
   individually simpler, so you *might* actually prefer to do it that 
   way.

 - if the reason for the conflicts is that upstream did some nice cleanup 
   in the same area, and you decide that you would actually want to re-do 
   your development based on that nice cleanup, then "git rebase" can 
   actually be used as a way to help you do exactly that. IOW, you can 
   take _advantage_ of the conflicts as a way to re-apply the patches but 
   also then fix them up by hand to work in the new (better) world order.

And finally, the upside that is probably the most common case for using 
"git rebase", and has nothing to do with resolving conflicts before 
pushing them out with "git push":

 - if you actually want to send your changes upstream as emailed *patches* 
   rather than by pushing them out (or asking somebody else to pull them),
   rebasing is an excellent way to keep the set of patches "fresh" on top 
   of the current development tree.

   People who send their patches out as emails are also unlikely to have 
   the downsides (ie they normally send them as patches exactly *because* 
   they don't want to make their git trees public, and they probably just 
   have a small set of simple patches in their tree anyway)
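That emailed-patches flow can be played through end to end (a self-contained sketch; the branch names and the one-line "fix" are made up, and git send-email is where the generated files would then go):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email dev@example.com; git config user.name dev
git symbolic-ref HEAD refs/heads/master
echo base > f; git add f; git commit -qm base

git checkout -qb topic
echo fix >> f; git commit -qam "fix: one small change"

# keep the series fresh on top of master, then turn each commit
# into a mail-ready patch file
git rebase -q master
git format-patch -o "$tmp/out" master    # prints the generated file names
```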

So I have to say, I'm still very ambivalent about rebasing. It's 
definitely a very useful thing to do, but at the same time I think "git 
pull" in many ways is often the more honest and correct way to do things.

		Linus


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 20:45         ` Linus Torvalds
@ 2007-06-04 21:21           ` Olivier Galibert
  2007-06-04 21:33             ` Linus Torvalds
  2007-06-05  2:56             ` Johannes Schindelin
  0 siblings, 2 replies; 25+ messages in thread
From: Olivier Galibert @ 2007-06-04 21:21 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Thomas Glanzmann, Bryan Childs, git

On Mon, Jun 04, 2007 at 01:45:26PM -0700, Linus Torvalds wrote:
> I'd like to point out some more upsides and downsides of "git rebase".
> 
> Downsides:
> 
>  - you're rewriting history, so you MUST NOT have made your pre-rebase 
>    changes available publicly anywhere else (or you are in a world of pain 
>    with duplicate history and tons of confusion)

Wouldn't it be possible to register the rebase somewhere (weak parent?
some kind of note not influencing the sha1?) that pull/merge could
follow?  Rebases and cherry-picking are a special kind of merge, so
maybe it can be handled like one where it counts...

  OG.


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 21:21           ` Olivier Galibert
@ 2007-06-04 21:33             ` Linus Torvalds
  2007-06-04 22:30               ` Joel Becker
  2007-06-05  2:56             ` Johannes Schindelin
  1 sibling, 1 reply; 25+ messages in thread
From: Linus Torvalds @ 2007-06-04 21:33 UTC (permalink / raw)
  To: Olivier Galibert; +Cc: Thomas Glanzmann, Bryan Childs, git



On Mon, 4 Jun 2007, Olivier Galibert wrote:

> On Mon, Jun 04, 2007 at 01:45:26PM -0700, Linus Torvalds wrote:
> > I'd like to point out some more upsides and downsides of "git rebase".
> > 
> > Downsides:
> > 
> >  - you're rewriting history, so you MUST NOT have made your pre-rebase 
> >    changes available publicly anywhere else (or you are in a world of pain 
> >    with duplicate history and tons of confusion)
> 
> Wouldn't it be possible to register the rebase somewhere (weak parent?
> some kind of note not influencing the sha1 ?) that pull/merge could
> follow?  Rebases and cherry-picking are a special kind of merge, so
> maybe it can be handled like one where it counts...

Well, it's not like duplicate history is a disaster from a *technical* 
angle. It might be a small space-waster etc, but that's not the real 
issue.

The problem with duplicate history is that it just makes things much 
harder to look at. IOW, it's *messy*. So the "tons of confusion" part is 
basically purely about humans, not about git itself. Git won't really 
care, and there's no reason to "handle" it specially in that sense.

So I would strongly discourage people from ever making rebased history 
available, not because of any particular git technical issues, but just 
because it is a good way to confuse all the _humans_ involved.

(That said, git's own 'pu' branch ends up jumping around, and it hasn't 
caused all that much confusion, so maybe I'm overstating even that human 
confusion)

			Linus


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 15:38   ` Bryan Childs
  2007-06-04 16:23     ` Linus Torvalds
@ 2007-06-04 22:29     ` Martin Langhoff
  2007-06-04 23:48     ` Daniel Barkalow
  2 siblings, 0 replies; 25+ messages in thread
From: Martin Langhoff @ 2007-06-04 22:29 UTC (permalink / raw)
  To: Bryan Childs; +Cc: Linus Torvalds, git

On 6/5/07, Bryan Childs <godeater@gmail.com> wrote:
> Oh lord no - I never meant to imply that we'd be checking those
> binaries in, I just meant to highlight that we need a central
> repository to build those binaries from - otherwise we'd end up with a

If your infrastructure to build the binaries is automated, you can
easily script the build for new incoming commits. The output of
git-describe is really useful for this if you are going to name your
builds `git describe`-<arch>.tar.gz.
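For instance (a throwaway sketch; the tag, project, and architecture names are made up):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email dev@example.com; git config user.name dev
echo a > f; git add f; git commit -qm one
git tag -a v1.0 -m v1.0
echo b > f; git commit -qam two

# git describe names the tree relative to the nearest annotated tag:
# here "v1.0-1-g<abbrev sha>", i.e. one commit past v1.0
name="rockbox-$(git describe)-ipodvideo.tar.gz"
echo "$name"
```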

OTOH, commit is different from push (vs SVN where both are one op),
and that means that when using git you can present a large change as a
better-explained patch-series. That's actually a good practice for new
development, and it might not make sense to have literally
one-build-per-commit.

Maybe I'd enable auto-builds for maintenance/bugfixes branches, and on
other (experimental/devel) branches only auto-build commits selected
explicitly (tagged?).

cheers,


martin


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 21:33             ` Linus Torvalds
@ 2007-06-04 22:30               ` Joel Becker
  2007-06-05 11:19                 ` Theodore Tso
  0 siblings, 1 reply; 25+ messages in thread
From: Joel Becker @ 2007-06-04 22:30 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Olivier Galibert, Thomas Glanzmann, Bryan Childs, git

On Mon, Jun 04, 2007 at 02:33:18PM -0700, Linus Torvalds wrote:
> (That said, git's own 'pu' branch ends up jumping around, and it hasn't 
> caused all that much confusion, so maybe I'm overstating even that human 
> confusion)

	It survives because it is well-known.  Everyone expects it to
break.  ocfs2 has an "ALL" branch that is everything we have working,
sort of a "test this bleeding edge" thing.  It gets rebased all the
time, and everyone knows that they can't trust it to update linearly.
Other developers have similar things in their repositories.

Joel

-- 

"What no boss of a programmer can ever understand is that a programmer
 is working when he's staring out of the window"
	- With apologies to Burton Rascoe

Joel Becker
Principal Software Developer
Oracle
E-mail: joel.becker@oracle.com
Phone: (650) 506-8127


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 11:48 Bryan Childs
                   ` (3 preceding siblings ...)
  2007-06-04 15:20 ` Linus Torvalds
@ 2007-06-04 23:46 ` Jakub Narebski
  2007-06-06 22:34   ` Jakub Narebski
  4 siblings, 1 reply; 25+ messages in thread
From: Jakub Narebski @ 2007-06-04 23:46 UTC (permalink / raw)
  To: git

Bryan Childs wrote:

> 3) With a central repository, for which we have a limited number of
> individuals having commit access, it's easy for us to automate a build
> based on each commit the repository receives.

Check out contrib/continuous/ scripts in git repository: you would have
to enable it only on one machine, of course.
-- 
Jakub Narebski
Warsaw, Poland
ShadeHawk on #git


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 15:38   ` Bryan Childs
  2007-06-04 16:23     ` Linus Torvalds
  2007-06-04 22:29     ` Martin Langhoff
@ 2007-06-04 23:48     ` Daniel Barkalow
  2007-06-05  0:21       ` Linus Torvalds
  2 siblings, 1 reply; 25+ messages in thread
From: Daniel Barkalow @ 2007-06-04 23:48 UTC (permalink / raw)
  To: Bryan Childs; +Cc: Linus Torvalds, git

On Mon, 4 Jun 2007, Bryan Childs wrote:

> On 6/4/07, Linus Torvalds <torvalds@linux-foundation.org> wrote:
> > So I *hope* that you want to just have automated build machinery that
> > builds the binaries to a *separate* location? You could use git to archive
> > them, and you can obviously (and easily) name the resulting binary blobs
> > by the versions in the source tree, but I'm just saying that trying to
> > track the binaries from within the same git repository as the source code
> > is less than optimal.
> 
> Oh lord no - I never meant to imply that we'd be checking those
> binaries in, I just meant to highlight that we need a central
> repository to build those binaries from - otherwise we'd end up with a
> selection of binaries for our users to download which contain a bunch
> of different features if they were built from a combination of
> repositories. I know you think everyone else is a moron, but we're not
> quite dumb enough to think maintaining binaries in a repository is a
> good idea :)

Actually, I've been playing with using git's data-distribution mechanism 
to distribute generated binaries. You can do tags for arbitrary binary 
content (not in a tree or commit), and, if you have some way of finding 
the right tag name, you can fetch that and extract it.
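A minimal sketch of tagging a bare blob like that (the tag and file names are made up):

```shell
set -e
repo=$(mktemp -d); cd "$repo"; git init -q .
printf 'firmware-bytes' > image.bin
sha=$(git hash-object -w image.bin)     # write the file into the object store as a blob
git tag firmware-2007-06 "$sha"         # lightweight tag pointing straight at the blob
git cat-file blob firmware-2007-06 > restored.bin   # fetch it back by tag name
cmp image.bin restored.bin && echo "round-trip ok"
```

Note the blob never appears in any tree or commit; the tag ref is the only thing keeping it alive and findable.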

I came up with this at my job when we were trying to decide what to do 
with firmware images that we'd shipped, so that we'd be able to examine 
them again even if we lose the compiler version we used at the time. We 
needed an immutable data store with a mapping of tags to objects, and I 
realized that we already had something with these exact characteristics.

	-Daniel
*This .sig left intentionally blank*


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 23:48     ` Daniel Barkalow
@ 2007-06-05  0:21       ` Linus Torvalds
  2007-06-05  1:42         ` david
  0 siblings, 1 reply; 25+ messages in thread
From: Linus Torvalds @ 2007-06-05  0:21 UTC (permalink / raw)
  To: Daniel Barkalow; +Cc: Bryan Childs, git



On Mon, 4 Jun 2007, Daniel Barkalow wrote:
> 
> Actually, I've been playing with using git's data-distribution mechanism 
> to distribute generated binaries. You can do tags for arbitrary binary 
> content (not in a tree or commit), and, if you have some way of finding 
> the right tag name, you can fetch that and extract it.

Yes, I think git should be very nice for doing binary stuff like firmware 
images too, my only worry is literally about "mixing it in" with other 
stuff.

Putting lots of binary blobs into a git archive should work fine: but 
if you would then start tying them together (with a commit chain), it just 
means that even if you only really want _one_ of them, you end up getting 
them all, which sounds like a potential disaster.

On the other hand, if you actually want a way to really *archive* the dang 
things, that may well be what you actually want. In that case, having a 
separate branch that only contains the binary stuff might actually be what 
you want to do (and depending on the kind of binary data you have, the 
delta algorithm might even be good at finding common data sequences and 
compressing it).

> I came up with this at my job when we were trying to decide what to do 
> with firmware images that we'd shipped, so that we'd be able to examine 
> them again even if we lose the compiler version we used at the time. We 
> needed an immutable data store with a mapping of tags to objects, and I 
> realized that we already had something with these exact characteristics.

Yeah, if you just tag individual blobs, git will keep track of them, but 
won't link them together, so you can easily just look up and fetch a 
single one from such an archive. Sounds sane enough.

		Linus


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-05  0:21       ` Linus Torvalds
@ 2007-06-05  1:42         ` david
  2007-06-05  3:58           ` Linus Torvalds
  0 siblings, 1 reply; 25+ messages in thread
From: david @ 2007-06-05  1:42 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: Daniel Barkalow, Bryan Childs, git

On Mon, 4 Jun 2007, Linus Torvalds wrote:

> On Mon, 4 Jun 2007, Daniel Barkalow wrote:
>>
>> Actually, I've been playing with using git's data-distribution mechanism
>> to distribute generated binaries. You can do tags for arbitrary binary
>> content (not in a tree or commit), and, if you have some way of finding
>> the right tag name, you can fetch that and extract it.
>
> Yes, I think git should be very nice for doing binary stuff like firmware
> images too, my only worry is literally about "mixing it in" with other
> stuff.
>
> Putting lots of binary blobs into a git archive should work fine: but
> if you would then start tying them together (with a commit chain), it just
> means that even if you only really want _one_ of them, you end up getting
> them all, which sounds like a potential disaster.

if you put the binaries in a separate repository and do shallow clones to 
avoid getting all the old stuff, wouldn't that work well?
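As a sketch of that, against a throwaway local repository:

```shell
set -e
src=$(mktemp -d); cd "$src"; git init -q .
g() { git -c user.email=a@b -c user.name=a "$@"; }   # throwaway identity
for i in 1 2 3; do
  g commit -q --allow-empty -m "binary drop $i"
done
dst=$(mktemp -d)
# depth-1 clone: only the newest commit's history comes across
git clone -q --depth 1 "file://$src" "$dst/shallow"
git -C "$dst/shallow" rev-list --count HEAD   # 1, not 3
```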

David Lang

> On the other hand, if you actually want a way to really *archive* the dang
> things, that may well be what you actually want. In that case, having a
> separate branch that only contains the binary stuff might actually be what
> you want to do (and depending on the kind of binary data you have, the
> delta algorithm might even be good at finding common data sequences and
> compressing it).
>
>> I came up with this at my job when we were trying to decide what to do
>> with firmware images that we'd shipped, so that we'd be able to examine
>> them again even if we lose the compiler version we used at the time. We
>> needed an immutable data store with a mapping of tags to objects, and I
>> realized that we already had something with these exact characteristics.
>
> Yeah, if you just tag individual blobs, git will keep track of them, but
> won't link them together, so you can easily just look up and fetch a
> single one from such an archive. Sounds sane enough.
>
> 		Linus


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 21:21           ` Olivier Galibert
  2007-06-04 21:33             ` Linus Torvalds
@ 2007-06-05  2:56             ` Johannes Schindelin
  1 sibling, 0 replies; 25+ messages in thread
From: Johannes Schindelin @ 2007-06-05  2:56 UTC (permalink / raw)
  To: Olivier Galibert; +Cc: Linus Torvalds, Thomas Glanzmann, Bryan Childs, git

Hi,

On Mon, 4 Jun 2007, Olivier Galibert wrote:

> On Mon, Jun 04, 2007 at 01:45:26PM -0700, Linus Torvalds wrote:
>
> > I'd like to point out some more upsides and downsides of "git rebase".
> > 
> > Downsides:
> > 
> >  - you're rewriting history, so you MUST NOT have made your pre-rebase 
> >    changes available publicly anywhere else (or you are in a world of 
> >    pain with duplicate history and tons of confusion)
> 
> Wouldn't it be possible to register the rebase somewhere (weak parent? 
> some kind of note not influencing the sha1 ?) that pull/merge could 
> follow?

Actually, with reflogs (if you did not explicitly disable them), you 
should have the information already.

> Rebases and cherry-picking are a special kind of merge, so maybe it can 
> be handled like one where it counts...

There is something I have to add as a real disadvantage in rebase:

Usually you are expected to test your commits. So, say that you work on 
some patch series, and produce 3 well tested patches. Then you fetch 
upstream and realize it advanced by some commits, and rebase your three 
patches.

However, _none_ of your patches is well tested, because there is a very 
real chance that your patches interact _badly_ with the patches you just 
fetched.

And if that is the case, git-bisect can very well attribute it to a wrong 
patch, either because more than one patch is bad, or because the last 
patch in your series _exposes_ the bug (but does not _introduce_ it).

Ciao,
Dscho


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-05  1:42         ` david
@ 2007-06-05  3:58           ` Linus Torvalds
  0 siblings, 0 replies; 25+ messages in thread
From: Linus Torvalds @ 2007-06-05  3:58 UTC (permalink / raw)
  To: david; +Cc: Daniel Barkalow, Bryan Childs, git



On Mon, 4 Jun 2007, david@lang.hm wrote:
> 
> if you put the binaries in a separate repository and do shallow clones to
> avoid getting all the old stuff wouldn't that work well?

Yes. I'm not a huge fan of shallow clones, and I suspect they've not 
gotten all that much testing, but that would certainly solve the problem 
of fetching more data than necessary.

		Linus


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 22:30               ` Joel Becker
@ 2007-06-05 11:19                 ` Theodore Tso
  0 siblings, 0 replies; 25+ messages in thread
From: Theodore Tso @ 2007-06-05 11:19 UTC (permalink / raw)
  To: Joel Becker
  Cc: Linus Torvalds, Olivier Galibert, Thomas Glanzmann, Bryan Childs,
	git

On Mon, Jun 04, 2007 at 03:30:03PM -0700, Joel Becker wrote:
> 	It survives because it is well-known.  Everyone expects it to
> break.  ocfs2 has an "ALL" branch that is everything we have working,
> sort of a "test this bleeding edge" thing.  It gets rebased all the
> time, and everyone knows that they can't trust it to update linearly.
> Other developers have similar things in their repositories.

I wonder if it would be useful to be able to flag a
branch as "jumping around a lot", where this flag would be
downloaded from another repository when it is cloned, so that a naive
user could get some kind of warning before committing a patch on top
of one of these branches that is known to jump around.

	"This branch gets rebased all the time and is really meant for
	testing.  If you really want to commit this changeset, please
	configure yourself for expert mode or use the --force."

Or maybe just a warning, ala what we do with detached heads.

						- Ted


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-04 23:46 ` Jakub Narebski
@ 2007-06-06 22:34   ` Jakub Narebski
  0 siblings, 0 replies; 25+ messages in thread
From: Jakub Narebski @ 2007-06-06 22:34 UTC (permalink / raw)
  To: git

Jakub Narebski wrote:

> Bryan Childs wrote:
> 
>> 3) With a central repository, for which we have a limited number of
>> individuals having commit access, it's easy for us to automate a build
>> based on each commit the repository receives.
> 
> Check out contrib/continuous/ scripts in git repository: you would have
> to enable it only on one machine, of course.

You can also use something similar to the dodoc.sh script in the 'todo'
branch of the git repository, which generates some build results and saves
them in a _separate_ branch of the repository.

-- 
Jakub Narebski
Warsaw, Poland
ShadeHawk on #git


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
@ 2007-06-07  4:36 linux
  2007-06-07  7:57 ` Bryan Childs
  0 siblings, 1 reply; 25+ messages in thread
From: linux @ 2007-06-07  4:36 UTC (permalink / raw)
  To: git, godeater

There's no reason that git can't do everything you have svn doing.
What SVN calls "commit access" is what git refers to as "push access",
but it's exactly the same thing.  I don't see how it's the tiniest bit
more difficult.

The only difference is that in git, it's a two-stage process: you commit
locally, and then push that commit (or, more commonly, a whole chain of
commits) when it's ready.  But to the receiving repository, it's just
another commit.

You can have a central server, just like you have with SVN, and when it
gets new versions, it can auto-build them and do whatever it likes with
the binaries.  (Including stick them in the same git repository, or
a different one.)

There's no need for such a central repository to be "owned" by one particular
person.  You can have a shared repository with multiple people having commit
access.  Linus likes to keep very tight security over his master repository
and only pull, but git supports a shared-access repository just fine.

The only thing, and it's not a very big thing, is that if you want
fine-grained access control, you have to implement it yourself via the
pre-receive hook rather than having a canned implementation ready.
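A minimal sketch of such a pre-receive hook. The PUSHER variable and the allowed-masters file are invented for illustration; a real setup would key off the ssh login or similar:

```shell
set -e
dir=$(mktemp -d)
cat > "$dir/pre-receive" <<'EOF'
#!/bin/sh
# stdin: one "<old-sha> <new-sha> <refname>" line per updated ref
while read old new ref; do
  case "$ref" in
    refs/heads/master)
      grep -qx "$PUSHER" allowed-masters || {
        echo "pusher $PUSHER may not update master" >&2
        exit 1
      };;
  esac
done
EOF
chmod +x "$dir/pre-receive"
cd "$dir"
echo alice > allowed-masters
echo "0 1 refs/heads/master" | PUSHER=alice ./pre-receive && echo "alice ok"
echo "0 1 refs/heads/master" | PUSHER=bob   ./pre-receive || echo "bob rejected"
```

Because the hook exits non-zero, the whole push is refused before any refs are updated.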


As for making a binary of every commit, git encourages a slightly
different workflow:
- Because commits are very easy, and private (until pushed), you're
  encouraged to make lots of small commits.  I used to hold off committing to
  CVS if working on a big patch.  Now, I use git freely on a private branch
  to keep track of my own hacking.
  (git-gui is nice for encouraging me to commit frequently.  As I make
  edits, I write the commit message and watch the patch grow.  When it
  gets big enough, click "commit" and keep going.  I have a perfectly good
  memory, but after three phone calls, two "just a quick question"s and
  an impromptu meeting, the notes make it quicker to get back into it.)
- If you later decide the commits aren't something you want to show the world,
  then don't.  You can cherry-pick the good ideas and kill the lousy ones.
- The simplest example of this is "git commit --amend".  Git lets you
  commit before testing, and if you find some stupid typo that prevents
  the code from even compiling, you can just fix it and re-do the commit.
  *Poof*, your embarrassing mistake just disappeared.
  (When learning git merges, it took me a long time to get over my fear of
  committing a mis-merge.  With git, it doesn't matter; it's just as
  easy to undo a commit as to do one, as long as you haven't published
  the results.)

- On the other hand, if you want to enjoy the full benefits of git-bisect,
  which can let J. Random Bug-submitter find the commit that caused a
  regression while you eat chilled grapes on the beach, you want both
  small commits and commits that don't break the build.  So cleaning up
  your history before publishing can be a very worthwhile effort.
  This is a step that many people aren't used to doing, and you don't
  need to force it on your developers.  Linus has long required such
  efforts, to make code review easier, but there are different traditions.
  But it really does make tracking down bugs a lot easier.
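As a sketch of the "git commit --amend" fix-up described above (file and message names illustrative):

```shell
set -e
repo=$(mktemp -d); cd "$repo"; git init -q .
g() { git -c user.email=a@b -c user.name=a "$@"; }    # throwaway identity
echo 'int main(void){return 0}' > hello.c             # oops: missing semicolon
g add hello.c
g commit -q -m "add hello"
echo 'int main(void){return 0;}' > hello.c            # fix the typo...
g add hello.c
g commit -q --amend -m "add hello"                    # ...and fold it into the same commit
git rev-list --count HEAD                             # still a single, clean commit
```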


Anyway, because of the small-commit tendency, you might want to only
build one binary per push, not one binary per version.  (Oh, I should
note that it is perfectly legal to push an old version that the receiving
repository already has.  It has no effect on the repository, but you
could have it tickle your autobuilder.  Check with someone who knows
whether git even runs the commit hooks in that case, though.)

But you can do whatever.  git-archive is a useful little tool for getting
source snapshots to compile.
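A minimal sketch of using git-archive to hand a clean snapshot to a build script (file names illustrative):

```shell
set -e
repo=$(mktemp -d); cd "$repo"; git init -q .
echo 'all: ; @echo building' > Makefile
git add Makefile
git -c user.email=a@b -c user.name=a commit -q -m "add build file"
build=$(mktemp -d)
# export exactly what HEAD contains, with no .git directory in the way
git archive --prefix=src/ HEAD | tar -xf - -C "$build"
ls "$build/src"    # Makefile
```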

Once you've built the binary, you can, if you like, put it into a git
branch by itself.  You could even put it in the same repository as the
sources, with a totally disjoint history; but if you never intend
to merge the branches, that just complicates your life and increases
the chance that somebody will clone that branch.  It makes more sense
to use a separate repository.


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-07  4:36 Git Vs. Svn for a project which *must* distribute binaries too linux
@ 2007-06-07  7:57 ` Bryan Childs
  2007-06-07 16:51   ` linux
  0 siblings, 1 reply; 25+ messages in thread
From: Bryan Childs @ 2007-06-07  7:57 UTC (permalink / raw)
  To: linux@horizon.com; +Cc: git

On 7 Jun 2007 00:36:32 -0400, linux@horizon.com <linux@horizon.com> wrote:
> There's no reason that git can't do everything you have svn doing.
> What SVN calls "commit access" is what git refers to as "push access",
> but it's exactly the same thing.  I don't see how it's the tiniest bit
> more difficult.

<snip>

Thanks for this extremely lengthy and informative reply - it's answered
all our concerns, and we may well move to git sooner rather than later
now!

Bryan


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-07  7:57 ` Bryan Childs
@ 2007-06-07 16:51   ` linux
  2007-06-08 20:41     ` Jan Hudec
  0 siblings, 1 reply; 25+ messages in thread
From: linux @ 2007-06-07 16:51 UTC (permalink / raw)
  To: godeater, linux; +Cc: git

> Thanks for this extremely lengthy and informative reply - it's answered
> all our concerns, and we may well move to git sooner rather than later
> now!

You're welcome.  You just seemed to be under some misapprehension.
Git can certainly do all the basic things that svn does, and just
as easily.

You'll only have to learn more because you'll want to do more.

There are three big things that you'll want to get used to
when coming from CVS or any other centralized version system:

1) Don't blink, you might miss it.  If you're used to CVS, you might
   wonder whether git actually *did* anything.  Commits, in particular,
   are instantaneous if the necessary data is cached in RAM, and it can
   take a while to learn to trust that everything worked.

2) Commits can be undone.  It can be a bit scary the way a command
   like git-rebase will make a whole bunch of repository changes and
   then maybe get stuck with a patch conflict.  You want to be
   comfortable undoing things, or amending commits if you're
   not happy with what happened.

   This is why novices (and I used to be one) are reassured by the
   existence of "git merge --no-commit", but experienced users don't
   see the point.

   With git, pushing (or asking someone else to pull) is the
   moment of truth.  Committing has no lasting consequences.

3) Branches are your friend.  CVS users think branches are a big
   deal and require careful thought and planning.  Git users branch
   almost as often as CVS users commit.  A typical "big change"
   that might be a single commit in CVS would be a branch of
   several commits in git.

   In fact, a good piece of advice is to NEVER commit directly
   to your trunk ("master").  Do ALL development on branches, and
   merge them into the trunk.

   I cheat on that a lot, but I also know how to fix things if I get
   caught because a quick hack is proving not so quick: add a branch
   reference to the tip I'm developing on and then back up the master
   branch to where I should have left it when I started this project.
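That rescue might look something like this as a sketch (branch names invented for illustration):

```shell
set -e
repo=$(mktemp -d); cd "$repo"; git init -q .
g() { git -c user.email=a@b -c user.name=a "$@"; }   # throwaway identity
g commit -q --allow-empty -m "good released state"
g commit -q --allow-empty -m "quick hack that wasn't so quick"
git branch not-so-quick-hack        # park the current tip under a branch name
git reset -q --hard HEAD~1          # rewind the main branch to where it should be
git log --oneline                   # main branch: back at "good released state"
git log --oneline not-so-quick-hack # the hack is still reachable on its own branch
```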


* Re: Git Vs. Svn for a project which *must* distribute binaries too.
  2007-06-07 16:51   ` linux
@ 2007-06-08 20:41     ` Jan Hudec
  0 siblings, 0 replies; 25+ messages in thread
From: Jan Hudec @ 2007-06-08 20:41 UTC (permalink / raw)
  To: linux; +Cc: godeater, git


On Thu, Jun 07, 2007 at 12:51:50 -0400, linux@horizon.com wrote:
> 3) Branches are your friend.  CVS users think branches are a big
>    deal and require careful thought and planning.  Git users branch
>    almost as often as CVS users commit.  A typical "big change"
>    that might be a single commit in CVS would be a branch of
>    several commits in git.
> 
>    In fact, a good piece of advice is to NEVER commit directly
>    to your trunk ("master").  Do ALL development on branches, and
>    merge them into the trunk.
> 
>    I cheat on that a lot, but I also know how to fix things if I get
>    caught because a quick hack is proving not so quick: add a branch
>    reference to the tip I'm developing on and then back up the master
>    branch to where I should have left it when I started this project.

There is a big difference between the CVS and Subversion notion of branches
and the git one, which makes branches so much more friendly in git.

In CVS and Subversion, the branch name is part of the commit identity, so you
have to create it before you commit, and it will stay with you forever. That
means you have to plan the branch, because there's no going back.

On the other hand, git branch (head) names are just pointers to the revisions
you base your work on. You can add a branch name after you commit, you can
rename the branch anytime, and you can delete branches that are no longer
interesting, either because they are already merged or because they didn't
work out. That means you don't have to think twice about whether you need a
branch before you commit, since you can always change your mind later.
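A quick sketch of how cheap those pointer operations are (branch names are made up):

```shell
set -e
repo=$(mktemp -d); cd "$repo"; git init -q .
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "experiment"
git branch silly-name             # name the current tip after the fact
git branch -m silly-name keeper   # rename it whenever you like
git branch -d keeper              # drop it once it's merged (or never needed)
git branch --list keeper          # prints nothing: the pointer is gone
```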

This makes it possible to use heaps of short-lived branches for
experimenting and to give them silly names, because no one cares if you
don't publish them. You don't need to publish, and usually won't, until you
are confident that you are working in the right direction (at which point
you have a much better idea of what name to publish them under).

-- 
						 Jan 'Bulb' Hudec <bulb@ucw.cz>


