git.vger.kernel.org archive mirror
From: Ramkumar Ramachandra <artagnon@gmail.com>
To: Jakub Narebski <jnareb@gmail.com>
Cc: Nguyen Thai Ngoc Duy <pclouds@gmail.com>,
	git <git@vger.kernel.org>,
	computerdruid <computerdruid@gmail.com>, joey <joey@kitenet.net>,
	Jonathan Nieder <jrnieder@gmail.com>,
	Johannes Sixt <j.sixt@viscovery.net>
Subject: Re: [PATCH 3/3] {fetch,upload}-pack: allow --depth=0 to deepen into full repo again
Date: Fri, 20 Aug 2010 14:58:51 +0530
Message-ID: <20100820092847.GF12794@kytes>
In-Reply-To: <201008201122.09392.jnareb@gmail.com>

Hi Jakub,

Jakub Narebski writes:
> >> Second, it would be nice (though probably not easy with parseopt, as
> >> it would require hacks/extensions) to be able to use --depth=inf
> >> (like wget supports '-l inf') to mean infinite depth...
> > 
> > Hmm.. make --depth a string parameter and fetch-pack should parse the
> > parameter itself, like git-clone. Good idea.
> 
> If there were more options that use <n> == 0 to actually mean unlimited
> (infinity), perhaps it would be better to extend parseopt to provide for
> such situation, e.g. OPT_INT_INF or something.  This way we would avoid
> code duplication.
> 
> ... oh, wait, the newly introduced[1] git-merge `--log-limit' option
> uses --log-limit=0 to mean unlimited.
> 
> [1] http://permalink.gmane.org/gmane.comp.version-control.git/153984
>     Message-ID: <20100820081641.GA32127@burratino>
>     Subject: Re: wishlist bugreport: make limit configurable for do_fmt_merge_msg (merge.log)
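
For the "inf" spelling itself, the parsing is trivial; something along
these lines would do (a sketch only: parse_depth_or_inf() is a made-up
helper, and parse-options has no OPT_INT_INF today):

#include <limits.h>
#include <stdlib.h>
#include <string.h>

/*
 * Accept a decimal depth or the literal "inf".  Returns 0 on success
 * and -1 on a malformed argument; "inf" maps to a depth of -1, which
 * the caller would treat as "unlimited".
 */
static int parse_depth_or_inf(const char *arg, int *depth)
{
        char *end;
        long v;

        if (!strcmp(arg, "inf")) {
                *depth = -1;
                return 0;
        }
        v = strtol(arg, &end, 10);
        if (end == arg || *end || v < 0 || v > INT_MAX)
                return -1;
        *depth = (int)v;
        return 0;
}

Whatever option name this ends up under, the caller would simply treat
a negative depth as "no limit".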

That thread is just outdated by a few seconds: Johannes suggested that we
reuse merge.log, making it a bool_or_int option. What about using -1 to
mean infinity and reserving 0 for false instead?
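
Something like this, in the usual config-callback shape (again just a
sketch; shortlog_len and DEFAULT_LOG_LIMIT are illustrative names, not
what any posted patch uses):

#include "cache.h"  /* git_config_bool_or_int(), git_default_config() */

#define DEFAULT_LOG_LIMIT 20    /* whatever a bare "merge.log = true" should mean */

static int shortlog_len;        /* -1 = unlimited, 0 = off, N = cap at N */

static int merge_log_config(const char *var, const char *value, void *cb)
{
        if (!strcmp(var, "merge.log")) {
                int is_bool;
                int v = git_config_bool_or_int(var, value, &is_bool);

                if (is_bool)
                        shortlog_len = v ? DEFAULT_LOG_LIMIT : 0;
                else if (v < 0)
                        shortlog_len = -1;      /* unlimited */
                else
                        shortlog_len = v;       /* 0 stays "off" */
                return 0;
        }
        return git_default_config(var, value, cb);
}

That keeps "merge.log = 0" and "merge.log = false" synonymous, and
leaves -1 free to mean "no limit".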

-- Ram


Thread overview: 16+ messages
2010-08-17  0:49 fully deepening a shallow clone Joey Hess
2010-08-18  9:36 ` Nguyen Thai Ngoc Duy
2010-08-18 12:54   ` Daniel Johnson
2010-08-19 10:40     ` [PATCH 1/3] clone: do not accept --depth on local clones Nguyễn Thái Ngọc Duy
2010-08-19 14:31       ` Daniel Johnson
2010-08-19 22:15         ` Nguyen Thai Ngoc Duy
2010-08-19 20:49       ` Mikael Magnusson
2010-08-19 10:40     ` [PATCH 2/3] fetch-pack: use args.shallow to detect shallow clone instead of args.depth Nguyễn Thái Ngọc Duy
2010-08-19 10:40     ` [PATCH 3/3] {fetch,upload}-pack: allow --depth=0 to deepen into full repo again Nguyễn Thái Ngọc Duy
2010-08-19 21:22       ` Jakub Narebski
2010-08-19 22:11         ` Nguyen Thai Ngoc Duy
2010-08-20  9:22           ` Jakub Narebski
2010-08-20  9:28             ` Ramkumar Ramachandra [this message]
2010-08-20 11:55             ` [PATCH] grep -A/-B/-Cinfinity to get full context Jonathan Nieder
2010-08-20 13:32               ` Ramkumar Ramachandra
2010-08-18 15:48   ` fully deepening a shallow clone Joey Hess
