git.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v2] bulk-checkin: only support blobs in index_bulk_checkin
@ 2023-09-20  3:52 Eric W. Biederman
  2023-09-20  6:59 ` Junio C Hamano
  0 siblings, 1 reply; 12+ messages in thread
From: Eric W. Biederman @ 2023-09-20  3:52 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: brian m. carlson, git


As the code is written today index_bulk_checkin only accepts blobs.
Remove the enum object_type parameter and rename index_bulk_checkin to
index_blob_bulk_checkin, index_stream to index_blob_stream,
deflate_to_pack to deflate_blob_to_pack, stream_to_pack to
stream_blobk_to_pack, to make this explicit.

Not supporting commits, tags, or trees has no downside as it is not
currently supported now, and commits, tags, and trees being smaller by
design do not have the problem that the problem that index_bulk_checkin
was built to solve.

What is more this is very desiable from the context of the hash function
transition.

For blob objects it is straight forward to compute multiple hash
functions during index_bulk_checkin as the object header and content of
a blob is the same no matter which hash function is being used to
compute the oid of a blob.

For commits, tress, and tags the object header and content that need to
be hashed ard different for different hashes.  Even worse the object
header can not be known until the size of the content that needs to be
hashed is known.  The size of the content that needs to be hashed can
not be known until a complete pass is made through all of the variable
length entries of the original object.

As far as I can tell this extra pass defeats most of the purpose of
streaming, and it is much easier to implement with in memory buffers.

So if it is needed to write commits, trees, and tags directly to pack
files writing a separate function to do the would be needed.

So let's just simplify the code base for now, simplify the development
needed for the hash function transition and only support blobs with the
existing bulk_checkin code.

Inspired-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
---
 bulk-checkin.c | 35 +++++++++++++++++------------------
 bulk-checkin.h |  6 +++---
 object-file.c  | 12 ++++++------
 3 files changed, 26 insertions(+), 27 deletions(-)

This is just a v2 of the description, that addresses Junio's
capitalization concern, and hopefully makes the justification clear to
other people.

I am sending it now mostly because the original version did not
land on the mailing list for some reason.  So I have switched
which email account I am using for now.

diff --git a/bulk-checkin.c b/bulk-checkin.c
index 73bff3a23d27..223562b4e748 100644
--- a/bulk-checkin.c
+++ b/bulk-checkin.c
@@ -155,10 +155,10 @@ static int already_written(struct bulk_checkin_packfile *state, struct object_id
  * status before calling us just in case we ask it to call us again
  * with a new pack.
  */
-static int stream_to_pack(struct bulk_checkin_packfile *state,
-			  git_hash_ctx *ctx, off_t *already_hashed_to,
-			  int fd, size_t size, enum object_type type,
-			  const char *path, unsigned flags)
+static int stream_blob_to_pack(struct bulk_checkin_packfile *state,
+			       git_hash_ctx *ctx, off_t *already_hashed_to,
+			       int fd, size_t size, const char *path,
+			       unsigned flags)
 {
 	git_zstream s;
 	unsigned char ibuf[16384];
@@ -170,7 +170,7 @@ static int stream_to_pack(struct bulk_checkin_packfile *state,
 
 	git_deflate_init(&s, pack_compression_level);
 
-	hdrlen = encode_in_pack_object_header(obuf, sizeof(obuf), type, size);
+	hdrlen = encode_in_pack_object_header(obuf, sizeof(obuf), OBJ_BLOB, size);
 	s.next_out = obuf + hdrlen;
 	s.avail_out = sizeof(obuf) - hdrlen;
 
@@ -247,11 +247,10 @@ static void prepare_to_stream(struct bulk_checkin_packfile *state,
 		die_errno("unable to write pack header");
 }
 
-static int deflate_to_pack(struct bulk_checkin_packfile *state,
-			   struct object_id *result_oid,
-			   int fd, size_t size,
-			   enum object_type type, const char *path,
-			   unsigned flags)
+static int deflate_blob_to_pack(struct bulk_checkin_packfile *state,
+				struct object_id *result_oid,
+				int fd, size_t size,
+				const char *path, unsigned flags)
 {
 	off_t seekback, already_hashed_to;
 	git_hash_ctx ctx;
@@ -265,7 +264,7 @@ static int deflate_to_pack(struct bulk_checkin_packfile *state,
 		return error("cannot find the current offset");
 
 	header_len = format_object_header((char *)obuf, sizeof(obuf),
-					  type, size);
+					  OBJ_BLOB, size);
 	the_hash_algo->init_fn(&ctx);
 	the_hash_algo->update_fn(&ctx, obuf, header_len);
 
@@ -282,8 +281,8 @@ static int deflate_to_pack(struct bulk_checkin_packfile *state,
 			idx->offset = state->offset;
 			crc32_begin(state->f);
 		}
-		if (!stream_to_pack(state, &ctx, &already_hashed_to,
-				    fd, size, type, path, flags))
+		if (!stream_blob_to_pack(state, &ctx, &already_hashed_to,
+					 fd, size, path, flags))
 			break;
 		/*
 		 * Writing this object to the current pack will make
@@ -350,12 +349,12 @@ void fsync_loose_object_bulk_checkin(int fd, const char *filename)
 	}
 }
 
-int index_bulk_checkin(struct object_id *oid,
-		       int fd, size_t size, enum object_type type,
-		       const char *path, unsigned flags)
+int index_blob_bulk_checkin(struct object_id *oid,
+			    int fd, size_t size,
+			    const char *path, unsigned flags)
 {
-	int status = deflate_to_pack(&bulk_checkin_packfile, oid, fd, size, type,
-				     path, flags);
+	int status = deflate_blob_to_pack(&bulk_checkin_packfile, oid, fd, size,
+					  path, flags);
 	if (!odb_transaction_nesting)
 		flush_bulk_checkin_packfile(&bulk_checkin_packfile);
 	return status;
diff --git a/bulk-checkin.h b/bulk-checkin.h
index 48fe9a6e9171..aa7286a7b3e1 100644
--- a/bulk-checkin.h
+++ b/bulk-checkin.h
@@ -9,9 +9,9 @@
 void prepare_loose_object_bulk_checkin(void);
 void fsync_loose_object_bulk_checkin(int fd, const char *filename);
 
-int index_bulk_checkin(struct object_id *oid,
-		       int fd, size_t size, enum object_type type,
-		       const char *path, unsigned flags);
+int index_blob_bulk_checkin(struct object_id *oid,
+			    int fd, size_t size,
+			    const char *path, unsigned flags);
 
 /*
  * Tell the object database to optimize for adding
diff --git a/object-file.c b/object-file.c
index 7dc0c4bfbba8..7c7afe579364 100644
--- a/object-file.c
+++ b/object-file.c
@@ -2446,11 +2446,11 @@ static int index_core(struct index_state *istate,
  * binary blobs, they generally do not want to get any conversion, and
  * callers should avoid this code path when filters are requested.
  */
-static int index_stream(struct object_id *oid, int fd, size_t size,
-			enum object_type type, const char *path,
-			unsigned flags)
+static int index_blob_stream(struct object_id *oid, int fd, size_t size,
+			     const char *path,
+			     unsigned flags)
 {
-	return index_bulk_checkin(oid, fd, size, type, path, flags);
+	return index_blob_bulk_checkin(oid, fd, size, path, flags);
 }
 
 int index_fd(struct index_state *istate, struct object_id *oid,
@@ -2472,8 +2472,8 @@ int index_fd(struct index_state *istate, struct object_id *oid,
 		ret = index_core(istate, oid, fd, xsize_t(st->st_size),
 				 type, path, flags);
 	else
-		ret = index_stream(oid, fd, xsize_t(st->st_size), type, path,
-				   flags);
+		ret = index_blob_stream(oid, fd, xsize_t(st->st_size), path,
+					flags);
 	close(fd);
 	return ret;
 }
-- 
2.41.0

Eric

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH v2] bulk-checkin: only support blobs in index_bulk_checkin
  2023-09-20  3:52 [PATCH v2] bulk-checkin: only support blobs in index_bulk_checkin Eric W. Biederman
@ 2023-09-20  6:59 ` Junio C Hamano
  2023-09-20 12:24   ` Eric W. Biederman
  0 siblings, 1 reply; 12+ messages in thread
From: Junio C Hamano @ 2023-09-20  6:59 UTC (permalink / raw)
  To: Eric W. Biederman; +Cc: brian m. carlson, git

"Eric W. Biederman" <ebiederm@gmail.com> writes:

> As the code is written today index_bulk_checkin only accepts blobs.
> Remove the enum object_type parameter and rename index_bulk_checkin to
> index_blob_bulk_checkin, index_stream to index_blob_stream,
> deflate_to_pack to deflate_blob_to_pack, stream_to_pack to
> stream_blobk_to_pack, to make this explicit.

> Not supporting commits, tags, or trees has no downside as it is not
> currently supported now, and commits, tags, and trees being smaller by
> design do not have the problem that the problem that index_bulk_checkin
> was built to solve.

Exactly.  The streaming was primarily to help dealing with huge
blobs that cannot be held in-core.  Of course other parts (like
comparing them) of the system would require to hold them in-core
so some things may not work for them, but at least it is a start
to be able to _hash_ them to store them in the object store and to
give them names.

> What is more this is very desiable from the context of the hash function
> transition.

A bit hard to parse; perhaps want a comma before "this"?

> For blob objects it is straight forward to compute multiple hash
> functions during index_bulk_checkin as the object header and content of
> a blob is the same no matter which hash function is being used to
> compute the oid of a blob.

OK.

> For commits, tress, and tags the object header and content that need to
> be hashed ard different for different hashes.  Even worse the object
> header can not be known until the size of the content that needs to be
> hashed is known.  The size of the content that needs to be hashed can
> not be known until a complete pass is made through all of the variable
> length entries of the original object.

"tress" -> "trees".  Also a comma after "worse".

> As far as I can tell this extra pass defeats most of the purpose of
> streaming, and it is much easier to implement with in memory buffers.

The purpose of streaming being the ability to hash and compute the
object name without having to hold the entirety of the object, I am
not sure the above is a good argument.  You can run multiple passes
by streaming the same data twice if you needed to, and how much
easier the implementation may become if you can assume that you can
hold everything in-core, what you cannot fit in-core would not fit
in-core, so ...

> So if it is needed to write commits, trees, and tags directly to pack
> files writing a separate function to do the would be needed.

But I am OK with this conclusion.  As the way to compute the
fallback hashes for different types of objects are very different,
compared to a single-hash world where as long as you come up with a
serialization you have only a single way to hash and name the
object.  We would end up having separate helper functions per target
type anyway, even if we kept a single entry point function like
index_stream().  The single entry point function will only be used
to just dispatch to type specific ones, so renaming what we have today
and making it clear they are for "blobs" does make sense.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v2] bulk-checkin: only support blobs in index_bulk_checkin
  2023-09-20  6:59 ` Junio C Hamano
@ 2023-09-20 12:24   ` Eric W. Biederman
  2023-09-26 15:58     ` [PATCH v3] " Eric W. Biederman
  0 siblings, 1 reply; 12+ messages in thread
From: Eric W. Biederman @ 2023-09-20 12:24 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: brian m. carlson, git

Junio C Hamano <gitster@pobox.com> writes:

> "Eric W. Biederman" <ebiederm@gmail.com> writes:
>
>> As far as I can tell this extra pass defeats most of the purpose of
>> streaming, and it is much easier to implement with in memory buffers.
>
> The purpose of streaming being the ability to hash and compute the
> object name without having to hold the entirety of the object, I am
> not sure the above is a good argument.  You can run multiple passes
> by streaming the same data twice if you needed to, and how much
> easier the implementation may become if you can assume that you can
> hold everything in-core, what you cannot fit in-core would not fit
> in-core, so ...

Yes this wording needs to be clarified.

If streaming to handle objects that don't fit in memory is the purpose,
I agree there are slow multi-pass ways to deal with trees, commits and
tags.

If writing directly to the pack is the purpose, using an in-core
buffer for trees, commits, and tags is better.

I will put on the wording on the back burner and see what I come up
with.

>> So if it is needed to write commits, trees, and tags directly to pack
>> files writing a separate function to do the would be needed.
>
> But I am OK with this conclusion.  As the way to compute the
> fallback hashes for different types of objects are very different,
> compared to a single-hash world where as long as you come up with a
> serialization you have only a single way to hash and name the
> object.  We would end up having separate helper functions per target
> type anyway, even if we kept a single entry point function like
> index_stream().  The single entry point function will only be used
> to just dispatch to type specific ones, so renaming what we have today
> and making it clear they are for "blobs" does make sense.

Good.  I am glad I am able to step back and successfully explain the
whys of things.

Eric


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v3] bulk-checkin: only support blobs in index_bulk_checkin
  2023-09-20 12:24   ` Eric W. Biederman
@ 2023-09-26 15:58     ` Eric W. Biederman
  2023-09-26 21:48       ` Junio C Hamano
  2023-09-28  9:39       ` Oswald Buddenhagen
  0 siblings, 2 replies; 12+ messages in thread
From: Eric W. Biederman @ 2023-09-26 15:58 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: brian m. carlson, git


As the code is written today index_bulk_checkin only accepts blobs.
Remove the enum object_type parameter and rename index_bulk_checkin to
index_blob_bulk_checkin, index_stream to index_blob_stream,
deflate_to_pack to deflate_blob_to_pack, stream_to_pack to
stream_blob_to_pack, to make this explicit.

Not supporting commits, tags, or trees has no downside as it is not
currently supported now, and commits, tags, and trees being smaller by
design do not have the problem that the problem that index_bulk_checkin
was built to solve.

Before we start adding code to support the hash function transition
supporting additional objects types in index_bulk_checkin has no real
additional cost, just an extra function parameter to know what the
object type is.  Once we begin the hash function transition this is not
the case.

The hash function transition document specifies that a repository with
compatObjectFormat enabled will compute and store both the SHA-1 and
SHA-256 hash of every object in the repository.

What makes this a challenge is that it is not just an additional hash
over the same object.  Instead the hash function transition document
specifies that the compatibility hash (specified with
compatObjectFormat) be computed over the equivalent object that another
git repository whose storage hash (specified with objectFormat) would
store.  When comparing equivalent repositories built with different
storage hash functions, the oids embedded in objects used to refer to
other objects differ and the location of signatures within objects
differ.

As blob objects have neither oids referring to other objects nor stored
signatures their storage hash and their compatibility hash are computed
over the same object.

The other kinds of objects: trees, commits, and tags, all store oids
referring to other objects.  Signatures are stored in commit and tag
objects.  As oids and the tags to store signatures are not the same size
in repositories built with different storage hashes the size of the
equivalent objects are also different.

A version of index_bulk_checkin that supports more than just blobs when
computing both the SHA-1 and the SHA-256 of every object added would
need a different, and more expensive structure.  The structure is more
expensive because it would be required to temporarily buffering the
equivalent object the compatibility hash needs to be computed over.

A temporary object is needed, because before a hash over an object can
computed it's object header needs to be computed.  One of the members of
the object header is the entire size of the object.  To know the size of
an equivalent object an entire pass over the original object needs to be
made, as trees, commits, and tags are composed of a variable number of
variable sized pieces.  Unfortunately there is no formula to compute the
size of an equivalent object from just the size of the original object.

Avoid all of those future complications by limiting index_bulk_checkin
to only work on blobs.

Inspired-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
---
 bulk-checkin.c | 35 +++++++++++++++++------------------
 bulk-checkin.h |  6 +++---
 object-file.c  | 12 ++++++------
 3 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/bulk-checkin.c b/bulk-checkin.c
index 73bff3a23d27..223562b4e748 100644
--- a/bulk-checkin.c
+++ b/bulk-checkin.c
@@ -155,10 +155,10 @@ static int already_written(struct bulk_checkin_packfile *state, struct object_id
  * status before calling us just in case we ask it to call us again
  * with a new pack.
  */
-static int stream_to_pack(struct bulk_checkin_packfile *state,
-			  git_hash_ctx *ctx, off_t *already_hashed_to,
-			  int fd, size_t size, enum object_type type,
-			  const char *path, unsigned flags)
+static int stream_blob_to_pack(struct bulk_checkin_packfile *state,
+			       git_hash_ctx *ctx, off_t *already_hashed_to,
+			       int fd, size_t size, const char *path,
+			       unsigned flags)
 {
 	git_zstream s;
 	unsigned char ibuf[16384];
@@ -170,7 +170,7 @@ static int stream_to_pack(struct bulk_checkin_packfile *state,
 
 	git_deflate_init(&s, pack_compression_level);
 
-	hdrlen = encode_in_pack_object_header(obuf, sizeof(obuf), type, size);
+	hdrlen = encode_in_pack_object_header(obuf, sizeof(obuf), OBJ_BLOB, size);
 	s.next_out = obuf + hdrlen;
 	s.avail_out = sizeof(obuf) - hdrlen;
 
@@ -247,11 +247,10 @@ static void prepare_to_stream(struct bulk_checkin_packfile *state,
 		die_errno("unable to write pack header");
 }
 
-static int deflate_to_pack(struct bulk_checkin_packfile *state,
-			   struct object_id *result_oid,
-			   int fd, size_t size,
-			   enum object_type type, const char *path,
-			   unsigned flags)
+static int deflate_blob_to_pack(struct bulk_checkin_packfile *state,
+				struct object_id *result_oid,
+				int fd, size_t size,
+				const char *path, unsigned flags)
 {
 	off_t seekback, already_hashed_to;
 	git_hash_ctx ctx;
@@ -265,7 +264,7 @@ static int deflate_to_pack(struct bulk_checkin_packfile *state,
 		return error("cannot find the current offset");
 
 	header_len = format_object_header((char *)obuf, sizeof(obuf),
-					  type, size);
+					  OBJ_BLOB, size);
 	the_hash_algo->init_fn(&ctx);
 	the_hash_algo->update_fn(&ctx, obuf, header_len);
 
@@ -282,8 +281,8 @@ static int deflate_to_pack(struct bulk_checkin_packfile *state,
 			idx->offset = state->offset;
 			crc32_begin(state->f);
 		}
-		if (!stream_to_pack(state, &ctx, &already_hashed_to,
-				    fd, size, type, path, flags))
+		if (!stream_blob_to_pack(state, &ctx, &already_hashed_to,
+					 fd, size, path, flags))
 			break;
 		/*
 		 * Writing this object to the current pack will make
@@ -350,12 +349,12 @@ void fsync_loose_object_bulk_checkin(int fd, const char *filename)
 	}
 }
 
-int index_bulk_checkin(struct object_id *oid,
-		       int fd, size_t size, enum object_type type,
-		       const char *path, unsigned flags)
+int index_blob_bulk_checkin(struct object_id *oid,
+			    int fd, size_t size,
+			    const char *path, unsigned flags)
 {
-	int status = deflate_to_pack(&bulk_checkin_packfile, oid, fd, size, type,
-				     path, flags);
+	int status = deflate_blob_to_pack(&bulk_checkin_packfile, oid, fd, size,
+					  path, flags);
 	if (!odb_transaction_nesting)
 		flush_bulk_checkin_packfile(&bulk_checkin_packfile);
 	return status;
diff --git a/bulk-checkin.h b/bulk-checkin.h
index 48fe9a6e9171..aa7286a7b3e1 100644
--- a/bulk-checkin.h
+++ b/bulk-checkin.h
@@ -9,9 +9,9 @@
 void prepare_loose_object_bulk_checkin(void);
 void fsync_loose_object_bulk_checkin(int fd, const char *filename);
 
-int index_bulk_checkin(struct object_id *oid,
-		       int fd, size_t size, enum object_type type,
-		       const char *path, unsigned flags);
+int index_blob_bulk_checkin(struct object_id *oid,
+			    int fd, size_t size,
+			    const char *path, unsigned flags);
 
 /*
  * Tell the object database to optimize for adding
diff --git a/object-file.c b/object-file.c
index 7dc0c4bfbba8..7c7afe579364 100644
--- a/object-file.c
+++ b/object-file.c
@@ -2446,11 +2446,11 @@ static int index_core(struct index_state *istate,
  * binary blobs, they generally do not want to get any conversion, and
  * callers should avoid this code path when filters are requested.
  */
-static int index_stream(struct object_id *oid, int fd, size_t size,
-			enum object_type type, const char *path,
-			unsigned flags)
+static int index_blob_stream(struct object_id *oid, int fd, size_t size,
+			     const char *path,
+			     unsigned flags)
 {
-	return index_bulk_checkin(oid, fd, size, type, path, flags);
+	return index_blob_bulk_checkin(oid, fd, size, path, flags);
 }
 
 int index_fd(struct index_state *istate, struct object_id *oid,
@@ -2472,8 +2472,8 @@ int index_fd(struct index_state *istate, struct object_id *oid,
 		ret = index_core(istate, oid, fd, xsize_t(st->st_size),
 				 type, path, flags);
 	else
-		ret = index_stream(oid, fd, xsize_t(st->st_size), type, path,
-				   flags);
+		ret = index_blob_stream(oid, fd, xsize_t(st->st_size), path,
+					flags);
 	close(fd);
 	return ret;
 }
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH v3] bulk-checkin: only support blobs in index_bulk_checkin
  2023-09-26 15:58     ` [PATCH v3] " Eric W. Biederman
@ 2023-09-26 21:48       ` Junio C Hamano
  2023-09-27  1:38         ` Taylor Blau
  2023-09-28  9:39       ` Oswald Buddenhagen
  1 sibling, 1 reply; 12+ messages in thread
From: Junio C Hamano @ 2023-09-26 21:48 UTC (permalink / raw)
  To: Eric W. Biederman; +Cc: brian m. carlson, git

"Eric W. Biederman" <ebiederm@gmail.com> writes:

> As the code is written today index_bulk_checkin only accepts blobs.
> Remove the enum object_type parameter and rename index_bulk_checkin to
> index_blob_bulk_checkin, index_stream to index_blob_stream,
> deflate_to_pack to deflate_blob_to_pack, stream_to_pack to
> stream_blob_to_pack, to make this explicit.
>
> Not supporting commits, tags, or trees has no downside as it is not
> currently supported now, and commits, tags, and trees being smaller by
> design do not have the problem that the problem that index_bulk_checkin
> was built to solve.
>
> Before we start adding code to support the hash function transition
> supporting additional objects types in index_bulk_checkin has no real
> additional cost, just an extra function parameter to know what the
> object type is.  Once we begin the hash function transition this is not
> the case.
>
> The hash function transition document specifies that a repository with
> compatObjectFormat enabled will compute and store both the SHA-1 and
> SHA-256 hash of every object in the repository.
>
> What makes this a challenge is that it is not just an additional hash
> over the same object.  Instead the hash function transition document
> specifies that the compatibility hash (specified with
> compatObjectFormat) be computed over the equivalent object that another
> git repository whose storage hash (specified with objectFormat) would
> store.  When comparing equivalent repositories built with different
> storage hash functions, the oids embedded in objects used to refer to
> other objects differ and the location of signatures within objects
> differ.
>
> As blob objects have neither oids referring to other objects nor stored
> signatures their storage hash and their compatibility hash are computed
> over the same object.
>
> The other kinds of objects: trees, commits, and tags, all store oids
> referring to other objects.  Signatures are stored in commit and tag
> objects.  As oids and the tags to store signatures are not the same size
> in repositories built with different storage hashes the size of the
> equivalent objects are also different.
>
> A version of index_bulk_checkin that supports more than just blobs when
> computing both the SHA-1 and the SHA-256 of every object added would
> need a different, and more expensive structure.  The structure is more
> expensive because it would be required to temporarily buffering the
> equivalent object the compatibility hash needs to be computed over.
>
> A temporary object is needed, because before a hash over an object can
> computed it's object header needs to be computed.  One of the members of
> the object header is the entire size of the object.  To know the size of
> an equivalent object an entire pass over the original object needs to be
> made, as trees, commits, and tags are composed of a variable number of
> variable sized pieces.  Unfortunately there is no formula to compute the
> size of an equivalent object from just the size of the original object.
>
> Avoid all of those future complications by limiting index_bulk_checkin
> to only work on blobs.

Thanks.  Will queue.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3] bulk-checkin: only support blobs in index_bulk_checkin
  2023-09-26 21:48       ` Junio C Hamano
@ 2023-09-27  1:38         ` Taylor Blau
  2023-09-27  4:08           ` Junio C Hamano
  2023-09-27 20:13           ` Eric W. Biederman
  0 siblings, 2 replies; 12+ messages in thread
From: Taylor Blau @ 2023-09-27  1:38 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Eric W. Biederman, brian m. carlson, git

On Tue, Sep 26, 2023 at 02:48:31PM -0700, Junio C Hamano wrote:
> > Avoid all of those future complications by limiting index_bulk_checkin
> > to only work on blobs.
>
> Thanks.  Will queue.

Hmm. I wonder if retaining some flexibility in the bulk-checkin
mechanism may be worthwhile. We discussed at the Contributor's
Summit[^1] today that the bulk-checkin system may be a good fit for
packing any blobs/trees created by `merge-tree` or `replay` instead of
writing them out as loose objects.

Being able to write trees in addition to blobs is definitely important
there, so we may want to wait on merging this down until that direction
solidifies a bit more. (FWIW, I started working on that today and hope
to have patches on the list in the next day or two).

Alternatively, if there is an urgency to merge these down, we can always
come back to it in the future and revert it if need be. Either way :-).

Thanks,
Taylor

[^1]: I'll clean up our notes in the next day or two and share them with
  the list here.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3] bulk-checkin: only support blobs in index_bulk_checkin
  2023-09-27  1:38         ` Taylor Blau
@ 2023-09-27  4:08           ` Junio C Hamano
  2023-09-27 14:34             ` Taylor Blau
  2023-09-27 20:13           ` Eric W. Biederman
  1 sibling, 1 reply; 12+ messages in thread
From: Junio C Hamano @ 2023-09-27  4:08 UTC (permalink / raw)
  To: Taylor Blau; +Cc: Eric W. Biederman, brian m. carlson, git

Taylor Blau <me@ttaylorr.com> writes:

> Hmm. I wonder if retaining some flexibility in the bulk-checkin
> mechanism may be worthwhile. We discussed at the Contributor's
> Summit[^1] today that the bulk-checkin system may be a good fit for
> packing any blobs/trees created by `merge-tree` or `replay` instead of
> writing them out as loose objects.

But see the last paragraph of my review comments for the earlier
round upthread.  This particular function implements logic that is
only applicable to blob objects, and streaming trees, commits, and
tags will need their own separate helper functions.  And when they
are written, the top-level stream_to_pack() function can be
reintroduced, which will be a thin dispatcher to the four
type-specific helpers.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3] bulk-checkin: only support blobs in index_bulk_checkin
  2023-09-27  4:08           ` Junio C Hamano
@ 2023-09-27 14:34             ` Taylor Blau
  2023-09-27 16:26               ` Junio C Hamano
  0 siblings, 1 reply; 12+ messages in thread
From: Taylor Blau @ 2023-09-27 14:34 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Eric W. Biederman, brian m. carlson, Elijah Newren, git

On Tue, Sep 26, 2023 at 09:08:59PM -0700, Junio C Hamano wrote:
> Taylor Blau <me@ttaylorr.com> writes:
>
> > Hmm. I wonder if retaining some flexibility in the bulk-checkin
> > mechanism may be worthwhile. We discussed at the Contributor's
> > Summit[^1] today that the bulk-checkin system may be a good fit for
> > packing any blobs/trees created by `merge-tree` or `replay` instead of
> > writing them out as loose objects.
>
> But see the last paragraph of my review comments for the earlier
> round upthread.  This particular function implements logic that is
> only applicable to blob objects, and streaming trees, commits, and
> tags will need their own separate helper functions.  And when they
> are written, the top-level stream_to_pack() function can be
> reintroduced, which will be a thin dispatcher to the four
> type-specific helpers.

I am not sure that I follow. If we have an address in memory from which
we want to stream raw bytes directly to the packfile, that should work
for all objects regardless of type, no?

Having stream_to_pack() take a non-OBJ_BLOB 'type' argument would be OK
provided that the file descriptor 'fd' contains the raw contents of an
object which matches type 'type'.

IIUC, for callers like in the ORT backend which assemble e.g. the raw
bytes of a tree in its merge-ort.c::write_tree() function like so:

    for (i = 0; i < nr; i++) {
        struct merged_info *mi = versions->items[offset+i].util;
        struct version_info *ri = &mi->result;

        strbuf_addf(&buf, "%o %s%c", ri->mode,
                    versions->items[offset+i].string, '\0');
        strbuf_add(&buf, ri->oid.hash, hash_size);
    }

we'd want some variant of stream_to_pack() that acts on a 'void *,
size_t' pair rather than an 'int (fd), size_t' pair. Likely its
signature would look something like:

    /* write raw bytes to a bulk-checkin pack */
    static int write_to_pack(struct bulk_checkin_packfile *state,
                             git_hash_ctx *ctx, off_t *already_hashed_to,
                             void *ptr, size_t size, enum object_type type,
                             unsigned flags);

    /* write an object from memory to a bulk-checkin pack */
    static int deflate_to_pack_mem(struct bulk_checkin_packfile *state,
                                   struct object_id *result_oid,
                                   void *ptr, size_t size,
                                   enum object_type type, unsigned flags);

, where the above are analogous to `stream_to_pack()` and
`deflate_to_pack()`, respectively. ORT would be taught to conditionally
replace calls like:

    write_object_file(buf.buf, buf.len, OBJ_TREE, result_oid);

with:

    deflate_to_pack_mem(&state, result_oid, buf.buf, buf.len,
                        OBJ_TREE, HASH_WRITE_OBJECT);

I guess after writing all of that out, you'd never have any callers of
the existing `deflate_to_pack()` function that pass a file descriptor
containing the contents of a non-blob object. So in that sense, I don't
think that my proposal would change anything about this patch.

But I worry that I am missing something here, so having a sanity check
would be appreciated ;-).

Thanks,
Taylor

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3] bulk-checkin: only support blobs in index_bulk_checkin
  2023-09-27 14:34             ` Taylor Blau
@ 2023-09-27 16:26               ` Junio C Hamano
  2023-09-27 20:06                 ` Eric W. Biederman
  0 siblings, 1 reply; 12+ messages in thread
From: Junio C Hamano @ 2023-09-27 16:26 UTC (permalink / raw)
  To: Taylor Blau; +Cc: Eric W. Biederman, brian m. carlson, Elijah Newren, git

Taylor Blau <me@ttaylorr.com> writes:

> I am not sure that I follow. If we have an address in memory from which
> we want to stream raw bytes directly to the packfile, that should work
> for all objects regardless of type, no?

For a single hash world, yes.  For keeping track of "the other hash"
and correspondence, you need to (1) interpret the contents of the
object (e.g., if you received a tree contents for SHA-1 repository,
you'd need to split them into tree entries and know which parts of
the bytestream are SHA-1 hashes of the tree contebnts), (2) come
up with the corresponding tree contents in the SHA-256 world (you
should be able to do that now you know SHA-1 names of the objects
directly referred to by the tree) and hash that using SHA-256, and
(3) remember the SHA-1 and the SHA-256 name correspondence of the
tree object you just hashed, in addition to the usual (4) hashing
the contents using SHA-1 hash algorithm without caring what the byte
stream represents.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3] bulk-checkin: only support blobs in index_bulk_checkin
  2023-09-27 16:26               ` Junio C Hamano
@ 2023-09-27 20:06                 ` Eric W. Biederman
  0 siblings, 0 replies; 12+ messages in thread
From: Eric W. Biederman @ 2023-09-27 20:06 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Taylor Blau, brian m. carlson, Elijah Newren, git

Junio C Hamano <gitster@pobox.com> writes:

> Taylor Blau <me@ttaylorr.com> writes:
>
>> I am not sure that I follow. If we have an address in memory from which
>> we want to stream raw bytes directly to the packfile, that should work
>> for all objects regardless of type, no?
>
> For a single hash world, yes.  For keeping track of "the other hash"
> and correspondence, you need to (1) interpret the contents of the
> object (e.g., if you received a tree contents for SHA-1 repository,
> you'd need to split them into tree entries and know which parts of
> the bytestream are SHA-1 hashes of the tree contebnts), (2) come
> up with the corresponding tree contents in the SHA-256 world (you
> should be able to do that now you know SHA-1 names of the objects
> directly referred to by the tree) and hash that using SHA-256, and
> (3) remember the SHA-1 and the SHA-256 name correspondence of the
> tree object you just hashed, in addition to the usual (4) hashing
> the contents using SHA-1 hash algorithm without caring what the byte
> stream represents.

If it helps I just posted a patchset that implements what it takes
to deal with objects small enough to live in-core.

You can read object-file-convert.c to see what it takes to generate
an object in the other hash function world.

The exercise for the reader is how to apply this to objects that
are too large to fit in memory.

Eric


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3] bulk-checkin: only support blobs in index_bulk_checkin
  2023-09-27  1:38         ` Taylor Blau
  2023-09-27  4:08           ` Junio C Hamano
@ 2023-09-27 20:13           ` Eric W. Biederman
  1 sibling, 0 replies; 12+ messages in thread
From: Eric W. Biederman @ 2023-09-27 20:13 UTC (permalink / raw)
  To: Taylor Blau; +Cc: Junio C Hamano, brian m. carlson, git

Taylor Blau <me@ttaylorr.com> writes:

> On Tue, Sep 26, 2023 at 02:48:31PM -0700, Junio C Hamano wrote:
>> > Avoid all of those future complications by limiting index_bulk_checkin
>> > to only work on blobs.
>>
>> Thanks.  Will queue.
>
> Hmm. I wonder if retaining some flexibility in the bulk-checkin
> mechanism may be worthwhile. We discussed at the Contributor's
> Summit[^1] today that the bulk-checkin system may be a good fit for
> packing any blobs/trees created by `merge-tree` or `replay` instead of
> writing them out as loose objects.
>
> Being able to write trees in addition to blobs is definitely important
> there, so we may want to wait on merging this down until that direction
> solidifies a bit more. (FWIW, I started working on that today and hope
> to have patches on the list in the next day or two).
>
> Alternatively, if there is an urgency to merge these down, we can always
> come back to it in the future and revert it if need be. Either way
> :-).

There are two things that index_bulk_checkin does.
- Handle objects that are too large to fit into a memory
- Place objects immediately in a pack.

Do I read things correctly that you want to take an object that is small
enough to fit into memory, and to immediately into a pack?

If so you essentially want write_object_file that directly writes to a
pack?

A version of write_object_file that that directly writes to a pack is
much easier than the chunking that index_bulk_checkin does.

Perhaps your version could be called index_pack_checkin?

Eric

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH v3] bulk-checkin: only support blobs in index_bulk_checkin
  2023-09-26 15:58     ` [PATCH v3] " Eric W. Biederman
  2023-09-26 21:48       ` Junio C Hamano
@ 2023-09-28  9:39       ` Oswald Buddenhagen
  1 sibling, 0 replies; 12+ messages in thread
From: Oswald Buddenhagen @ 2023-09-28  9:39 UTC (permalink / raw)
  To: Eric W. Biederman; +Cc: Junio C Hamano, brian m. carlson, git

just language nits on the commit message:

On Tue, Sep 26, 2023 at 10:58:43AM -0500, Eric W. Biederman wrote:
>Not supporting commits, tags, or trees has no downside as it is not
>currently supported now, and commits, tags, and trees being smaller by
>design do not have the problem that the problem that index_bulk_checkin
				     ^^^^^^^^^^^^^^^^
				     duplicated!

>was built to solve.

>A version of index_bulk_checkin that supports more than just blobs when
>computing both the SHA-1 and the SHA-256 of every object added would
>need a different, and more expensive structure.  The structure is more
>expensive because it would be required to temporarily buffering the
							     ^^^
							no 'ing' here.

>equivalent object the compatibility hash needs to be computed over.


>A temporary object is needed, because before a hash over an object can
>computed it's
>
"be computed, its"

>object header needs to be computed.  One of the members of

regards

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2023-09-28  9:40 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2023-09-20  3:52 [PATCH v2] bulk-checkin: only support blobs in index_bulk_checkin Eric W. Biederman
2023-09-20  6:59 ` Junio C Hamano
2023-09-20 12:24   ` Eric W. Biederman
2023-09-26 15:58     ` [PATCH v3] " Eric W. Biederman
2023-09-26 21:48       ` Junio C Hamano
2023-09-27  1:38         ` Taylor Blau
2023-09-27  4:08           ` Junio C Hamano
2023-09-27 14:34             ` Taylor Blau
2023-09-27 16:26               ` Junio C Hamano
2023-09-27 20:06                 ` Eric W. Biederman
2023-09-27 20:13           ` Eric W. Biederman
2023-09-28  9:39       ` Oswald Buddenhagen

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).