* Re: [PATCH] bulk-checkin: Only accept blobs
[not found] <87fs39vppo.fsf@email.froward.int.ebiederm.org>
@ 2023-09-19 22:15 ` Junio C Hamano
0 siblings, 0 replies; 2+ messages in thread
From: Junio C Hamano @ 2023-09-19 22:15 UTC (permalink / raw)
To: Eric W. Biederman; +Cc: git, brian m. carlson
"Eric W. Biederman" <ebiederm@xmission.com> writes:
> Subject: Re: [PATCH] bulk-checkin: Only accept blobs
s/Only/only/;
> As the code is written today bulk_checkin only accepts blobs. When
> dealing with multiple hash algorithms it is necessary to distinguish
> between blobs and object types that have embedded oids. For objects
> that embed oids, a completely new object needs to be generated to
> compute the compatibility hash on. For blobs however all that is
> needed is to compute the compatibility hash on the same blob as the
> storage hash.
All of the above is understandable, but ...
> As the code will need the compatibility hash from a bulk checkin, remove
> support for a bulk checkin of anything except blobs.
... I am not sure what the first half of this sentence is saying.
Do you mean something like:
The function takes "enum object_type" and pretends that it could
stream trees and commits, but if we wanted to interoperate with
multiple hash algorithms, a lot more information than just the
contents and object type would be needed. A tree object needs the
"compatibility hash" (i.e. the hash computed for the other hash
functions) for the objects the tree contains, and a commit object
likewise needs the "compatibility hash" for its tree and its parent
commits. IOW, the existing interface into the functions this
patch touches is way too narrow to be useful for object types
other than blobs.
perhaps? I agree with the conclusion that the functions on the
callchain involved in the bulk-checkin feature can safely be
limited to handle blobs in the current code (and I have my reasons
to think why it is not regressing the interface), but it makes me
feel uneasy that I am not quite sure what your reasoning is.
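To make concrete why a tree cannot simply be rehashed byte-for-byte, here is a small Python sketch of the tree object layout (hypothetical illustration, not git's actual C code; the oid values are placeholders): each entry embeds a child's raw binary oid, so the body of a tree, and therefore its size-bearing header, differs between SHA-1 and SHA-256.

```python
def format_tree(entries):
    """entries: list of (mode, name, raw_oid) with raw binary oids."""
    body = b"".join(mode + b" " + name + b"\x00" + oid
                    for mode, name, oid in entries)
    # The header embeds the body size, so it differs whenever the oids do.
    return b"tree %d\x00" % len(body) + body

# Same logical entry, once with a 20-byte SHA-1 oid, once with a
# 32-byte SHA-256 oid (both oid values are placeholders).
sha1_tree   = format_tree([(b"100644", b"a.txt", b"\x00" * 20)])
sha256_tree = format_tree([(b"100644", b"a.txt", b"\x11" * 32)])
```

The two encodings differ in size (33- vs. 45-byte bodies here), which is exactly why the compatibility hash of a tree cannot be produced from the same byte stream.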
>
> Inspired-by: brian m. carlson <sandals@crustytoothpaste.net>
> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
> ---
>
> Apologies to anyone who has seen this before. The last time I sent
> this patch out it seems to have disappeared into a black hole so
> I am trying again.
I do not know if vger is dropping your messages, but I haven't seen
this on the https://lore.kernel.org/git/ archive, so I'll quote
everything from the message I am responding to without omitting
anything, for the benefit of others who may find out about this
patch via my response.
Thanks.
> bulk-checkin.c | 35 +++++++++++++++++------------------
> bulk-checkin.h | 6 +++---
> object-file.c | 12 ++++++------
> 3 files changed, 26 insertions(+), 27 deletions(-)
>
> diff --git a/bulk-checkin.c b/bulk-checkin.c
> index 73bff3a23d27..223562b4e748 100644
> --- a/bulk-checkin.c
> +++ b/bulk-checkin.c
> @@ -155,10 +155,10 @@ static int already_written(struct bulk_checkin_packfile *state, struct object_id
> * status before calling us just in case we ask it to call us again
> * with a new pack.
> */
> -static int stream_to_pack(struct bulk_checkin_packfile *state,
> - git_hash_ctx *ctx, off_t *already_hashed_to,
> - int fd, size_t size, enum object_type type,
> - const char *path, unsigned flags)
> +static int stream_blob_to_pack(struct bulk_checkin_packfile *state,
> + git_hash_ctx *ctx, off_t *already_hashed_to,
> + int fd, size_t size, const char *path,
> + unsigned flags)
> {
> git_zstream s;
> unsigned char ibuf[16384];
> @@ -170,7 +170,7 @@ static int stream_to_pack(struct bulk_checkin_packfile *state,
>
> git_deflate_init(&s, pack_compression_level);
>
> - hdrlen = encode_in_pack_object_header(obuf, sizeof(obuf), type, size);
> + hdrlen = encode_in_pack_object_header(obuf, sizeof(obuf), OBJ_BLOB, size);
> s.next_out = obuf + hdrlen;
> s.avail_out = sizeof(obuf) - hdrlen;
>
> @@ -247,11 +247,10 @@ static void prepare_to_stream(struct bulk_checkin_packfile *state,
> die_errno("unable to write pack header");
> }
>
> -static int deflate_to_pack(struct bulk_checkin_packfile *state,
> - struct object_id *result_oid,
> - int fd, size_t size,
> - enum object_type type, const char *path,
> - unsigned flags)
> +static int deflate_blob_to_pack(struct bulk_checkin_packfile *state,
> + struct object_id *result_oid,
> + int fd, size_t size,
> + const char *path, unsigned flags)
> {
> off_t seekback, already_hashed_to;
> git_hash_ctx ctx;
> @@ -265,7 +264,7 @@ static int deflate_to_pack(struct bulk_checkin_packfile *state,
> return error("cannot find the current offset");
>
> header_len = format_object_header((char *)obuf, sizeof(obuf),
> - type, size);
> + OBJ_BLOB, size);
> the_hash_algo->init_fn(&ctx);
> the_hash_algo->update_fn(&ctx, obuf, header_len);
>
> @@ -282,8 +281,8 @@ static int deflate_to_pack(struct bulk_checkin_packfile *state,
> idx->offset = state->offset;
> crc32_begin(state->f);
> }
> - if (!stream_to_pack(state, &ctx, &already_hashed_to,
> - fd, size, type, path, flags))
> + if (!stream_blob_to_pack(state, &ctx, &already_hashed_to,
> + fd, size, path, flags))
> break;
> /*
> * Writing this object to the current pack will make
> @@ -350,12 +349,12 @@ void fsync_loose_object_bulk_checkin(int fd, const char *filename)
> }
> }
>
> -int index_bulk_checkin(struct object_id *oid,
> - int fd, size_t size, enum object_type type,
> - const char *path, unsigned flags)
> +int index_blob_bulk_checkin(struct object_id *oid,
> + int fd, size_t size,
> + const char *path, unsigned flags)
> {
> - int status = deflate_to_pack(&bulk_checkin_packfile, oid, fd, size, type,
> - path, flags);
> + int status = deflate_blob_to_pack(&bulk_checkin_packfile, oid, fd, size,
> + path, flags);
> if (!odb_transaction_nesting)
> flush_bulk_checkin_packfile(&bulk_checkin_packfile);
> return status;
> diff --git a/bulk-checkin.h b/bulk-checkin.h
> index 48fe9a6e9171..aa7286a7b3e1 100644
> --- a/bulk-checkin.h
> +++ b/bulk-checkin.h
> @@ -9,9 +9,9 @@
> void prepare_loose_object_bulk_checkin(void);
> void fsync_loose_object_bulk_checkin(int fd, const char *filename);
>
> -int index_bulk_checkin(struct object_id *oid,
> - int fd, size_t size, enum object_type type,
> - const char *path, unsigned flags);
> +int index_blob_bulk_checkin(struct object_id *oid,
> + int fd, size_t size,
> + const char *path, unsigned flags);
>
> /*
> * Tell the object database to optimize for adding
> diff --git a/object-file.c b/object-file.c
> index 7dc0c4bfbba8..7c7afe579364 100644
> --- a/object-file.c
> +++ b/object-file.c
> @@ -2446,11 +2446,11 @@ static int index_core(struct index_state *istate,
> * binary blobs, they generally do not want to get any conversion, and
> * callers should avoid this code path when filters are requested.
> */
> -static int index_stream(struct object_id *oid, int fd, size_t size,
> - enum object_type type, const char *path,
> - unsigned flags)
> +static int index_blob_stream(struct object_id *oid, int fd, size_t size,
> + const char *path,
> + unsigned flags)
> {
> - return index_bulk_checkin(oid, fd, size, type, path, flags);
> + return index_blob_bulk_checkin(oid, fd, size, path, flags);
> }
>
> int index_fd(struct index_state *istate, struct object_id *oid,
> @@ -2472,8 +2472,8 @@ int index_fd(struct index_state *istate, struct object_id *oid,
> ret = index_core(istate, oid, fd, xsize_t(st->st_size),
> type, path, flags);
> else
> - ret = index_stream(oid, fd, xsize_t(st->st_size), type, path,
> - flags);
> + ret = index_blob_stream(oid, fd, xsize_t(st->st_size), path,
> + flags);
> close(fd);
> return ret;
> }
* [PATCH] bulk-checkin: Only accept blobs
[not found] <87a5thrak6.fsf@email.froward.int.ebiederm.org>
@ 2023-09-20 0:28 ` Eric W. Biederman
0 siblings, 0 replies; 2+ messages in thread
From: Eric W. Biederman @ 2023-09-20 0:28 UTC (permalink / raw)
To: Junio C Hamano; +Cc: git, brian m. carlson
Junio C Hamano <gitster@pobox.com> writes:
> "Eric W. Biederman" <ebiederm@xmission.com> writes:
>
>> Subject: Re: [PATCH] bulk-checkin: Only accept blobs
>
> s/Only/only/;
>
>> As the code is written today bulk_checkin only accepts blobs. When
>> dealing with multiple hash algorithms it is necessary to distinguish
>> between blobs and object types that have embedded oids. For objects
>> that embed oids, a completely new object needs to be generated to
>> compute the compatibility hash on. For blobs however all that is
>> needed is to compute the compatibility hash on the same blob as the
>> storage hash.
>
> All of the above is understandable, but ...
>
>> As the code will need the compatibility hash from a bulk checkin, remove
>> support for a bulk checkin of anything except blobs.
>
> ... I am not sure what the first half of this sentence is saying.
I mean the code will need to compute both the SHA-1 and the SHA-256
oid of the object being checked in.
What I left implied is that for blobs this is straightforward because
the object header and content are the same in both cases, and we just
need calls into the compatible hash algorithm to allow computing both
the SHA-1 and SHA-256 oids simultaneously.
For commits, trees, and tags the entire object must be walked before
the size of the compatible object is known. The size of the compatible
object appears in the compatible object header, which means that for
commits, trees, and tags their compatible object must be computed
before their compatible hash can be computed.
Computing all of that in two passes is not what a streaming interface
is about.
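The single-pass case for blobs can be illustrated with a Python sketch (using hashlib as a stand-in; this is not git's C code). Because the "blob <size>\0" header and the content are byte-identical under both algorithms, one streaming pass can feed both hash contexts:

```python
import hashlib

def blob_oids(chunks, size):
    # "blob <size>\0" is the same header for SHA-1 and SHA-256, so one
    # pass over the content can update both hash contexts at once.
    h1, h256 = hashlib.sha1(), hashlib.sha256()
    header = b"blob %d\x00" % size
    h1.update(header)
    h256.update(header)
    for chunk in chunks:        # single streaming pass over the data
        h1.update(chunk)
        h256.update(chunk)
    return h1.hexdigest(), h256.hexdigest()
```

The SHA-1 result matches what `git hash-object` produces for the same content, since both hash the identical header-plus-content byte stream.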
> Do you mean something like:
>
> The function takes "enum object_type" and pretends that it could
> stream trees and commits, but if we wanted to interoperate with
> multiple hash algorithms, a lot more information than just the
> contents and object type would be needed. A tree object needs the
> "compatibility hash" (i.e. the hash computed for the other hash
> functions) for the objects the tree contains, and a commit object
> likewise needs the "compatibility hash" for its tree and its parent
> commits. IOW, the existing interface into the functions this
> patch touches is way too narrow to be useful for object types
> other than blobs.
>
> perhaps? I agree with the conclusion that the functions on the
> callchain involved in the bulk-checkin feature can safely be
> limited to handle blobs in the current code (and I have my reasons
> to think why it is not regressing the interface), but it makes me
> feel uneasy that I am not quite sure what your reasoning is.
The interface is wide enough, as we can safely assume that all
objects a commit, tag, or tree would refer to already have both
their SHA-1 and SHA-256 oid computed.
The problem is that it would take two passes over a tree, commit, or
tag object to compute both its SHA-1 oid and its SHA-256 oid.
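Those two passes could be sketched as follows (a hypothetical Python illustration, not git's code; `compat_oid_of` is an assumed mapping from each child's raw SHA-1 oid to its already-computed raw SHA-256 oid):

```python
import hashlib

def compat_tree_oid(entries, compat_oid_of):
    """entries: list of (mode, name, sha1_raw_oid) tuples."""
    # Pass 1: rebuild the body, swapping in each child's compatibility
    # oid; only after this is the compatible object's size known.
    body = b"".join(mode + b" " + name + b"\x00" + compat_oid_of[oid]
                    for mode, name, oid in entries)
    # Pass 2: hash the size-bearing header plus the rebuilt body.
    return hashlib.sha256(b"tree %d\x00" % len(body) + body).hexdigest()
```

Because pass 2 cannot begin until pass 1 has produced the full rebuilt body, the object cannot be hashed as it streams by, which is the mismatch with the bulk-checkin streaming interface.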
>>
>> Inspired-by: brian m. carlson <sandals@crustytoothpaste.net>
>> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
>> ---
>>
>> Apologies to anyone who has seen this before. The last time I sent
>> this patch out it seems to have disappeared into a black hole so
>> I am trying again.
>
> I do not know if vger is dropping your messages, but I haven't seen
> this on the https://lore.kernel.org/git/ archive, so I'll quote
> everything from the message I am responding to without omitting
> anything, for the benefit of others who may find out about this
> patch via my response.
I don't see the messages at vger either. So I am going to try this
conversation from my gmail address and see how well that works.
Eric