* [POC PATCH 0/5] Threaded loose object and pack access
@ 2011-12-09 8:39 Thomas Rast
2011-12-09 8:39 ` [POC PATCH 1/5] Turn grep's use_threads into a global flag Thomas Rast
` (6 more replies)
0 siblings, 7 replies; 11+ messages in thread
From: Thomas Rast @ 2011-12-09 8:39 UTC (permalink / raw)
To: git; +Cc: René Scharfe, Junio C Hamano, Eric Herman
Well, just to make sure we're all left in a confused mess of partly
conflicting patches, here's another angle on the same thing:
Jeff King wrote:
> Wow, that's horrible. Leaving aside the parallelism, it's just terrible
> that reading from the cache is 20 times slower than the worktree. I get
> similar results on my quad-core machine.
By poking around in sha1_file.c I got that factor down to about 10. It's
not great yet, but it seems a start.
The goal would be to improve it to the point where a pack lookup that
already has all relevant packs open and windows mapped can proceed
without locking. I'm not sure that's doable short of duplicating the
whole pack state (including fds and windows) across threads, but I'll
give it some more thought before going that route.
Thomas Rast (5):
Turn grep's use_threads into a global flag
grep: push locking into read_sha1_*
sha1_file_name_buf(): sha1_file_name in caller's buffer
sha1_file: stuff various pack reading variables into a struct
sha1_file: make the pack machinery thread-safe
builtin/grep.c | 60 +++++-----------
cache.h | 1 +
replace_object.c | 5 +-
sha1_file.c | 213 +++++++++++++++++++++++++++++++++++++++++-------------
thread-utils.c | 30 ++++++++
thread-utils.h | 23 ++++++
6 files changed, 240 insertions(+), 92 deletions(-)
--
1.7.8.431.g2abf2
^ permalink raw reply [flat|nested] 11+ messages in thread
* [POC PATCH 1/5] Turn grep's use_threads into a global flag
2011-12-09 8:39 [POC PATCH 0/5] Threaded loose object and pack access Thomas Rast
@ 2011-12-09 8:39 ` Thomas Rast
2011-12-09 8:39 ` [POC PATCH 2/5] grep: push locking into read_sha1_* Thomas Rast
` (5 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Thomas Rast @ 2011-12-09 8:39 UTC (permalink / raw)
To: git; +Cc: René Scharfe, Junio C Hamano, Eric Herman
In preparation for further work on this, turn use_threads into a flag
shared across the whole code base. The supporting (un)lock_if_threaded()
functions are to be used for locking; they return immediately when not
threading.
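For readers skimming the archive, the intended pattern is easiest to see in a
standalone sketch (illustrative code with a toy counter, not the actual git
sources):

```c
#include <pthread.h>

int use_threads;	/* mirrors the patch's global flag */

void lock_if_threaded(pthread_mutex_t *m)
{
	if (use_threads)
		pthread_mutex_lock(m);
}

void unlock_if_threaded(pthread_mutex_t *m)
{
	if (use_threads)
		pthread_mutex_unlock(m);
}

/* Example of subsystem state guarded by the conditional lock. */
static pthread_mutex_t counter_mutex = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void *worker(void *arg)
{
	int i;
	for (i = 0; i < 100000; i++) {
		lock_if_threaded(&counter_mutex);
		counter++;
		unlock_if_threaded(&counter_mutex);
	}
	return NULL;
}

/* Assumes nthreads <= 16; single-threaded callers pay no locking cost. */
long run_workers(int nthreads)
{
	pthread_t tid[16];
	int i;

	counter = 0;
	use_threads = nthreads > 1;
	if (!use_threads) {
		worker(NULL);	/* locking is skipped entirely */
		return counter;
	}
	for (i = 0; i < nthreads; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (i = 0; i < nthreads; i++)
		pthread_join(tid[i], NULL);
	return counter;
}
```

The point of the helpers is that callers in code shared between threaded and
unthreaded commands need no #ifdef or flag checks of their own.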
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
---
builtin/grep.c | 20 ++++++++------------
thread-utils.c | 16 ++++++++++++++++
thread-utils.h | 16 ++++++++++++++++
3 files changed, 40 insertions(+), 12 deletions(-)
diff --git a/builtin/grep.c b/builtin/grep.c
index 988ea1d..76f2c4f 100644
--- a/builtin/grep.c
+++ b/builtin/grep.c
@@ -24,8 +24,6 @@
NULL
};
-static int use_threads = 1;
-
#ifndef NO_PTHREADS
#define THREADS 8
static pthread_t threads[THREADS];
@@ -76,14 +74,12 @@ struct work_item {
static inline void grep_lock(void)
{
- if (use_threads)
- pthread_mutex_lock(&grep_mutex);
+ lock_if_threaded(&grep_mutex);
}
static inline void grep_unlock(void)
{
- if (use_threads)
- pthread_mutex_unlock(&grep_mutex);
+ unlock_if_threaded(&grep_mutex);
}
/* Used to serialize calls to read_sha1_file. */
@@ -91,14 +87,12 @@ static inline void grep_unlock(void)
static inline void read_sha1_lock(void)
{
- if (use_threads)
- pthread_mutex_lock(&read_sha1_mutex);
+ lock_if_threaded(&read_sha1_mutex);
}
static inline void read_sha1_unlock(void)
{
- if (use_threads)
- pthread_mutex_unlock(&read_sha1_mutex);
+ unlock_if_threaded(&read_sha1_mutex);
}
/* Signalled when a new work_item is added to todo. */
@@ -984,6 +978,10 @@ int cmd_grep(int argc, const char **argv, const char *prefix)
argc--;
}
+#ifndef NO_PTHREADS
+ use_threads = 1;
+#endif
+
if (show_in_pager == default_pager)
show_in_pager = git_pager(1);
if (show_in_pager) {
@@ -1011,8 +1009,6 @@ int cmd_grep(int argc, const char **argv, const char *prefix)
skip_first_line = 1;
start_threads(&opt);
}
-#else
- use_threads = 0;
#endif
compile_grep_patterns(&opt);
diff --git a/thread-utils.c b/thread-utils.c
index 7f4b76a..fb75a29 100644
--- a/thread-utils.c
+++ b/thread-utils.c
@@ -1,6 +1,8 @@
#include "cache.h"
#include "thread-utils.h"
+int use_threads;
+
#if defined(hpux) || defined(__hpux) || defined(_hpux)
# include <sys/pstat.h>
#endif
@@ -59,3 +61,17 @@ int init_recursive_mutex(pthread_mutex_t *m)
}
return ret;
}
+
+#ifndef NO_PTHREADS
+void lock_if_threaded(pthread_mutex_t *m)
+{
+ if (use_threads)
+ pthread_mutex_lock(m);
+}
+
+void unlock_if_threaded(pthread_mutex_t *m)
+{
+ if (use_threads)
+ pthread_mutex_unlock(m);
+}
+#endif
diff --git a/thread-utils.h b/thread-utils.h
index 6fb98c3..9a780a2 100644
--- a/thread-utils.h
+++ b/thread-utils.h
@@ -1,11 +1,27 @@
#ifndef THREAD_COMPAT_H
#define THREAD_COMPAT_H
+/*
+ * This variable is used by commands to globally tell affected
+ * subsystems that they must use thread-safe mechanisms.
+ */
+extern int use_threads;
+
#ifndef NO_PTHREADS
#include <pthread.h>
extern int online_cpus(void);
extern int init_recursive_mutex(pthread_mutex_t*);
+/* These functions do nothing if use_threads==0 or NO_PTHREADS */
+extern void lock_if_threaded(pthread_mutex_t*);
+extern void unlock_if_threaded(pthread_mutex_t*);
+
+#else
+
+#define lock_if_threaded(lock)
+#define unlock_if_threaded(lock)
+
#endif
+
#endif /* THREAD_COMPAT_H */
--
1.7.8.431.g2abf2
* [POC PATCH 2/5] grep: push locking into read_sha1_*
2011-12-09 8:39 [POC PATCH 0/5] Threaded loose object and pack access Thomas Rast
2011-12-09 8:39 ` [POC PATCH 1/5] Turn grep's use_threads into a global flag Thomas Rast
@ 2011-12-09 8:39 ` Thomas Rast
2011-12-09 8:39 ` [POC PATCH 3/5] sha1_file_name_buf(): sha1_file_name in caller's buffer Thomas Rast
` (4 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Thomas Rast @ 2011-12-09 8:39 UTC (permalink / raw)
To: git; +Cc: René Scharfe, Junio C Hamano, Eric Herman
Move the locking away from grep (the user) and into read_sha1_* and
read_object_* (the subsystem). This will allow future work on the
locking granularity there.
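The before/after shape of this move, reduced to a toy subsystem (hypothetical
names; only the structure mirrors the patch):

```c
#include <pthread.h>

/* The subsystem now owns its lock; callers never see it. */
static pthread_mutex_t subsystem_mutex;
static int use_threads;	/* stand-in for the global from patch 1 */

void init_subsystem_locks(void)
{
	pthread_mutex_init(&subsystem_mutex, NULL);
}

void destroy_subsystem_locks(void)
{
	pthread_mutex_destroy(&subsystem_mutex);
}

static int store[256];

/* Callers used to wrap these in their own lock/unlock pair;
 * the serialization now happens inside the subsystem. */
void subsystem_write(int key, int v)
{
	if (use_threads) pthread_mutex_lock(&subsystem_mutex);
	store[key & 0xff] = v;
	if (use_threads) pthread_mutex_unlock(&subsystem_mutex);
}

int subsystem_read(int key)
{
	int v;
	if (use_threads) pthread_mutex_lock(&subsystem_mutex);
	v = store[key & 0xff];
	if (use_threads) pthread_mutex_unlock(&subsystem_mutex);
	return v;
}
```

Once the lock lives inside the subsystem, later patches can shrink its scope
(finer granularity) without touching any caller.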
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
---
builtin/grep.c | 35 ++++-------------------------------
sha1_file.c | 12 ++++++++++--
thread-utils.c | 11 +++++++++++
thread-utils.h | 6 ++++++
4 files changed, 31 insertions(+), 33 deletions(-)
diff --git a/builtin/grep.c b/builtin/grep.c
index 76f2c4f..6c5bdfa 100644
--- a/builtin/grep.c
+++ b/builtin/grep.c
@@ -82,19 +82,6 @@ static inline void grep_unlock(void)
unlock_if_threaded(&grep_mutex);
}
-/* Used to serialize calls to read_sha1_file. */
-static pthread_mutex_t read_sha1_mutex;
-
-static inline void read_sha1_lock(void)
-{
- lock_if_threaded(&read_sha1_mutex);
-}
-
-static inline void read_sha1_unlock(void)
-{
- unlock_if_threaded(&read_sha1_mutex);
-}
-
/* Signalled when a new work_item is added to todo. */
static pthread_cond_t cond_add;
@@ -248,8 +235,8 @@ static void start_threads(struct grep_opt *opt)
{
int i;
+ init_subsystem_locks();
pthread_mutex_init(&grep_mutex, NULL);
- pthread_mutex_init(&read_sha1_mutex, NULL);
pthread_cond_init(&cond_add, NULL);
pthread_cond_init(&cond_write, NULL);
pthread_cond_init(&cond_result, NULL);
@@ -296,16 +283,14 @@ static int wait_all(void)
}
pthread_mutex_destroy(&grep_mutex);
- pthread_mutex_destroy(&read_sha1_mutex);
pthread_cond_destroy(&cond_add);
pthread_cond_destroy(&cond_write);
pthread_cond_destroy(&cond_result);
+ destroy_subsystem_locks();
return hit;
}
#else /* !NO_PTHREADS */
-#define read_sha1_lock()
-#define read_sha1_unlock()
static int wait_all(void)
{
@@ -363,21 +348,11 @@ static int grep_config(const char *var, const char *value, void *cb)
return 0;
}
-static void *lock_and_read_sha1_file(const unsigned char *sha1, enum object_type *type, unsigned long *size)
-{
- void *data;
-
- read_sha1_lock();
- data = read_sha1_file(sha1, type, size);
- read_sha1_unlock();
- return data;
-}
-
static void *load_sha1(const unsigned char *sha1, unsigned long *size,
const char *name)
{
enum object_type type;
- void *data = lock_and_read_sha1_file(sha1, &type, size);
+ void *data = read_sha1_file(sha1, &type, size);
if (!data)
error(_("'%s': unable to read %s"), name, sha1_to_hex(sha1));
@@ -578,7 +553,7 @@ static int grep_tree(struct grep_opt *opt, const struct pathspec *pathspec,
void *data;
unsigned long size;
- data = lock_and_read_sha1_file(entry.sha1, &type, &size);
+ data = read_sha1_file(entry.sha1, &type, &size);
if (!data)
die(_("unable to read tree (%s)"),
sha1_to_hex(entry.sha1));
@@ -608,10 +583,8 @@ static int grep_object(struct grep_opt *opt, const struct pathspec *pathspec,
struct strbuf base;
int hit, len;
- read_sha1_lock();
data = read_object_with_reference(obj->sha1, tree_type,
&size, NULL);
- read_sha1_unlock();
if (!data)
die(_("unable to read tree (%s)"), sha1_to_hex(obj->sha1));
diff --git a/sha1_file.c b/sha1_file.c
index 956422b..c3595b3 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -18,6 +18,7 @@
#include "refs.h"
#include "pack-revindex.h"
#include "sha1-lookup.h"
+#include "thread-utils.h"
#ifndef O_NOATIME
#if defined(__linux__) && (defined(__i386__) || defined(__PPC__))
@@ -2237,13 +2238,19 @@ void *read_sha1_file_extended(const unsigned char *sha1,
void *data;
char *path;
const struct packed_git *p;
- const unsigned char *repl = (flag & READ_SHA1_FILE_REPLACE)
+ const unsigned char *repl;
+
+ lock_if_threaded(&read_sha1_mutex);
+
+ repl = (flag & READ_SHA1_FILE_REPLACE)
? lookup_replace_object(sha1) : sha1;
errno = 0;
data = read_object(repl, type, size);
- if (data)
+ if (data) {
+ unlock_if_threaded(&read_sha1_mutex);
return data;
+ }
if (errno && errno != ENOENT)
die_errno("failed to read object %s", sha1_to_hex(sha1));
@@ -2263,6 +2270,7 @@ void *read_sha1_file_extended(const unsigned char *sha1,
die("packed object %s (stored in %s) is corrupt",
sha1_to_hex(repl), p->pack_name);
+ unlock_if_threaded(&read_sha1_mutex);
return NULL;
}
diff --git a/thread-utils.c b/thread-utils.c
index fb75a29..70af3f9 100644
--- a/thread-utils.c
+++ b/thread-utils.c
@@ -62,6 +62,17 @@ int init_recursive_mutex(pthread_mutex_t *m)
return ret;
}
+pthread_mutex_t read_sha1_mutex;
+void init_subsystem_locks(void)
+{
+ pthread_mutex_init(&read_sha1_mutex, NULL);
+}
+
+void destroy_subsystem_locks(void)
+{
+ pthread_mutex_destroy(&read_sha1_mutex);
+}
+
#ifndef NO_PTHREADS
void lock_if_threaded(pthread_mutex_t *m)
{
diff --git a/thread-utils.h b/thread-utils.h
index 9a780a2..3906753 100644
--- a/thread-utils.h
+++ b/thread-utils.h
@@ -17,10 +17,16 @@
extern void lock_if_threaded(pthread_mutex_t*);
extern void unlock_if_threaded(pthread_mutex_t*);
+extern pthread_mutex_t read_sha1_mutex;
+extern void init_subsystem_locks(void);
+extern void destroy_subsystem_locks(void);
+
#else
#define lock_if_threaded(lock)
#define unlock_if_threaded(lock)
+#define init_subsystem_locks()
+#define destroy_subsystem_locks()
#endif
--
1.7.8.431.g2abf2
* [POC PATCH 3/5] sha1_file_name_buf(): sha1_file_name in caller's buffer
2011-12-09 8:39 [POC PATCH 0/5] Threaded loose object and pack access Thomas Rast
2011-12-09 8:39 ` [POC PATCH 1/5] Turn grep's use_threads into a global flag Thomas Rast
2011-12-09 8:39 ` [POC PATCH 2/5] grep: push locking into read_sha1_* Thomas Rast
@ 2011-12-09 8:39 ` Thomas Rast
2011-12-09 8:39 ` [POC PATCH 4/5] sha1_file: stuff various pack reading variables into a struct Thomas Rast
` (3 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Thomas Rast @ 2011-12-09 8:39 UTC (permalink / raw)
To: git; +Cc: René Scharfe, Junio C Hamano, Eric Herman
sha1_file_name is non-reentrant because of its use of a static buffer.
Split it into the buffer-writing core (which can be called even from
threads, as long as the buffer is stack-allocated) and a small wrapper
that uses the static buffer as before.
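This is a generic recipe for making a static-buffer API reentrant while
keeping the old interface; a minimal sketch with a made-up path layout
(not git's actual object naming):

```c
#include <stdio.h>
#include <string.h>

#define PATH_MAX_SKETCH 64	/* stand-in for PATH_MAX */

/* Reentrant core: formats into the caller's buffer, so concurrent
 * callers with stack buffers never interfere. */
void object_path_buf(char *buf, unsigned id)
{
	/* hypothetical layout: objects/<2 hex>/<6 hex> */
	snprintf(buf, PATH_MAX_SKETCH, "objects/%02x/%06x",
		 id >> 24, id & 0xffffff);
}

/* Legacy wrapper: preserves the old non-reentrant interface for
 * existing single-threaded callers. */
char *object_path(unsigned id)
{
	static char buf[PATH_MAX_SKETCH];
	object_path_buf(buf, id);
	return buf;
}
```

Threaded call sites switch to the `_buf` variant; everything else keeps
working unchanged.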
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
---
sha1_file.c | 29 +++++++++++++++++++----------
1 files changed, 19 insertions(+), 10 deletions(-)
diff --git a/sha1_file.c b/sha1_file.c
index c3595b3..18648c3 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -153,18 +153,11 @@ static void fill_sha1_path(char *pathbuf, const unsigned char *sha1)
}
/*
- * NOTE! This returns a statically allocated buffer, so you have to be
- * careful about using it. Do an "xstrdup()" if you need to save the
- * filename.
- *
- * Also note that this returns the location for creating. Reading
- * SHA1 file can happen from any alternate directory listed in the
- * DB_ENVIRONMENT environment variable if it is not found in
- * the primary object database.
+ * Similar to sha1_file_name but you provide a buffer of size at least
+ * PATH_MAX.
*/
-char *sha1_file_name(const unsigned char *sha1)
+void sha1_file_name_buf(char *buf, const unsigned char *sha1)
{
- static char buf[PATH_MAX];
const char *objdir;
int len;
@@ -179,6 +172,22 @@ char *sha1_file_name(const unsigned char *sha1)
buf[len+3] = '/';
buf[len+42] = '\0';
fill_sha1_path(buf + len + 1, sha1);
+}
+
+/*
+ * NOTE! This returns a statically allocated buffer, so you have to be
+ * careful about using it. Do an "xstrdup()" if you need to save the
+ * filename.
+ *
+ * Also note that this returns the location for creating. Reading
+ * SHA1 file can happen from any alternate directory listed in the
+ * DB_ENVIRONMENT environment variable if it is not found in
+ * the primary object database.
+ */
+char *sha1_file_name(const unsigned char *sha1)
+{
+ static char buf[PATH_MAX];
+ sha1_file_name_buf(buf, sha1);
return buf;
}
--
1.7.8.431.g2abf2
* [POC PATCH 4/5] sha1_file: stuff various pack reading variables into a struct
2011-12-09 8:39 [POC PATCH 0/5] Threaded loose object and pack access Thomas Rast
` (2 preceding siblings ...)
2011-12-09 8:39 ` [POC PATCH 3/5] sha1_file_name_buf(): sha1_file_name in caller's buffer Thomas Rast
@ 2011-12-09 8:39 ` Thomas Rast
2011-12-09 8:39 ` [POC PATCH 5/5] sha1_file: make the pack machinery thread-safe Thomas Rast
` (2 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Thomas Rast @ 2011-12-09 8:39 UTC (permalink / raw)
To: git; +Cc: René Scharfe, Junio C Hamano, Eric Herman
In preparation for making these variables thread-local, put various
delta-cache related bits of pack reading state into a struct. For now
the accessor function is a dummy that always returns a static instance
of this struct.
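The accessor becomes interesting in patch 5, where it hands out one context
per thread. The lookup pattern it grows into can be sketched like this
(illustrative struct and names; not the full git state):

```c
#include <pthread.h>
#include <stdlib.h>

struct pack_context {
	size_t delta_base_cached;
	void *last_found;
};

static struct pack_context main_pack_context;
static pthread_key_t pack_context_key;
static int use_threads;

/* Must run before threads start, e.g. from init_subsystem_locks(). */
void init_pack_context_key(void)
{
	pthread_key_create(&pack_context_key, free);
}

struct pack_context *get_thread_pack_context(void)
{
	struct pack_context *ctx;

	if (!use_threads)
		return &main_pack_context;	/* single-threaded fast path */
	ctx = pthread_getspecific(pack_context_key);
	if (!ctx) {
		/* lazily create this thread's private context */
		ctx = calloc(1, sizeof(*ctx));
		pthread_setspecific(pack_context_key, ctx);
	}
	return ctx;
}
```

Because each thread gets its own delta-base cache and last-found pack, those
hot paths need no locking at all.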
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
---
sha1_file.c | 99 ++++++++++++++++++++++++++++++++++++++++-------------------
1 files changed, 67 insertions(+), 32 deletions(-)
diff --git a/sha1_file.c b/sha1_file.c
index 18648c3..7c367f9 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -1655,21 +1655,50 @@ static void *unpack_compressed_entry(struct packed_git *p,
#define MAX_DELTA_CACHE (256)
-static size_t delta_base_cached;
-
-static struct delta_base_cache_lru_list {
+struct delta_base_cache_lru_list {
struct delta_base_cache_lru_list *prev;
struct delta_base_cache_lru_list *next;
-} delta_base_cache_lru = { &delta_base_cache_lru, &delta_base_cache_lru };
+};
+
+static struct delta_base_cache_lru_list main_delta_base_cache_lru = {
+ &main_delta_base_cache_lru, &main_delta_base_cache_lru
+};
-static struct delta_base_cache_entry {
+struct delta_base_cache_entry {
struct delta_base_cache_lru_list lru;
void *data;
struct packed_git *p;
off_t base_offset;
unsigned long size;
enum object_type type;
-} delta_base_cache[MAX_DELTA_CACHE];
+};
+
+static struct delta_base_cache_entry main_delta_base_cache[MAX_DELTA_CACHE];
+
+struct pack_context {
+ size_t delta_base_cached;
+ struct delta_base_cache_entry *delta_base_cache;
+ struct delta_base_cache_lru_list *delta_base_cache_lru;
+ struct packed_git *last_found;
+};
+
+static struct pack_context main_pack_context = {
+ 0, main_delta_base_cache, &main_delta_base_cache_lru, (void*)1
+};
+
+static struct pack_context *pack_context_alloc(void)
+{
+ struct pack_context *ctx = xmalloc(sizeof(struct pack_context));
+ ctx->delta_base_cached = 0;
+ ctx->delta_base_cache_lru = xmalloc(sizeof(struct delta_base_cache_lru_list));
+ ctx->delta_base_cache_lru->prev = ctx->delta_base_cache_lru;
+ ctx->delta_base_cache_lru->next = ctx->delta_base_cache_lru;
+ ctx->delta_base_cache = xcalloc(MAX_DELTA_CACHE, sizeof(struct delta_base_cache_entry));
+ ctx->last_found = (void*)1;
+ return ctx;
+}
+
+#define get_thread_pack_context() (&main_pack_context)
static unsigned long pack_entry_hash(struct packed_git *p, off_t base_offset)
{
@@ -1683,7 +1712,8 @@ static unsigned long pack_entry_hash(struct packed_git *p, off_t base_offset)
static int in_delta_base_cache(struct packed_git *p, off_t base_offset)
{
unsigned long hash = pack_entry_hash(p, base_offset);
- struct delta_base_cache_entry *ent = delta_base_cache + hash;
+ struct delta_base_cache_entry *ent
+ = get_thread_pack_context()->delta_base_cache + hash;
return (ent->data && ent->p == p && ent->base_offset == base_offset);
}
@@ -1692,7 +1722,8 @@ static void *cache_or_unpack_entry(struct packed_git *p, off_t base_offset,
{
void *ret;
unsigned long hash = pack_entry_hash(p, base_offset);
- struct delta_base_cache_entry *ent = delta_base_cache + hash;
+ struct pack_context *ctx = get_thread_pack_context();
+ struct delta_base_cache_entry *ent = ctx->delta_base_cache + hash;
ret = ent->data;
if (!ret || ent->p != p || ent->base_offset != base_offset)
@@ -1702,7 +1733,7 @@ static void *cache_or_unpack_entry(struct packed_git *p, off_t base_offset,
ent->data = NULL;
ent->lru.next->prev = ent->lru.prev;
ent->lru.prev->next = ent->lru.next;
- delta_base_cached -= ent->size;
+ ctx->delta_base_cached -= ent->size;
} else {
ret = xmemdupz(ent->data, ent->size);
}
@@ -1711,48 +1742,52 @@ static void *cache_or_unpack_entry(struct packed_git *p, off_t base_offset,
return ret;
}
-static inline void release_delta_base_cache(struct delta_base_cache_entry *ent)
+static inline void release_delta_base_cache(struct pack_context *ctx,
+ struct delta_base_cache_entry *ent)
{
if (ent->data) {
free(ent->data);
ent->data = NULL;
ent->lru.next->prev = ent->lru.prev;
ent->lru.prev->next = ent->lru.next;
- delta_base_cached -= ent->size;
+ ctx->delta_base_cached -= ent->size;
}
}
void clear_delta_base_cache(void)
{
unsigned long p;
+ struct pack_context *ctx = get_thread_pack_context();
+ struct delta_base_cache_entry *delta_base_cache = ctx->delta_base_cache;
for (p = 0; p < MAX_DELTA_CACHE; p++)
- release_delta_base_cache(&delta_base_cache[p]);
+ release_delta_base_cache(ctx, &delta_base_cache[p]);
}
static void add_delta_base_cache(struct packed_git *p, off_t base_offset,
void *base, unsigned long base_size, enum object_type type)
{
unsigned long hash = pack_entry_hash(p, base_offset);
- struct delta_base_cache_entry *ent = delta_base_cache + hash;
+ struct pack_context *ctx = get_thread_pack_context();
+ struct delta_base_cache_entry *ent = ctx->delta_base_cache + hash;
struct delta_base_cache_lru_list *lru;
- release_delta_base_cache(ent);
- delta_base_cached += base_size;
+ release_delta_base_cache(ctx, ent);
+ ctx->delta_base_cached += base_size;
- for (lru = delta_base_cache_lru.next;
- delta_base_cached > delta_base_cache_limit
- && lru != &delta_base_cache_lru;
+ for (lru = ctx->delta_base_cache_lru->next;
+ ctx->delta_base_cached > delta_base_cache_limit
+ && lru != ctx->delta_base_cache_lru;
lru = lru->next) {
struct delta_base_cache_entry *f = (void *)lru;
if (f->type == OBJ_BLOB)
- release_delta_base_cache(f);
+ release_delta_base_cache(ctx, f);
}
- for (lru = delta_base_cache_lru.next;
- delta_base_cached > delta_base_cache_limit
- && lru != &delta_base_cache_lru;
+ for (lru = ctx->delta_base_cache_lru->next;
+ ctx->delta_base_cached > delta_base_cache_limit
+ && lru != ctx->delta_base_cache_lru;
lru = lru->next) {
struct delta_base_cache_entry *f = (void *)lru;
- release_delta_base_cache(f);
+ release_delta_base_cache(ctx, f);
}
ent->p = p;
@@ -1760,10 +1795,10 @@ static void add_delta_base_cache(struct packed_git *p, off_t base_offset,
ent->type = type;
ent->data = base;
ent->size = base_size;
- ent->lru.next = &delta_base_cache_lru;
- ent->lru.prev = delta_base_cache_lru.prev;
- delta_base_cache_lru.prev->next = &ent->lru;
- delta_base_cache_lru.prev = &ent->lru;
+ ent->lru.next = ctx->delta_base_cache_lru;
+ ent->lru.prev = ctx->delta_base_cache_lru->prev;
+ ctx->delta_base_cache_lru->prev->next = &ent->lru;
+ ctx->delta_base_cache_lru->prev = &ent->lru;
}
static void *read_object(const unsigned char *sha1, enum object_type *type,
@@ -2021,14 +2056,14 @@ int is_pack_valid(struct packed_git *p)
static int find_pack_entry(const unsigned char *sha1, struct pack_entry *e)
{
- static struct packed_git *last_found = (void *)1;
struct packed_git *p;
off_t offset;
+ struct pack_context *ctx = get_thread_pack_context();
prepare_packed_git();
if (!packed_git)
return 0;
- p = (last_found == (void *)1) ? packed_git : last_found;
+ p = (ctx->last_found == (void *)1) ? packed_git : ctx->last_found;
do {
if (p->num_bad_objects) {
@@ -2055,16 +2090,16 @@ static int find_pack_entry(const unsigned char *sha1, struct pack_entry *e)
e->offset = offset;
e->p = p;
hashcpy(e->sha1, sha1);
- last_found = p;
+ ctx->last_found = p;
return 1;
}
next:
- if (p == last_found)
+ if (p == ctx->last_found)
p = packed_git;
else
p = p->next;
- if (p == last_found)
+ if (p == ctx->last_found)
p = p->next;
} while (p);
return 0;
--
1.7.8.431.g2abf2
* [POC PATCH 5/5] sha1_file: make the pack machinery thread-safe
2011-12-09 8:39 [POC PATCH 0/5] Threaded loose object and pack access Thomas Rast
` (3 preceding siblings ...)
2011-12-09 8:39 ` [POC PATCH 4/5] sha1_file: stuff various pack reading variables into a struct Thomas Rast
@ 2011-12-09 8:39 ` Thomas Rast
2012-04-09 14:43 ` Nguyen Thai Ngoc Duy
2011-12-09 8:45 ` [POC PATCH 0/5] Threaded loose object and pack access Thomas Rast
2011-12-10 15:51 ` Nguyen Thai Ngoc Duy
6 siblings, 1 reply; 11+ messages in thread
From: Thomas Rast @ 2011-12-09 8:39 UTC (permalink / raw)
To: git; +Cc: René Scharfe, Junio C Hamano, Eric Herman
More precisely speaking, this pushes the locking down from
read_object() into bits of the pack machinery that cannot (yet) run in
parallel.
There are several hacks here:
a) prepare_packed_git() must be called before any parallel accesses
happen. It now unconditionally opens and maps all index files.
b) similarly, prepare_replace_object() must be called before any
parallel read_sha1_file() happens
This simplification lets us avoid locking around the index accesses
entirely; locking is then mainly needed for open_packed_git(),
[un]use_pack(), and the like.
The ultimate goal would of course be to let at least _some_ pack
accesses happen without any locking whatsoever. But grep already
benefits from it with a nice speed boost on non-worktree greps.
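The "do the non-thread-safe setup once, before any threads start" discipline
behind hacks (a) and (b) boils down to this shape (toy example, not git
code):

```c
#include <assert.h>

static int table[1024];
static int prepared;
static int use_threads;	/* stand-in for the global flag */

/* Mutates shared state, so it must run before worker threads exist
 * (analogous to prepare_packed_git()/prepare_replace_object()). */
void prepare_table(void)
{
	int i;

	assert(!use_threads);	/* mirrors the patch's assert(!use_threads) */
	if (prepared)
		return;
	for (i = 0; i < 1024; i++)
		table[i] = i * i;
	prepared = 1;
}

/* After preparation the data is read-only, so any number of threads
 * may call this concurrently without a lock. */
int lookup(int i)
{
	return table[i];
}
```

The assert turns "caller forgot to prepare before threading" from a data
race into an immediate, reproducible failure.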
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
---
builtin/grep.c | 9 ++++++
cache.h | 1 +
replace_object.c | 5 ++-
sha1_file.c | 81 +++++++++++++++++++++++++++++++++++++++++++++++------
thread-utils.c | 9 ++++--
thread-utils.h | 3 +-
6 files changed, 93 insertions(+), 15 deletions(-)
diff --git a/builtin/grep.c b/builtin/grep.c
index 6c5bdfa..212497d 100644
--- a/builtin/grep.c
+++ b/builtin/grep.c
@@ -980,6 +980,15 @@ int cmd_grep(int argc, const char **argv, const char *prefix)
if (opt.pre_context || opt.post_context || opt.file_break ||
opt.funcbody)
skip_first_line = 1;
+ /*
+ * This does the non-threadsafe work early. FIXME:
+ * grep shouldn't have to know about this mess.
+ */
+ use_threads = 0;
+ prepare_replace_object();
+ prepare_packed_git();
+ use_threads = 1;
+
start_threads(&opt);
}
#endif
diff --git a/cache.h b/cache.h
index 8c98d05..379dd44 100644
--- a/cache.h
+++ b/cache.h
@@ -764,6 +764,7 @@ static inline const unsigned char *lookup_replace_object(const unsigned char *sh
return sha1;
return do_lookup_replace_object(sha1);
}
+extern void prepare_replace_object(void);
/* Read and unpack a sha1 file into memory, write memory to a sha1 file */
extern int sha1_object_info(const unsigned char *, unsigned long *);
diff --git a/replace_object.c b/replace_object.c
index d0b1548..b303392 100644
--- a/replace_object.c
+++ b/replace_object.c
@@ -2,6 +2,7 @@
#include "sha1-lookup.h"
#include "refs.h"
#include "commit.h"
+#include "thread-utils.h"
static struct replace_object {
unsigned char sha1[2][20];
@@ -76,13 +77,15 @@ static int register_replace_ref(const char *refname,
return 0;
}
-static void prepare_replace_object(void)
+void prepare_replace_object(void)
{
static int replace_object_prepared;
if (replace_object_prepared)
return;
+ assert(!use_threads);
+
for_each_replace_ref(register_replace_ref, NULL);
replace_object_prepared = 1;
if (!replace_object_nr)
diff --git a/sha1_file.c b/sha1_file.c
index 7c367f9..b61692e 100644
--- a/sha1_file.c
+++ b/sha1_file.c
@@ -429,7 +429,8 @@ void prepare_alt_odb(void)
static int has_loose_object_local(const unsigned char *sha1)
{
- char *name = sha1_file_name(sha1);
+ char name[PATH_MAX];
+ sha1_file_name_buf(name, sha1);
return !access(name, F_OK);
}
@@ -650,9 +651,12 @@ static int unuse_one_window(struct packed_git *current, int keep_fd)
void release_pack_memory(size_t need, int fd)
{
- size_t cur = pack_mapped;
+ size_t cur;
+ lock_if_threaded(&pack_access_mutex);
+ cur = pack_mapped;
while (need >= (cur - pack_mapped) && unuse_one_window(NULL, fd))
; /* nothing */
+ unlock_if_threaded(&pack_access_mutex);
}
void *xmmap(void *start, size_t length,
@@ -689,9 +693,12 @@ void close_pack_windows(struct packed_git *p)
void unuse_pack(struct pack_window **w_cursor)
{
struct pack_window *w = *w_cursor;
+
if (w) {
+ lock_if_threaded(&pack_access_mutex);
w->inuse_cnt--;
*w_cursor = NULL;
+ unlock_if_threaded(&pack_access_mutex);
}
}
@@ -712,10 +719,13 @@ void close_pack_index(struct packed_git *p)
* must subsist at this point. If ever objects from this pack are requested
* again, the new version of the pack will be reinitialized through
* reprepare_packed_git().
+ *
+ * NOT THREAD-SAFE
*/
void free_pack_by_name(const char *pack_name)
{
struct packed_git *p, **pp = &packed_git;
+ assert(!use_threads);
while (*pp) {
p = *pp;
@@ -821,13 +831,33 @@ static int open_packed_git_1(struct packed_git *p)
static int open_packed_git(struct packed_git *p)
{
- if (!open_packed_git_1(p))
+ lock_if_threaded(&pack_access_mutex);
+ /*
+ * is_pack_valid() took the easy route and did not
+ * lock. This is probably okay; if the pack was
+ * *ever* open, it was valid unless another process is
+ * actively trying to corrupt it, in which case:
+ * meh.
+ *
+ * However, a concurrent open_packed_git() may already have
+ * opened it before we get here. So we test again in a locked
+ * section. If it beat us to it, then we have no work left to
+ * do.
+ */
+ if (p->pack_fd != -1) {
+ unlock_if_threaded(&pack_access_mutex);
return 0;
+ }
+ if (!open_packed_git_1(p)) {
+ unlock_if_threaded(&pack_access_mutex);
+ return 0;
+ }
if (p->pack_fd != -1) {
close(p->pack_fd);
pack_open_fds--;
p->pack_fd = -1;
}
+ unlock_if_threaded(&pack_access_mutex);
return -1;
}
@@ -858,6 +888,9 @@ unsigned char *use_pack(struct packed_git *p,
*/
if (!p->pack_size && p->pack_fd == -1 && open_packed_git(p))
die("packfile %s cannot be accessed", p->pack_name);
+
+ lock_if_threaded(&pack_access_mutex);
+
if (offset > (p->pack_size - 20))
die("offset beyond end of packfile (truncated pack?)");
@@ -916,6 +949,9 @@ unsigned char *use_pack(struct packed_git *p,
offset -= win->offset;
if (left)
*left = win->len - xsize_t(offset);
+
+ unlock_if_threaded(&pack_access_mutex);
+
return win->base + offset;
}
@@ -1044,6 +1080,7 @@ static void prepare_packed_git_one(char *objdir, int local)
if (!p)
continue;
install_packed_git(p);
+ open_pack_index(p);
}
closedir(dir);
}
@@ -1102,6 +1139,12 @@ static void rearrange_packed_git(void)
free(ary);
}
+/*
+ * NOT THREAD-SAFE
+ *
+ * However, it's ok if you run this early, before starting threads,
+ * and then use the pack machinery from threads.
+ */
static int prepare_packed_git_run_once = 0;
void prepare_packed_git(void)
{
@@ -1109,6 +1152,7 @@ void prepare_packed_git(void)
if (prepare_packed_git_run_once)
return;
+ assert (!use_threads);
prepare_packed_git_one(get_object_directory(), 1);
prepare_alt_odb();
for (alt = alt_odb_list; alt; alt = alt->next) {
@@ -1180,8 +1224,10 @@ static int git_open_noatime(const char *name)
static int open_sha1_file(const unsigned char *sha1)
{
int fd;
- char *name = sha1_file_name(sha1);
+ char namebuf[PATH_MAX];
+ char *name = namebuf;
struct alternate_object_database *alt;
+ sha1_file_name_buf(name, sha1);
fd = git_open_noatime(name);
if (fd >= 0)
@@ -1698,7 +1744,22 @@ static struct pack_context *pack_context_alloc(void)
return ctx;
}
+#ifdef NO_PTHREADS
#define get_thread_pack_context() (&main_pack_context)
+#else
+static struct pack_context *get_thread_pack_context(void)
+{
+ struct pack_context *ctx;
+ if (!use_threads)
+ return &main_pack_context;
+ ctx = pthread_getspecific(pack_context_key);
+ if (ctx)
+ return ctx;
+ ctx = pack_context_alloc();
+ pthread_setspecific(pack_context_key, ctx);
+ return ctx;
+}
+#endif
static unsigned long pack_entry_hash(struct packed_git *p, off_t base_offset)
{
@@ -2219,6 +2280,10 @@ static void *read_packed_sha1(const unsigned char *sha1,
return data;
}
+/*
+ * WARNING: must never be called concurrently with read_sha1_file and
+ * friends! They do lookups in the cached_objects without locking.
+ */
int pretend_sha1_file(void *buf, unsigned long len, enum object_type type,
unsigned char *sha1)
{
@@ -2280,19 +2345,15 @@ void *read_sha1_file_extended(const unsigned char *sha1,
unsigned flag)
{
void *data;
- char *path;
const struct packed_git *p;
const unsigned char *repl;
- lock_if_threaded(&read_sha1_mutex);
-
repl = (flag & READ_SHA1_FILE_REPLACE)
? lookup_replace_object(sha1) : sha1;
errno = 0;
data = read_object(repl, type, size);
if (data) {
- unlock_if_threaded(&read_sha1_mutex);
return data;
}
@@ -2305,7 +2366,8 @@ void *read_sha1_file_extended(const unsigned char *sha1,
sha1_to_hex(repl), sha1_to_hex(sha1));
if (has_loose_object(repl)) {
- path = sha1_file_name(sha1);
+ char path[PATH_MAX];
+ sha1_file_name_buf(path, sha1);
die("loose object %s (stored in %s) is corrupt",
sha1_to_hex(repl), path);
}
@@ -2314,7 +2376,6 @@ void *read_sha1_file_extended(const unsigned char *sha1,
die("packed object %s (stored in %s) is corrupt",
sha1_to_hex(repl), p->pack_name);
- unlock_if_threaded(&read_sha1_mutex);
return NULL;
}
diff --git a/thread-utils.c b/thread-utils.c
index 70af3f9..0da2b65 100644
--- a/thread-utils.c
+++ b/thread-utils.c
@@ -62,15 +62,18 @@ int init_recursive_mutex(pthread_mutex_t *m)
return ret;
}
-pthread_mutex_t read_sha1_mutex;
+pthread_mutex_t pack_access_mutex;
+pthread_key_t pack_context_key;
void init_subsystem_locks(void)
{
- pthread_mutex_init(&read_sha1_mutex, NULL);
+ init_recursive_mutex(&pack_access_mutex);
+ pthread_key_create(&pack_context_key, NULL);
}
void destroy_subsystem_locks(void)
{
- pthread_mutex_destroy(&read_sha1_mutex);
+ pthread_mutex_destroy(&pack_access_mutex);
+ pthread_key_delete(pack_context_key);
}
#ifndef NO_PTHREADS
diff --git a/thread-utils.h b/thread-utils.h
index 3906753..7d3cc0a 100644
--- a/thread-utils.h
+++ b/thread-utils.h
@@ -17,7 +17,8 @@
extern void lock_if_threaded(pthread_mutex_t*);
extern void unlock_if_threaded(pthread_mutex_t*);
-extern pthread_mutex_t read_sha1_mutex;
+extern pthread_mutex_t pack_access_mutex;
+extern pthread_key_t pack_context_key;
extern void init_subsystem_locks(void);
extern void destroy_subsystem_locks(void);
--
1.7.8.431.g2abf2
* Re: [POC PATCH 0/5] Threaded loose object and pack access
2011-12-09 8:39 [POC PATCH 0/5] Threaded loose object and pack access Thomas Rast
` (4 preceding siblings ...)
2011-12-09 8:39 ` [POC PATCH 5/5] sha1_file: make the pack machinery thread-safe Thomas Rast
@ 2011-12-09 8:45 ` Thomas Rast
2011-12-10 15:51 ` Nguyen Thai Ngoc Duy
6 siblings, 0 replies; 11+ messages in thread
From: Thomas Rast @ 2011-12-09 8:45 UTC (permalink / raw)
To: git; +Cc: René Scharfe, Junio C Hamano, Eric Herman, Jeff King
Thomas Rast wrote:
> Well, just to make sure we're all left in a confused mess of partly
> conflicting patches, here's another angle on the same thing:
Bleh, obviously that was intended to be a reply to
http://thread.gmane.org/gmane.comp.version-control.git/185932/focus=186231
and CC'd to Peff.
Sorry for the mess. I'll go sulking with a few cups of coffee.
--
Thomas Rast
trast@{inf,student}.ethz.ch
* Re: [POC PATCH 0/5] Threaded loose object and pack access
2011-12-09 8:39 [POC PATCH 0/5] Threaded loose object and pack access Thomas Rast
` (5 preceding siblings ...)
2011-12-09 8:45 ` [POC PATCH 0/5] Threaded loose object and pack access Thomas Rast
@ 2011-12-10 15:51 ` Nguyen Thai Ngoc Duy
6 siblings, 0 replies; 11+ messages in thread
From: Nguyen Thai Ngoc Duy @ 2011-12-10 15:51 UTC (permalink / raw)
To: Thomas Rast; +Cc: git, René Scharfe, Junio C Hamano, Eric Herman
On Fri, Dec 9, 2011 at 3:39 PM, Thomas Rast <trast@student.ethz.ch> wrote:
> Well, just to make sure we're all left in a confused mess of partly
> conflicting patches, here's another angle on the same thing:
>
> Jeff King wrote:
>> Wow, that's horrible. Leaving aside the parallelism, it's just terrible
>> that reading from the cache is 20 times slower than the worktree. I get
>> similar results on my quad-core machine.
>
> By poking around in sha1_file.c I got that down to about 10. It's not
> great yet, but it seems a start.
>
> The goal would be to improve it to the point where a patch lookup that
> already has all relevant packs open and windows mapped can proceed
> without locking. I'm not sure that's doable short of duplicating the
> whole pack state (including fds and windows) across threads, but I'll
> give it some more thought before going that route.
Another potential user of parallel pack access is fsck. Although fsck's
access pattern may differ from grep's, fsck would open and read through
all packs.
--
Duy
* Re: [POC PATCH 5/5] sha1_file: make the pack machinery thread-safe
2011-12-09 8:39 ` [POC PATCH 5/5] sha1_file: make the pack machinery thread-safe Thomas Rast
@ 2012-04-09 14:43 ` Nguyen Thai Ngoc Duy
2012-04-10 12:29 ` Thomas Rast
0 siblings, 1 reply; 11+ messages in thread
From: Nguyen Thai Ngoc Duy @ 2012-04-09 14:43 UTC (permalink / raw)
To: Thomas Rast; +Cc: git, René Scharfe, Junio C Hamano, Eric Herman
On Fri, Dec 9, 2011 at 3:39 PM, Thomas Rast <trast@student.ethz.ch> wrote:
> More precisely speaking, this pushes the locking down from
> read_object() into bits of the pack machinery that cannot (yet) run in
> parallel.
>
> There are several hacks here:
>
> a) prepare_packed_git() must be called before any parallel accesses
> happen. It now unconditionally opens and maps all index files.
>
> b) similarly, prepare_replace_object() must be called before any
> parallel read_sha1_file() happens
>
> This simplification lets us avoid locking outright to guard the index
> accesses; locking is then mainly required for open_packed_git(),
> [un]use_pack(), and such.
>
> The ultimate goal would of course be to let at least _some_ pack
> accesses happen without any locking whatsoever. But grep already
> benefits from it with a nice speed boost on non-worktree greps.
(I'm running into a multithreaded pack access problem in rev-list...)
Why not move the global pointer "struct packed_git *packed_git" into
"struct pack_context" and avoid locking entirely? Resource usage would
be roughly as if we ran <n> different processes, I think, which is not
too bad. We may want to share a few static pack_* variables, such as
pack_open_fds, to avoid hitting system limits too fast.
--
Duy
* Re: [POC PATCH 5/5] sha1_file: make the pack machinery thread-safe
2012-04-09 14:43 ` Nguyen Thai Ngoc Duy
@ 2012-04-10 12:29 ` Thomas Rast
2012-04-10 13:39 ` Nguyen Thai Ngoc Duy
0 siblings, 1 reply; 11+ messages in thread
From: Thomas Rast @ 2012-04-10 12:29 UTC (permalink / raw)
To: Nguyen Thai Ngoc Duy
Cc: Thomas Rast, git, René Scharfe, Junio C Hamano, Eric Herman
Nguyen Thai Ngoc Duy <pclouds@gmail.com> writes:
> On Fri, Dec 9, 2011 at 3:39 PM, Thomas Rast <trast@student.ethz.ch> wrote:
>> More precisely speaking, this pushes the locking down from
>> read_object() into bits of the pack machinery that cannot (yet) run in
>> parallel.
>>
>> There are several hacks here:
>>
>> a) prepare_packed_git() must be called before any parallel accesses
>> happen. It now unconditionally opens and maps all index files.
>>
>> b) similarly, prepare_replace_object() must be called before any
>> parallel read_sha1_file() happens
>>
>> This simplification lets us avoid locking outright to guard the index
>> accesses; locking is then mainly required for open_packed_git(),
>> [un]use_pack(), and such.
>>
>> The ultimate goal would of course be to let at least _some_ pack
>> accesses happen without any locking whatsoever. But grep already
>> benefits from it with a nice speed boost on non-worktree greps.
>
> (I'm running into multithread pack access problem in rev-list..)
>
> Why not put the global pointer "struct packed_git *packed_git" to
> "struct pack_context" and avoid locking entirely? Resource usage is
> like we run <n> different processes, I think, which is not too bad. We
> may want to share a few static pack_* variables such as
> pack_open_fds.. to avoid hitting system limits too fast.
I was hesitating to do that because I think it's not the best solution
yet. At least for 64bit systems, I thought of doing some or all of:
* opening/mapping the pack indexes immediately to avoid locking there
(perhaps the POC already does this, I haven't looked again). If you
have many packs this isn't cheap because the index must be verified.
* mapping small packs immediately
* mapping "the" big pack immediately (many repos will have a huge pack
from the initial clone)
Put another way, my current concern is that on 64bit systems it's
incredibly easy to share (who cares about a few GBs of mmap()?), whereas
on 32bit systems it probably matters much more, but there we also suffer
more from not sharing.
Am I making any sense?
--
Thomas Rast
trast@{inf,student}.ethz.ch
* Re: [POC PATCH 5/5] sha1_file: make the pack machinery thread-safe
2012-04-10 12:29 ` Thomas Rast
@ 2012-04-10 13:39 ` Nguyen Thai Ngoc Duy
0 siblings, 0 replies; 11+ messages in thread
From: Nguyen Thai Ngoc Duy @ 2012-04-10 13:39 UTC (permalink / raw)
To: Thomas Rast
Cc: Thomas Rast, git, René Scharfe, Junio C Hamano, Eric Herman
On Tue, Apr 10, 2012 at 7:29 PM, Thomas Rast <trast@inf.ethz.ch> wrote:
> Nguyen Thai Ngoc Duy <pclouds@gmail.com> writes:
>
>> On Fri, Dec 9, 2011 at 3:39 PM, Thomas Rast <trast@student.ethz.ch> wrote:
>>> More precisely speaking, this pushes the locking down from
>>> read_object() into bits of the pack machinery that cannot (yet) run in
>>> parallel.
>>>
>>> There are several hacks here:
>>>
>>> a) prepare_packed_git() must be called before any parallel accesses
>>> happen. It now unconditionally opens and maps all index files.
>>>
>>> b) similarly, prepare_replace_object() must be called before any
>>> parallel read_sha1_file() happens
>>>
>>> This simplification lets us avoid locking outright to guard the index
>>> accesses; locking is then mainly required for open_packed_git(),
>>> [un]use_pack(), and such.
>>>
>>> The ultimate goal would of course be to let at least _some_ pack
>>> accesses happen without any locking whatsoever. But grep already
>>> benefits from it with a nice speed boost on non-worktree greps.
>>
>> (I'm running into multithread pack access problem in rev-list..)
>>
>> Why not put the global pointer "struct packed_git *packed_git" to
>> "struct pack_context" and avoid locking entirely? Resource usage is
>> like we run <n> different processes, I think, which is not too bad. We
>> may want to share a few static pack_* variables such as
>> pack_open_fds.. to avoid hitting system limits too fast.
>
> I was hesitating to do that because I think it's not the best solution
> yet. At least for 64bit systems, I thought of doing some or all of:
>
> * opening/mapping the pack indexes immediately to avoid locking there
> (perhaps the POC already does this, I haven't looked again). If you
> have many packs this isn't cheap because the index must be verified.
Sharing mmapped pack indexes makes sense. We mmap the full index
there, so it eats address space on 32-bit systems (but still not a
lot; the linux-2.6 pack index is about 60MB).
I tried but could not find the index-verifying code (i.e.
recalculating the SHA-1 and matching it against the stored one)
anywhere, so I suppose opening packs and indexes is cheap.
> * mapping small packs immediately
We would need to partition the file-handle space to avoid running out
of file handles.
> * mapping "the" big pack immediately (many repos will have a huge pack
> from the initial clone)
We have sliding pack windows exactly for that: accessing >4GB packs on
32-bit systems. So address space should not be an issue here.
> Put another way, my current concern is that on 64bit systems it's
> incredibly easy to share (who cares about a few GBs of mmap()?), whereas
> on 32bit systems it probably matters much more, but there we also suffer
> more from not sharing.
Having said all that, lock-free pack access does not work for me yet.
I keep getting crashes deep in cache_or_unpack_entry :(
--
Duy