
packfile: fix race condition on unpack_entry()

The third phase of unpack_entry() performs the following sequence in a
loop, until all the deltas enumerated in phase one are applied and the
entry is fully reconstructed:

1. Add the current base entry to the delta base cache
2. Unpack the next delta
3. Patch the unpacked delta on top of the base

When the optional object reading lock is enabled, the above steps will
be performed while holding the lock. However, step 2 momentarily
releases it so that inflation can be performed in parallel for
increased performance. Because the `base` buffer inserted in the cache
at step 1 is not duplicated, another thread can potentially free() it
while the lock is released at step 2 (e.g. when there is no space left
in the cache to insert another entry). In this case, the later attempt
to dereference `base` at step 3 will cause a segmentation fault. This
problem was observed during a multithreaded git-grep execution on a
repository with large objects.
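
The following minimal sketch reproduces that interleaving outside of
Git (the single-slot cache, cache_insert() and thread_b() are invented
for illustration; the real delta base cache and locking live in
packfile.c). Thread A publishes its buffer, drops the lock for
"inflation", and thread B evicts and free()s the buffer before A's
final use; under AddressSanitizer the printf() is reported as a
heap-use-after-free:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static char *cached; /* single-slot stand-in for the delta base cache */

static void cache_insert(char *buf)
{
    free(cached); /* evicting frees the previous entry... */
    cached = buf; /* ...and the cache takes ownership of `buf` */
}

static void *thread_b(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    cache_insert(strdup("other entry")); /* evicts and frees A's base */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t b;
    char *base = strdup("base object");

    pthread_mutex_lock(&lock);
    cache_insert(base); /* step 1: publish `base` without duplicating it */
    pthread_mutex_unlock(&lock); /* step 2: lock dropped during inflation */

    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(&b, NULL); /* B runs while the lock is released */

    pthread_mutex_lock(&lock);
    printf("%s\n", base); /* step 3: use-after-free, may crash */
    pthread_mutex_unlock(&lock);

    free(cached);
    return 0;
}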

To fix the race condition (and later segmentation fault), let's reorder
the aforementioned steps so that `base` is only added to the cache at
the end. This will prevent the buffer from being released by another
thread while it is still in use. An alternative solution which would not
require the reordering would be to duplicate `base` before inserting it
in the cache. However, as Phil Hord mentioned, memcpy()'ing large bases
can negatively affect performance: in his experiments, this alternative
approach slowed git-grep down by 10% to 20%.
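
For contrast, here is the same sketch with the ordering this patch
adopts (same invented helpers as before): the buffer stays private to
its thread across the unlocked window and is handed to the cache only
after its last use, so a concurrent eviction can no longer free memory
that is still needed:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static char *cached; /* single-slot stand-in, as in the previous sketch */

static void cache_insert(char *buf)
{
    free(cached);
    cached = buf;
}

static void *thread_b(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    cache_insert(strdup("other entry")); /* may evict, but never our base */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t b;
    char *base = strdup("base object");

    pthread_mutex_lock(&lock);
    pthread_mutex_unlock(&lock); /* lock is still dropped during inflation */

    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(&b, NULL); /* B can only evict entries we no longer use */

    pthread_mutex_lock(&lock);
    printf("%s\n", base); /* last use of `base`, still private to us */
    cache_insert(base); /* publish only after the final use */
    pthread_mutex_unlock(&lock);

    free(cached);
    return 0;
}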

Reported-by: Phil Hord <phil.hord@gmail.com>
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
commit 74b052f8c2 (parent 47ae905ffb)
Matheus Tavares, 2020-09-28 21:01:52 -03:00; committed by Junio C Hamano


packfile.c
@@ -1764,12 +1764,10 @@ void *unpack_entry(struct repository *r, struct packed_git *p, off_t obj_offset,
         void *external_base = NULL;
         unsigned long delta_size, base_size = size;
         int i;
+        off_t base_obj_offset = obj_offset;
 
         data = NULL;
 
-        if (base)
-            add_delta_base_cache(p, obj_offset, base, base_size, type);
-
         if (!base) {
             /*
              * We're probably in deep shit, but let's try to fetch
@@ -1807,24 +1805,33 @@ void *unpack_entry(struct repository *r, struct packed_git *p, off_t obj_offset,
               "at offset %"PRIuMAX" from %s",
               (uintmax_t)curpos, p->pack_name);
             data = NULL;
-            free(external_base);
-            continue;
+        } else {
+            data = patch_delta(base, base_size, delta_data,
+                       delta_size, &size);
+
+            /*
+             * We could not apply the delta; warn the user, but
+             * keep going. Our failure will be noticed either in
+             * the next iteration of the loop, or if this is the
+             * final delta, in the caller when we return NULL.
+             * Those code paths will take care of making a more
+             * explicit warning and retrying with another copy of
+             * the object.
+             */
+            if (!data)
+                error("failed to apply delta");
         }
 
-        data = patch_delta(base, base_size,
-                   delta_data, delta_size,
-                   &size);
-
-        /*
-         * We could not apply the delta; warn the user, but keep going.
-         * Our failure will be noticed either in the next iteration of
-         * the loop, or if this is the final delta, in the caller when
-         * we return NULL. Those code paths will take care of making
-         * a more explicit warning and retrying with another copy of
-         * the object.
-         */
-        if (!data)
-            error("failed to apply delta");
+        /*
+         * We delay adding `base` to the cache until the end of the loop
+         * because unpack_compressed_entry() momentarily releases the
+         * obj_read_mutex, giving another thread the chance to access
+         * the cache. Therefore, if `base` was already there, this other
+         * thread could free() it (e.g. to make space for another entry)
+         * before we are done using it.
+         */
+        if (!external_base)
+            add_delta_base_cache(p, base_obj_offset, base, base_size, type);
 
         free(delta_data);
         free(external_base);