/*
 * csum-file.c
 *
 * Copyright (C) 2005 Linus Torvalds
 *
 * Simple file write infrastructure for writing SHA1-summed
 * files. Useful when you write a file that you want to be
 * able to verify hasn't been messed with afterwards.
 */
#include "cache.h"
#include "csum-file.h"
static void sha1flush(struct sha1file *f, unsigned int count)
{
	void *buf = f->buffer;

	for (;;) {
		int ret = xwrite(f->fd, buf, count);
		if (ret > 0) {
			buf = (char *) buf + ret;
			count -= ret;
			if (count)
				continue;
			return;
		}
		if (!ret)
			die("sha1 file '%s' write error. Out of disk space", f->name);
		die("sha1 file '%s' write error (%s)", f->name, strerror(errno));
	}
}
int sha1close(struct sha1file *f, unsigned char *result, int update)
{
	unsigned offset = f->offset;
	if (offset) {
		SHA1_Update(&f->ctx, f->buffer, offset);
		sha1flush(f, offset);
	}
	SHA1_Final(f->buffer, &f->ctx);
	if (result)
		hashcpy(result, f->buffer);
	if (update)
		sha1flush(f, 20);
	if (close(f->fd))
		die("%s: sha1 file error on close (%s)", f->name, strerror(errno));
	free(f);
	return 0;
}
/*
 * Compute a CRC32 for each object as stored in a pack.
 *
 * The most important optimization for repacking performance is the
 * ability to reuse data from a previous pack as-is, bypassing any
 * delta or even SHA1 computation, by simply copying the raw data from
 * one pack directly into another.
 *
 * The problem is that any data corruption within a copied object would
 * go unnoticed: the new (repacked) pack would be self-consistent with
 * its own checksum despite containing a corrupted object.  This is a
 * real issue that has already happened at least once in the past.
 *
 * In an attempt to prevent this, we validate the copied data by
 * inflating it and making sure zlib signals no error.  But this is
 * still not perfect: a significant portion of a pack's content is made
 * up of object headers and references to delta base objects, which are
 * not deflated and therefore not validated when repacking, so pack
 * data reuse is still not as safe as it could be.
 *
 * A full SHA1 validation could of course be performed, but that
 * implies full data inflating and delta replaying, which is extremely
 * costly -- the very cost the data reuse optimization was designed to
 * avoid in the first place.
 *
 * So the best solution is simply to store a CRC32 of the raw pack data
 * for each object in the pack index.  That way any object in a pack
 * can be validated before being copied as-is into another pack,
 * including its header and any other non-deflated data.
 *
 * Why CRC32 instead of a faster checksum like Adler32?  Quoting
 * Wikipedia:
 *
 *   Jonathan Stone discovered in 2001 that Adler-32 has a weakness
 *   for very short messages.  He wrote "Briefly, the problem is that,
 *   for very short packets, Adler32 is guaranteed to give poor
 *   coverage of the available bits.  Don't take my word for it, ask
 *   Mark Adler. :-)"  The problem is that sum A does not wrap for
 *   short messages.  The maximum value of A for a 128-byte message is
 *   32640, which is below the value 65521 used by the modulo
 *   operation.  An extended explanation can be found in RFC 3309,
 *   which mandates the use of CRC32 instead of Adler-32 for SCTP,
 *   the Stream Control Transmission Protocol.
 *
 * In the context of a git pack we have lots of small objects,
 * especially deltas, which are likely to be in a size range for which
 * Adler32 is deemed not to be sufficient.  Another advantage of CRC32
 * is the possibility of recovery from certain types of small
 * corruption, such as single-bit errors, which are the most probable
 * type of corruption.
 *
 * This code computes the CRC32 of each object written to a pack
 * within pack-objects.  It is not written to the index yet, and it is
 * not yet validated when reusing pack data either.
 *
 * Signed-off-by: Nicolas Pitre <nico@cam.org>
 * Signed-off-by: Junio C Hamano <junkio@cox.net>
 */
int sha1write(struct sha1file *f, void *buf, unsigned int count)
{
	if (f->do_crc)
		f->crc32 = crc32(f->crc32, buf, count);
	while (count) {
		unsigned offset = f->offset;
		unsigned left = sizeof(f->buffer) - offset;
		unsigned nr = count > left ? left : count;

		memcpy(f->buffer + offset, buf, nr);
		count -= nr;
		offset += nr;
		buf = (char *) buf + nr;
		left -= nr;
		if (!left) {
			SHA1_Update(&f->ctx, f->buffer, offset);
			sha1flush(f, offset);
			offset = 0;
		}
		f->offset = offset;
	}
	return 0;
}
struct sha1file *sha1create(const char *fmt, ...)
{
	struct sha1file *f;
	unsigned len;
	va_list arg;
	int fd;

	f = xmalloc(sizeof(*f));

	va_start(arg, fmt);
	len = vsnprintf(f->name, sizeof(f->name), fmt, arg);
	va_end(arg);
	if (len >= PATH_MAX)
		die("you wascally wabbit, you");
	f->namelen = len;

	fd = open(f->name, O_CREAT | O_EXCL | O_WRONLY, 0666);
	if (fd < 0)
		die("unable to open %s (%s)", f->name, strerror(errno));
	f->fd = fd;
	f->error = 0;
	f->offset = 0;
	f->do_crc = 0;
	SHA1_Init(&f->ctx);
	return f;
}
struct sha1file *sha1fd(int fd, const char *name)
{
	struct sha1file *f;
	unsigned len;

	f = xmalloc(sizeof(*f));

	len = strlen(name);
	if (len >= PATH_MAX)
		die("you wascally wabbit, you");
	f->namelen = len;
	memcpy(f->name, name, len+1);

	f->fd = fd;
	f->error = 0;
	f->offset = 0;
	f->do_crc = 0;
	SHA1_Init(&f->ctx);
	return f;
}
/*
 * Custom compression levels for objects and packs:
 *
 * The config variables pack.compression and core.loosecompression,
 * and the switch --compression=level to pack-objects.
 *
 * Loose objects are compressed using core.loosecompression if set,
 * else core.compression if set, else Z_BEST_SPEED.
 *
 * Packed objects are compressed using --compression=level if given,
 * else pack.compression if set, else core.compression if set, else
 * Z_DEFAULT_COMPRESSION.  This is the "pack compression level".
 *
 * Loose objects added to a pack undeltified are recompressed to the
 * pack compression level if it differs from the current loose
 * compression level by the preceding rules, or if the loose object
 * was written while core.legacyheaders = true.  Newly deltified loose
 * objects are always compressed to the current pack compression
 * level.
 *
 * Previously packed objects added to a pack are recompressed to the
 * current pack compression level exactly when their deltification
 * status changes, since the previous pack data cannot be reused.
 *
 * In either case, the --no-reuse-object switch always forces
 * recompression to the current pack compression level, instead of
 * assuming the pack compression level hasn't changed and reusing pack
 * data when possible.
 *
 * This applies on top of the following patches from Nicolas Pitre:
 *   [PATCH] allow for undeltified objects not to be reused
 *   [PATCH] make "repack -f" imply "pack-objects --no-reuse-object"
 *
 * Signed-off-by: Dana L. How <danahow@gmail.com>
 * Signed-off-by: Junio C Hamano <junkio@cox.net>
 */
int sha1write_compressed(struct sha1file *f, void *in, unsigned int size, int level)
{
	z_stream stream;
	unsigned long maxsize;
	void *out;

	memset(&stream, 0, sizeof(stream));
	deflateInit(&stream, level);
	maxsize = deflateBound(&stream, size);
	out = xmalloc(maxsize);

	/* Compress it */
	stream.next_in = in;
	stream.avail_in = size;

	stream.next_out = out;
	stream.avail_out = maxsize;

	while (deflate(&stream, Z_FINISH) == Z_OK)
		/* nothing */;
	deflateEnd(&stream);

	size = stream.total_out;
	sha1write(f, out, size);
	free(out);
	return size;
}
void crc32_begin(struct sha1file *f)
{
	f->crc32 = crc32(0, Z_NULL, 0);
	f->do_crc = 1;
}

uint32_t crc32_end(struct sha1file *f)
{
	f->do_crc = 0;
	return f->crc32;
}