mirror of https://github.com/git/git.git synced 2024-10-31 06:17:56 +01:00
Commit graph

231 commits

Author SHA1 Message Date
Nicolas Pitre
0ef95f72f8 pack-objects: free preferred base memory after usage
When adding objects for preferred delta base, the content from tree
objects leading to given paths is kept in a cache. This has the
potential to grow significantly, especially with large directories as
the whole tree object content is loaded in memory, even if in practice
the number of those objects is limited to the 256 cache entries plus the
$window root tree objects.  Still, it can't hurt to free that memory once
object enumeration is done and before more memory is needed for the delta
search.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-09-05 22:27:08 -07:00
Linus Torvalds
48fb7deb5b Fix big left-shifts of unsigned char
Shifting 'unsigned char' or 'unsigned short' left can result in sign
extension errors, since the C integer promotion rules mean that the
unsigned char/short will get implicitly promoted to a signed 'int' due to
the shift (or due to other operations).

This normally doesn't matter, but if you shift things up sufficiently, it
will now set the sign bit in 'int', and a subsequent cast to a bigger type
(eg 'long' or 'unsigned long') will now sign-extend the value despite the
original expression being unsigned.

One example of this would be something like

	unsigned long size;
	unsigned char c;

	size += c << 24;

where despite all the variables being unsigned, 'c << 24' ends up being a
signed entity, and will get sign-extended when the addition is then done in
an 'unsigned long' type.
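
As an illustration, here is a minimal standalone sketch (not code from git
itself), assuming an LP64 platform where 'long' is 64 bits wide:

	#include <stdio.h>

	int main(void)
	{
		unsigned long size = 0;
		unsigned char c = 0x80;

		/* Buggy: 'c' is promoted to a signed 'int' before the shift,
		 * so 'c << 24' in practice ends up with the sign bit set, and
		 * the conversion to the 64-bit 'unsigned long' sign-extends. */
		size += c << 24;
		printf("buggy: %lx\n", size);	/* ffffffff80000000 */

		/* Fixed: keep the expression unsigned before shifting. */
		size = 0;
		size += (unsigned long)c << 24;
		printf("fixed: %lx\n", size);	/* 80000000 */
		return 0;
	}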

Since git uses 'unsigned char' pointers extensively, we actually have this
bug in a couple of places.

I may have missed some, but this is the result of looking at

	git grep '[^0-9 	][ 	]*<<[ 	][a-z]' -- '*.c' '*.h'
	git grep '<<[   ]*24'

which catches at least the common byte cases (shifting variables by a
variable amount, and shifting by 24 bits).

I also grepped for just 'unsigned char' variables in general, and
converted the ones that most obviously ended up getting implicitly cast
immediately anyway (eg hash_name(), encode_85()).

In addition to just avoiding 'unsigned char', this patch also tries to use
a common idiom for the delta header size thing. We had three different
variations on it: "& 0x7fUL" in one place (getting the sign extension
right), and "& ~0x80" and "& 0x7f" in two other places (not getting it
right). Apart from making them all just avoid using "unsigned char" at
all, I also unified them to then use a simple "& 0x7f".

I considered making a sparse extension which warns about doing implicit
casts from unsigned types to signed types, but it gets rather complex very
quickly, so this is just a hack.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-06-18 09:22:46 -07:00
Mike Ralphson
3ea3c215c0 Fix typos / spelling in comments
Signed-off-by: Mike Ralphson <mike@abacus.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-22 19:02:12 -07:00
Junio C Hamano
9824a388e5 Merge branch 'lt/pack-object-memuse'
* lt/pack-object-memuse:
  show_object(): push path_name() call further down
  process_{tree,blob}: show objects without buffering

Conflicts:
	builtin-pack-objects.c
	builtin-rev-list.c
	list-objects.c
	list-objects.h
	upload-pack.c
2009-04-18 14:46:17 -07:00
Linus Torvalds
cf2ab916af show_object(): push path_name() call further down
In particular, pushing the "path_name()" call _into_ the show() function
would seem to allow

 - more clarity into who "owns" the name (ie now when we free the name in
   the show_object callback, it's because we generated it ourselves by
   calling path_name())

 - not calling path_name() at all, either because we don't care about the
   name in the first place, or because we are actually happy walking the
   linked list of "struct name_path *" and the last component.

Now, I didn't do that latter optimization, because it would require some
more coding, but especially looking at "builtin-pack-objects.c", we really
don't even want the whole pathname, we really would be better off with the
list of path components.

Why? We use that name for two things:
 - add_preferred_base_object(), which actually _wants_ to traverse the
   path, and now does it by looking for '/' characters!
 - for 'name_hash()', which only cares about the last 16 characters of a
   name, so again, generating the full name seems to be just unnecessary
   work.

Anyway, so I didn't look any closer at those things, but it did convince
me that the "show_object()" calling convention was crazy, and we're
actually better off doing _less_ in list-objects.c, and giving people
access to the internal data structures so that they can decide whether
they want to generate a path-name or not.

This patch does that, and then for people who did use the name (even if
they might do something more clever in the future), it just does the
straightforward "name = path_name(path, component); .. free(name);" thing.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-12 17:28:31 -07:00
Linus Torvalds
8d2dfc49b1 process_{tree,blob}: show objects without buffering
Here's a less trivial thing, and slightly more dubious one.

I was looking at that "struct object_array objects", and wondering why we
do that. I have honestly totally forgotten. Why not just call the "show()"
function as we encounter the objects? Rather than add the objects to the
object_array, and then at the very end going through the array and doing a
'show' on all, just do things more incrementally.

Now, there are possible downsides to this:

 - the "buffer using object_array" _can_ in theory result in at least
   better I-cache usage (two tight loops rather than one more spread out
   one). I don't think this is a real issue, but in theory..

 - this _does_ change the order of the objects printed. Instead of doing a
   "process_tree(revs, commit->tree, &objects, NULL, "");" in the loop
   over the commits (which puts all the root trees _first_ in the object
   list), this patch just adds them to the list of pending objects, and
   then we'll traverse them in that order (and thus show each root tree
   object together with the objects we discover under it).

   I _think_ the new ordering actually makes more sense, but the object
   ordering is actually a subtle thing when it comes to packing
   efficiency, so any change in order is going to have implications for
   packing. Good or bad, I dunno.

 - There may be some reason why we did it that odd way with the object
   array, that I have simply forgotten.

Anyway, now that we don't buffer up the objects before showing them,
this may actually result in lower memory usage during that whole
traverse_commit_list() phase.

This is seriously not very deeply tested. It makes sense to me, it seems
to pass all the tests, it looks ok, but...

Does anybody remember why we did that "object_array" thing? It used to be
an "object_list" a long long time ago, but got changed into the array due
to better memory usage patterns (those linked lists of objects are
horrible from a memory allocation standpoint). But I wonder why we didn't
do this back then. Maybe there's a reason for it.

Or maybe there _used_ to be a reason, and no longer is.
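
A toy sketch of the difference in shape (nothing here is the real walker;
the point is only that the incremental version retains nothing between
objects):

	#include <stdio.h>

	typedef void (*show_fn)(int obj);	/* "objects" are just ints here */

	static void show(int obj)
	{
		printf("object %d\n", obj);
	}

	/* Old shape: buffer everything, then show at the very end. */
	static void walk_buffered(const int *objs, int nr, show_fn show_cb)
	{
		int buffer[16];		/* stand-in for the growing object_array */
		int i, nr_buffered = 0;

		for (i = 0; i < nr; i++)
			buffer[nr_buffered++] = objs[i];	/* peak memory ~ all objects */
		for (i = 0; i < nr_buffered; i++)
			show_cb(buffer[i]);
	}

	/* New shape: show each object as it is encountered. */
	static void walk_incremental(const int *objs, int nr, show_fn show_cb)
	{
		int i;

		for (i = 0; i < nr; i++)
			show_cb(objs[i]);
	}

	int main(void)
	{
		int objs[] = { 1, 2, 3 };

		walk_buffered(objs, 3, show);
		walk_incremental(objs, 3, show);
		return 0;
	}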

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-12 17:28:31 -07:00
Junio C Hamano
6e353a5e5d Merge branch 'cc/bisect-filter'
* cc/bisect-filter: (21 commits)
  rev-list: add "int bisect_show_flags" in "struct rev_list_info"
  rev-list: remove last static vars used in "show_commit"
  list-objects: add "void *data" parameter to show functions
  bisect--helper: string output variables together with "&&"
  rev-list: pass "int flags" as last argument of "show_bisect_vars"
  t6030: test bisecting with paths
  bisect: use "bisect--helper" and remove "filter_skipped" function
  bisect: implement "read_bisect_paths" to read paths in "$GIT_DIR/BISECT_NAMES"
  bisect--helper: implement "git bisect--helper"
  bisect: use the new generic "sha1_pos" function to lookup sha1
  rev-list: call new "filter_skip" function
  patch-ids: use the new generic "sha1_pos" function to lookup sha1
  sha1-lookup: add new "sha1_pos" function to efficiently lookup sha1
  rev-list: pass "revs" to "show_bisect_vars"
  rev-list: make "show_bisect_vars" non static
  rev-list: move code to show bisect vars into its own function
  rev-list: move bisect related code into its own file
  rev-list: make "bisect_list" variable local to "cmd_rev_list"
  refs: add "for_each_ref_in" function to refactor "for_each_*_ref" functions
  quote: add "sq_dequote_to_argv" to put unwrapped args in an argv array
  ...
2009-04-12 16:46:40 -07:00
Junio C Hamano
a54c4edc51 Merge branch 'maint'
* maint:
  GIT 1.6.2.3
  State the effect of filter-branch on graft explicitly
  process_{tree,blob}: Remove useless xstrdup calls

Conflicts:
	GIT-VERSION-GEN
2009-04-12 16:01:25 -07:00
Junio C Hamano
1966af8176 Merge branch 'maint-1.6.1' into maint
* maint-1.6.1:
  State the effect of filter-branch on graft explicitly
  process_{tree,blob}: Remove useless xstrdup calls
2009-04-12 15:34:53 -07:00
Junio C Hamano
bc69776aa1 Merge branch 'maint-1.6.0' into maint-1.6.1
* maint-1.6.0:
  State the effect of filter-branch on graft explicitly
  process_{tree,blob}: Remove useless xstrdup calls
2009-04-12 15:20:29 -07:00
Linus Torvalds
213152688c process_{tree,blob}: Remove useless xstrdup calls
On Wed, 8 Apr 2009, Björn Steinbrink wrote:
>
> The name of the processed object was duplicated for passing it to
> add_object(), but that already calls path_name, which allocates a new
> string anyway. So the memory allocated by the xstrdup calls just went
> nowhere, leaking memory.

Ack, ack.

There's another easy 5% or so for the built-in object walker: once we've
created the hash from the name, the name isn't interesting any more, and
so something trivial like this can help a bit.

Does it matter? Probably not on its own. But a few more memory saving
tricks and it might all make a difference.

		Linus
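
A toy standalone illustration of that pattern (add_object() here is just a
stand-in that copies its argument, the way path_name() does in the real
code):

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	static void add_object(const char *name)
	{
		char *copy = strdup(name);	/* callee copies what it needs */

		printf("added %s\n", copy);
		free(copy);
	}

	int main(void)
	{
		const char *name = "Makefile";

		/* Leaky pattern removed by the patch: the caller duplicates the
		 * string even though the callee makes its own copy anyway, so
		 * the caller's copy goes nowhere and is never freed. */
		add_object(strdup(name));

		/* Fixed pattern: just pass the name through. */
		add_object(name);
		return 0;
	}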

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-12 14:30:31 -07:00
Dan McGee
b6c29915d2 Update delta compression message to be less misleading
In the case of a small repository, pack-objects is smart enough to not
start more threads than necessary. However, the output to the user always
reports the value of the pack.threads configuration and not the real
number of threads to be used.

Signed-off-by: Dan McGee <dpmcgee@gmail.com>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-11 22:21:59 -07:00
Junio C Hamano
c3067cbfb3 Merge branch 'jc/maint-1.6.0-keep-pack' into maint
* jc/maint-1.6.0-keep-pack:
  pack-objects: don't loosen objects available in alternate or kept packs
  t7700: demonstrate repack flaw which may loosen objects unnecessarily
  Remove --kept-pack-only option and associated infrastructure
  pack-objects: only repack or loosen objects residing in "local" packs
  git-repack.sh: don't use --kept-pack-only option to pack-objects
  t7700-repack: add two new tests demonstrating repacking flaws
  is_kept_pack(): final clean-up
  Simplify is_kept_pack()
  Consolidate ignore_packed logic more
  has_sha1_kept_pack(): take "struct rev_info"
  has_sha1_pack(): refactor "pretend these packs do not exist" interface
  git-repack: resist stray environment variable

Conflicts:
	t/t7700-repack.sh
2009-04-08 23:21:10 -07:00
Christian Couder
11c211fa06 list-objects: add "void *data" parameter to show functions
The goal of this patch is to get rid of the "static struct rev_info
revs" static variable in "builtin-rev-list.c".

To do that, we need to pass the revs to the "show_commit" function
in "builtin-rev-list.c" and this in turn means that the
"traverse_commit_list" function in "list-objects.c" must be passed
functions pointers to functions with 2 parameters instead of one.

So we have to change all the callers and all the functions passed
to "traverse_commit_list".

Anyway this makes the code cleaner and more generic, so it
should be a good thing in the long run.
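
The shape of the change, as a simplified standalone sketch (the names mirror
the ones mentioned above, but the types and signatures are stand-ins, not
the real ones):

	#include <stdio.h>

	struct rev_info {
		int show_timestamp;	/* stand-in for the real fields */
	};

	typedef void (*show_commit_fn)(int commit_id, void *data);

	/* The traversal threads an opaque context through to the callback. */
	static void traverse_commit_list(const int *commits, int nr,
					 show_commit_fn show, void *data)
	{
		int i;

		for (i = 0; i < nr; i++)
			show(commits[i], data);
	}

	/* The callback recovers its context; no file-scope static needed. */
	static void show_commit(int commit_id, void *data)
	{
		struct rev_info *revs = data;

		if (revs->show_timestamp)
			printf("commit %d (with timestamp)\n", commit_id);
		else
			printf("commit %d\n", commit_id);
	}

	int main(void)
	{
		int commits[] = { 1, 2, 3 };
		struct rev_info revs = { 1 };

		traverse_commit_list(commits, 3, show_commit, &revs);
		return 0;
	}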

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-07 22:12:38 -07:00
Junio C Hamano
3c91bf6805 Merge branch 'jc/maint-1.6.0-keep-pack'
* jc/maint-1.6.0-keep-pack:
  pack-objects: don't loosen objects available in alternate or kept packs
  t7700: demonstrate repack flaw which may loosen objects unnecessarily
  Remove --kept-pack-only option and associated infrastructure
  pack-objects: only repack or loosen objects residing in "local" packs
  git-repack.sh: don't use --kept-pack-only option to pack-objects
  t7700-repack: add two new tests demonstrating repacking flaws

Conflicts:
	t/t7700-repack.sh
2009-04-01 22:34:19 -07:00
Junio C Hamano
89fbda2425 Merge branch 'maint'
* maint:
  Increase the size of the die/warning buffer to avoid truncation
  close_sha1_file(): make it easier to diagnose errors
  avoid possible overflow in delta size filtering computation
2009-03-24 19:45:57 -07:00
Junio C Hamano
b0de555410 Merge branch 'maint-1.6.1' into maint
* maint-1.6.1:
  close_sha1_file(): make it easier to diagnose errors
  avoid possible overflow in delta size filtering computation
2009-03-24 15:31:21 -07:00
Junio C Hamano
2a5643da73 Merge branch 'maint-1.6.0' into maint-1.6.1
* maint-1.6.0:
  close_sha1_file(): make it easier to diagnose errors
  avoid possible overflow in delta size filtering computation
2009-03-24 15:31:15 -07:00
Nicolas Pitre
720fe22d50 avoid possible overflow in delta size filtering computation
On a 32-bit system, the maximum possible size for an object is less than
4GB, while 64-bit systems may cope with larger objects.  Due to this
limitation, variables holding object sizes are using an unsigned long
type (32 bits on 32-bit systems, or 64 bits on 64-bit systems).

When large objects are encountered, and/or people play with large delta
depth values, it is possible for the maximum allowed delta size
computation to overflow, especially on a 32-bit system.  When this
occurs, surviving result bits may represent a value much smaller than
what it is supposed to be, or even zero.  This prevents some objects
from being deltified although they do get deltified when a smaller depth
limit is used.  Fix this by always performing a 64-bit multiplication.
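
A toy illustration of the overflow and of the fix, assuming a 32-bit
'unsigned long' (the actual formula used by pack-objects differs in its
details):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t size = 300u * 1024 * 1024;	/* ~300MB object size */
		uint32_t depth = 10, max_depth = 50;

		/* The 32-bit product wraps, and the surviving bits yield a
		 * much smaller limit than intended. */
		uint32_t bad = size * (max_depth - depth) / max_depth;

		/* Promoting to 64 bits before multiplying keeps the full product. */
		uint32_t good = (uint64_t)size * (max_depth - depth) / max_depth;

		printf("bad=%u good=%u\n", (unsigned)bad, (unsigned)good);
		return 0;
	}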

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-24 14:37:30 -07:00
Junio C Hamano
2990034f1e Merge branch 'jc/maint-1.6.0-pack-directory' into maint-1.6.1
* jc/maint-1.6.0-pack-directory:
  Fix odb_mkstemp() on AIX
  Make sure objects/pack exists before creating a new pack

Conflicts:
	wrapper.c
2009-03-21 22:53:36 -07:00
Brandon Casey
094085e336 pack-objects: don't loosen objects available in alternate or kept packs
If pack-objects is called with the --unpack-unreachable option then it
will unpack (i.e. loosen) all unreferenced objects from local not-kept
packs, including those that also exist in packs residing in an alternate
object database or a locally kept pack.  The only user of this option is
git-repack.

In this case, repack will follow the call to pack-objects with a call to
prune-packed, which will delete these newly loosened objects, making the
act of loosening a waste of time.  The unnecessary loosening can be
avoided by checking whether an object exists in a non-local pack or a
locally kept pack before loosening it.

This fixes the 'local packed unreachable obs that exist in alternate ODB
are not loosened' test in t7700.

Signed-off-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-21 21:58:44 -07:00
Brandon Casey
4d6acb7041 Remove --kept-pack-only option and associated infrastructure
This option to pack-objects/rev-list was created to improve the -A and -a
options of repack.  It was found to be lacking in that it did not provide
the ability to differentiate between local and non-local kept packs, and
found to be unnecessary since objects residing in local kept packs can be
filtered out by the --honor-pack-keep option.

Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-20 13:32:33 -07:00
Brandon Casey
79bc4c7155 pack-objects: only repack or loosen objects residing in "local" packs
These two features were invented for use by repack when repack will delete
the local packs that have been made redundant.  The packs accessible
through alternates are not deleted by repack, so the objects contained in
them are still accessible after the local packs are deleted.  They do not
need to be repacked into the new pack or loosened.  For the case of
loosening, they would immediately be deleted by the subsequent prune-packed
that is called by repack anyway.

This fixes the test
'packed unreachable obs in alternate ODB are not loosened' in t7700.

Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-20 13:32:33 -07:00
Junio C Hamano
aec813062b Merge branch 'jc/maint-1.6.0-keep-pack'
* jc/maint-1.6.0-keep-pack:
  is_kept_pack(): final clean-up
  Simplify is_kept_pack()
  Consolidate ignore_packed logic more
  has_sha1_kept_pack(): take "struct rev_info"
  has_sha1_pack(): refactor "pretend these packs do not exist" interface
  git-repack: resist stray environment variable
2009-03-11 13:49:56 -07:00
Junio C Hamano
69e020ae00 is_kept_pack(): final clean-up
Now that is_kept_pack() is just a member lookup into a structure, we can
write it as such.

Also rewrite the sole caller of has_sha1_kept_pack() to switch on the
criteria the callee uses (namely, revs->kept_pack_only) between calling
has_sha1_kept_pack() and has_sha1_pack(), so that these two callees do not
have to take a pointer to struct rev_info as an argument.

This removes the header file dependency issue temporarily introduced by
the earlier commit, so we revert the changes associated with it as well.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-02-28 01:06:06 -08:00
Junio C Hamano
03a9683d22 Simplify is_kept_pack()
This removes --unpacked=<packfile> parameter from the revision parser, and
rewrites its use in git-repack to pass a single --kept-pack-only option
instead.

The new --kept-pack-only option means just that.  When this option is
given, is_kept_pack() that used to say "not on the --unpacked=<packfile>
list" now says "the packfile has corresponding .keep file".

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-02-28 01:06:06 -08:00
Junio C Hamano
386cb77210 Consolidate ignore_packed logic more
This refactors three loops that check if a given packfile is on the
ignore_packed list into a function is_kept_pack().  The function returns
false for a pack on the list, and true for a pack not on the list, because
this list is solely used by "git repack" to pass the list of packfiles that do
not have corresponding .keep files, i.e. a packfile not on the list is
"kept".

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-02-28 01:06:06 -08:00
Junio C Hamano
bb0cebd7d0 Merge branch 'jc/maint-1.6.0-pack-directory'
* jc/maint-1.6.0-pack-directory:
  Make sure objects/pack exists before creating a new pack
2009-02-25 14:50:05 -08:00
Junio C Hamano
6e180cdcec Make sure objects/pack exists before creating a new pack
In a repository created with git older than f49fb35 (git-init-db: create
"pack" subdirectory under objects, 2005-06-27), objects/pack/ directory is
not created upon initialization.  It was Ok because subdirectories are
created as needed inside directories init-db creates, and back then,
packfiles were recent invention.

After the said commit, new codepaths started relying on the presence of
the objects/pack/ directory in the repository.  This was exacerbated by
8b4eb6b (Do not perform cross-directory renames when creating packs,
2008-09-22), which moved the location where temporary pack files are created
from the objects/ directory to the objects/pack/ directory, because moving
the temporary file to its final location was done carefully with lazy leading
directory creation.

Many packfile related operations in such an old repository can fail
mysteriously because of this.

This commit introduces two helper functions to make things work better.

 - odb_mkstemp() is a specialized version of mkstemp() to refactor the
   code and teach it to create leading directories as needed;

 - odb_pack_keep() refactors the code to create a ".keep" file while
   creating leading directories as needed (see the sketch below).
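
The lazy directory creation can be sketched roughly as follows (a simplified
standalone approximation, not the actual odb_mkstemp() implementation; it
only handles the single missing leading directory that matters in the
scenario described above):

	#include <errno.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/stat.h>
	#include <sys/types.h>

	static int sketch_odb_mkstemp(char *template_path)
	{
		char original[1024], dir[1024];
		char *slash;
		int fd;

		strncpy(original, template_path, sizeof(original) - 1);
		original[sizeof(original) - 1] = '\0';

		fd = mkstemp(template_path);
		if (fd < 0 && errno == ENOENT) {
			/* The leading directory is missing: create it and retry. */
			strcpy(dir, original);
			slash = strrchr(dir, '/');
			if (slash) {
				*slash = '\0';
				if (!mkdir(dir, 0777) || errno == EEXIST) {
					strcpy(template_path, original);	/* restore XXXXXX */
					fd = mkstemp(template_path);
				}
			}
		}
		return fd;
	}

	int main(void)
	{
		char path[] = "objects/pack/tmp_pack_XXXXXX";
		int fd = sketch_odb_mkstemp(path);

		if (fd < 0)
			perror("sketch_odb_mkstemp");
		else
			printf("created %s\n", path);
		return 0;
	}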

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-02-25 14:39:42 -08:00
Junio C Hamano
141b6b83d7 Merge branch 'lt/maint-wrap-zlib' into maint
* lt/maint-wrap-zlib:
  Wrap inflate and other zlib routines for better error reporting

Conflicts:
	http-push.c
	http-walker.c
	sha1_file.c
2009-02-05 18:01:00 -08:00
Junio C Hamano
36dd939393 Merge branch 'lt/maint-wrap-zlib'
* lt/maint-wrap-zlib:
  Wrap inflate and other zlib routines for better error reporting

Conflicts:
	http-push.c
	http-walker.c
	sha1_file.c
2009-01-21 16:55:17 -08:00
Linus Torvalds
39c68542fc Wrap inflate and other zlib routines for better error reporting
R. Tyler Ballance reported a mysterious transient repository corruption;
after much digging, it turns out that we were not catching and reporting
memory allocation errors from some calls we make to zlib.

This one _just_ wraps things; it doesn't do the "retry on low memory
error" part, at least not yet. That is an issue independent of the
reporting.  Some of the errors are expected and passed back to the caller,
but for now we die when zlib reports that it failed to allocate memory.
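
The wrapping idea, as a rough standalone sketch (this is not git's actual
wrapper; it only shows funnelling inflate() through one place that reports
zlib's own error message and treats an allocation failure as fatal):

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <zlib.h>

	static int wrapped_inflate(z_stream *strm, int flush)
	{
		int status = inflate(strm, flush);

		switch (status) {
		case Z_OK:
		case Z_STREAM_END:
		case Z_BUF_ERROR:	/* expected; let the caller decide */
			return status;
		case Z_MEM_ERROR:
			fprintf(stderr, "fatal: inflate: out of memory\n");
			exit(128);
		default:
			fprintf(stderr, "error: inflate: %s (%d)\n",
				strm->msg ? strm->msg : "unknown error", status);
			return status;
		}
	}

	int main(void)
	{
		/* Round-trip a small buffer so the wrapper has work to do. */
		const char text[] = "hello, zlib";
		unsigned char packed[64], unpacked[64];
		uLongf packed_len = sizeof(packed);
		z_stream strm;

		compress(packed, &packed_len, (const Bytef *)text, sizeof(text));

		memset(&strm, 0, sizeof(strm));
		inflateInit(&strm);
		strm.next_in = packed;
		strm.avail_in = packed_len;
		strm.next_out = unpacked;
		strm.avail_out = sizeof(unpacked);
		wrapped_inflate(&strm, Z_FINISH);
		inflateEnd(&strm);

		printf("%s\n", unpacked);
		return 0;
	}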

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-01-11 02:13:06 -08:00
Nicolas Pitre
bf87489624 pack-objects: don't use too many threads with few objects
If there are few objects to deltify, they might be split amongst threads
so that there are simply no other objects left to delta against within
the same thread.  Let's use the same 2*window threshold as used for the
final load balancing to allow extra threads to be created.

This fixes the benign t5300 test failure.
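
The load-balancing rule can be illustrated with a toy calculation (this is
not the actual code):

	#include <stdio.h>

	int main(void)
	{
		int window = 10, nr_objects = 25, requested_threads = 8;
		int threads = requested_threads;

		/* Don't keep a thread unless it gets at least 2*window objects,
		 * so each thread has something to delta against. */
		while (threads > 1 && nr_objects / threads < 2 * window)
			threads--;

		printf("using %d of %d requested threads for %d objects\n",
		       threads, requested_threads, nr_objects);
		return 0;
	}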

Signed-off-by: Nicolas Pitre <nico@cam.org>
Tested-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-12-13 18:55:55 -08:00
Nicolas Pitre
43cc2b4266 autodetect number of CPUs by default when using threads
... and display the actual number of threads used when locally
repacking.  A remote server still won't tell you how many threads it
uses during a fetch though.
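
A minimal sketch of the autodetection using the sysconf() query available on
Linux and many other Unix systems; a complete implementation would also need
to cover other platforms and fall back to a single thread:

	#include <stdio.h>
	#include <unistd.h>

	static int detect_online_cpus(void)
	{
		long n = sysconf(_SC_NPROCESSORS_ONLN);

		return n > 0 ? (int)n : 1;
	}

	int main(void)
	{
		printf("Delta compression using up to %d threads\n",
		       detect_online_cpus());
		return 0;
	}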

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-12-12 19:47:36 -08:00
Junio C Hamano
de0db42278 Merge branch 'maint'
* maint:
  fsck: reduce stack footprint
  make sure packs to be replaced are closed beforehand
2008-12-11 00:36:31 -08:00
Nicolas Pitre
c74faea19e make sure packs to be replaced are closed beforehand
Especially on Windows, where an opened file cannot be replaced, make
sure pack-objects always closes the packs it is about to replace. Even on
non-Windows systems, this could prevent bad results if objects were ever
to be read from the new pack file using an offset from the old
index.

This should fix t5303 on Windows.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Tested-by: Johannes Sixt <j6t@kdbg.org> (MinGW)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-12-10 17:56:05 -08:00
Junio C Hamano
0fd9d7e66d Merge branch 'bc/maint-keep-pack' into maint
* bc/maint-keep-pack:
  repack: only unpack-unreachable if we are deleting redundant packs
  t7700: test that 'repack -a' packs alternate packed objects
  pack-objects: extend --local to mean ignore non-local loose objects too
  sha1_file.c: split has_loose_object() into local and non-local counterparts
  t7700: demonstrate mishandling of loose objects in an alternate ODB
  builtin-gc.c: use new pack_keep bitfield to detect .keep file existence
  repack: do not fall back to incremental repacking with [-a|-A]
  repack: don't repack local objects in packs with .keep file
  pack-objects: new option --honor-pack-keep
  packed_git: convert pack_local flag into a bitfield and add pack_keep
  t7700: demonstrate mishandling of objects in packs with a .keep file
2008-12-02 23:00:04 -08:00
Junio C Hamano
7b51b77dbc Merge branch 'np/pack-safer'
* np/pack-safer:
  t5303: fix printf format string for portability
  t5303: work around printf breakage in dash
  pack-objects: don't leak pack window reference when splitting packs
  extend test coverage for latest pack corruption resilience improvements
  pack-objects: allow "fixing" a corrupted pack without a full repack
  make find_pack_revindex() aware of the nasty world
  make check_object() resilient to pack corruptions
  make packed_object_info() resilient to pack corruptions
  make unpack_object_header() non fatal
  better validation on delta base object offsets
  close another possibility for propagating pack corruption
2008-11-12 22:26:35 -08:00
Junio C Hamano
ecbbfb15a4 Merge branch 'bc/maint-keep-pack'
* bc/maint-keep-pack:
  t7700: test that 'repack -a' packs alternate packed objects
  pack-objects: extend --local to mean ignore non-local loose objects too
  sha1_file.c: split has_loose_object() into local and non-local counterparts
  t7700: demonstrate mishandling of loose objects in an alternate ODB
  builtin-gc.c: use new pack_keep bitfield to detect .keep file existence
  repack: do not fall back to incremental repacking with [-a|-A]
  repack: don't repack local objects in packs with .keep file
  pack-objects: new option --honor-pack-keep
  packed_git: convert pack_local flag into a bitfield and add pack_keep
  t7700: demonstrate mishandling of objects in packs with a .keep file
2008-11-12 22:00:43 -08:00
Junio C Hamano
6cd3729eae Merge branch 'maint'
* maint:
  Start 1.6.0.5 cycle
  Fix pack.packSizeLimit and --max-pack-size handling
  checkout: Fix "initial checkout" detection
  Remove the period after the git-check-attr summary

Conflicts:
	RelNotes
2008-11-12 15:03:57 -08:00
Nicolas Pitre
a1e4760fcf Fix pack.packSizeLimit and --max-pack-size handling
If the limit was sufficiently low, having a single object written
could bust the limit (by design), but that caused the remaining allowed
size to go negative for subsequent objects, which for an unsigned
variable wraps around to a rather huge limit.
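
A toy illustration of the wrap-around and of the kind of guard that avoids
it (this is not the actual pack-objects code):

	#include <stdio.h>

	int main(void)
	{
		unsigned long limit = 1000, object_sizes[] = { 1500, 400, 400 };
		unsigned long written;
		int i;

		/* Buggy: once one large object busts the limit, the unsigned
		 * subtraction wraps around to a huge remaining budget. */
		written = 0;
		for (i = 0; i < 3; i++) {
			unsigned long remaining = limit - written;
			printf("object %d: remaining budget %lu\n", i, remaining);
			written += object_sizes[i];
		}

		/* Fixed: treat an exceeded limit as zero remaining space. */
		written = 0;
		for (i = 0; i < 3; i++) {
			unsigned long remaining = written < limit ? limit - written : 0;
			printf("object %d: remaining budget %lu (fixed)\n", i, remaining);
			written += object_sizes[i];
		}
		return 0;
	}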

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-12 14:55:03 -08:00
Brandon Casey
daae062595 pack-objects: extend --local to mean ignore non-local loose objects too
With this patch, --local means pack only local objects that are not already
packed.

Additionally, this fixes t7700 testing whether loose objects in an alternate
object database are repacked.

Signed-off-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-12 10:29:22 -08:00
Brandon Casey
e96fb9b8f9 pack-objects: new option --honor-pack-keep
This adds a new option to pack-objects which will cause it to ignore an
object that appears in a local pack with a .keep file, even if it
was specified for packing.

This option will be used by the porcelain repack.

Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-12 10:28:10 -08:00
Nicolas Pitre
59dd9ed183 pack-objects: don't leak pack window reference when splitting packs
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-02 15:22:35 -08:00
Nicolas Pitre
64bd76b1de pack-objects: allow "fixing" a corrupted pack without a full repack
When the pack data to be reused is found to be bad, let's fall back to
full object access through the generic path which has its own strategies
to find alternate object sources in that case.  This allows for "fixing"
a corrupted pack simply by copying either another pack containing the
object(s) found to be bad, or the loose object itself, into the object
store and launching a repack without the need for -f.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-02 15:22:35 -08:00
Nicolas Pitre
08698b1e32 make find_pack_revindex() aware of the nasty world
It currently calls die() whenever the given offset is not found, thinking
that such a thing should never happen.  But this offset may come from a
corrupted pack, which _could_ happen, and then it would not be found.
Callers should deal with this possibility gracefully instead.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-02 15:22:35 -08:00
Nicolas Pitre
03d660150c make check_object() resilient to pack corruptions
The check_object() function tries to get away with the least amount of
pack access possible when it already has partial information on the given
object rather than calling the more costly packed_object_info().

When things don't look right, it should just give up and fall back to
packed_object_info() directly instead of die()'ing.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-02 15:22:35 -08:00
Nicolas Pitre
09ded04b7e make unpack_object_header() non fatal
It is possible to have pack corruption in the object header.  Currently
unpack_object_header() simply dies on it instead of letting the caller
deal with it gracefully.

So let's have unpack_object_header() return an error instead, and find
a better name for unpack_object_header_gently() in that context.  All
callers of unpack_object_header() are ready for it.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-02 15:22:34 -08:00
Nicolas Pitre
d8f325563d better validation on delta base object offsets
In one case, it was possible to have a bad offset equal to 0 effectively
pointing a delta onto itself and crashing git after too many recursions.
In the other cases, a negative offset could result due to off_t being
signed.  Catch those.
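
The kind of range check involved, in simplified standalone form (offsets here
are absolute positions within the pack; a real check would also have to
account for the pack header):

	#include <stdio.h>
	#include <sys/types.h>

	/* The base of an OBJ_OFS_DELTA must start strictly before the delta
	 * itself and strictly after the beginning of the pack. */
	static int valid_base_offset(off_t delta_obj_offset, off_t base_offset)
	{
		return base_offset > 0 && base_offset < delta_obj_offset;
	}

	int main(void)
	{
		off_t delta_at = 4096;

		printf("%d\n", valid_base_offset(delta_at, delta_at));	/* 0: delta based on itself */
		printf("%d\n", valid_base_offset(delta_at, -42));	/* 0: negative offset */
		printf("%d\n", valid_base_offset(delta_at, 128));	/* 1: plausible base */
		return 0;
	}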

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-02 15:22:34 -08:00
Nicolas Pitre
0e8189e270 close another possibility for propagating pack corruption
Abstract
--------

With index v2 we have a per object CRC to allow quick and safe reuse of
pack data when repacking.  This, however, doesn't currently prevent a
stealth corruption from being propagated into a new pack when _not_
reusing pack data as demonstrated by the modification to t5302 included
here.

The Context
-----------

The Git database is all checksummed with SHA1 hashes.  Any kind of
corruption can be confirmed by verifying this per object hash against
corresponding data.  However this can be costly to perform systematically
and therefore this check is often not performed at run time when
accessing the object database.

First, the loose object format is entirely compressed with zlib which
already provides a CRC verification of its own when inflating data.  Any
disk corruption would be caught already in this case.

Then, packed objects are also compressed with zlib but only for their
actual payload.  The object headers and delta base references are not
deflated for obvious performance reasons; however, this leaves them
vulnerable to potentially undetected disk corruptions.  Object types
are often validated against the expected type when they're requested,
and deflated size must always match the size recorded in the object header,
so those cases are pretty much covered as well.

Where corruptions could go unnoticed is in the delta base reference.
Of course, in the OBJ_REF_DELTA case,  the odds for a SHA1 reference to
get corrupted so it actually matches the SHA1 of another object with the
same size (the delta header stores the expected size of the base object
to apply against) are virtually zero.  In the OBJ_OFS_DELTA case, the
reference is a pack offset which would have to match the start boundary
of a different base object but still with the same size, and although this
is relatively much more "probable" than in the OBJ_REF_DELTA case, the
probability is also about zero in absolute terms.  Still, the possibility
exists as demonstrated in t5302 and is certainly greater than a SHA1
collision, especially in the OBJ_OFS_DELTA case which is now the default
when repacking.

Again, repacking by reusing existing pack data is OK since the per object
CRC provided by index v2 guards against any such corruptions. What t5302
failed to test is a full repack in such a case.

The Solution
------------

As unlikely as this kind of stealth corruption can be in practice, it
certainly isn't acceptable to propagate it into a freshly created pack.
But, because this is so unlikely, we don't want to pay the run time cost
associated with extra validation checks all the time either.  Furthermore,
consequences of such corruption in anything but repacking should be rather
visible, and even if it could be quite unpleasant, it still has far less
severe consequences than actively creating bad packs.

So the best compromise is to check the packed object CRC when unpacking
objects, and only during the compression/writing phase of a repack, and
only when not streaming the result.  The cost of this is minimal (less
than 1% CPU time), and visible only with a full repack.

Someone with a stats background could provide an objective evaluation of
this, but I suspect that it's bad RAM that has more potential for data
corruptions at this point, even in those cases where this extra check
is not performed.  Still, it is best to prevent a known hole for
corruption when recreating object data into a new pack.

What about the streamed pack case?  Well, any client receiving a pack
must always consider that pack as untrusted and perform full validation
anyway, hence no such stealth corruption could be propagated to remote
repositories already.  It is therefore pointless to do local validation
in that case.
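
The added check can be sketched as follows (a simplified standalone
illustration using zlib's crc32(), not the actual pack-objects code):

	#include <stdint.h>
	#include <stdio.h>
	#include <zlib.h>

	/* Before reusing or recompressing an object's packed representation,
	 * recompute its CRC and compare it against the value recorded in the
	 * version 2 pack index. */
	static int packed_data_looks_sane(const unsigned char *buf, uInt len,
					  uint32_t crc_from_index)
	{
		uint32_t crc = crc32(crc32(0, Z_NULL, 0), buf, len);

		return crc == crc_from_index;
	}

	int main(void)
	{
		unsigned char data[] = "object header + deflated payload";
		uint32_t recorded = crc32(crc32(0, Z_NULL, 0), data, (uInt)sizeof(data));

		printf("clean copy:   %d\n",
		       packed_data_looks_sane(data, (uInt)sizeof(data), recorded));
		data[3] ^= 0x40;	/* simulate a stealth bit flip */
		printf("corrupt copy: %d\n",
		       packed_data_looks_sane(data, (uInt)sizeof(data), recorded));
		return 0;
	}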

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-02 15:22:15 -08:00