Reworded the section about git-cvsserver needing to update the
database.
Signed-off-by: Frank Lichtenheld <frank@lichtenheld.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
CVS allows you to add a removed file (where the
removal is not yet committed), which will
cause the server to send the latest revision of the
file and to delete the "removed" status.
Copy this behaviour.
Signed-off-by: Frank Lichtenheld <frank@lichtenheld.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is meant to be a saner replacement for "git-cherry".
When used with "A...B", this filters out commits whose patch
text has the same patch-id as a commit on the other side. It
would probably be most useful with --left-right.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This implements the patch-id computation and recording library,
patch-ids.c, and rewrites the get_patch_ids() function used in
cherry and format-patch to use it, so that they do not pollute
the object namespace. Earlier code threw non-objects into the
in-core object database and hoped not to get bitten by SHA-1
collisions. While that may be OK in practice, it still was an
ugly hack.
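For illustration, a minimal sketch of the replacement idea (names and
layout hypothetical, not the actual patch-ids.c API): keep the computed
IDs in a private in-core set instead of faking objects in the object
database.

    #include <string.h>

    /* a private set of patch-ids, kept entirely outside the object database */
    struct patch_id {
            unsigned char id[20];   /* SHA-1 over the normalized patch text */
    };

    struct patch_id_set {
            struct patch_id *entries;
            int nr;
    };

    /* linear scan for brevity; the real code would hash or bsearch */
    static int patch_id_seen(const struct patch_id_set *set,
                             const unsigned char *id)
    {
            int i;
            for (i = 0; i < set->nr; i++)
                    if (!memcmp(set->entries[i].id, id, 20))
                            return 1;
            return 0;
    }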
Signed-off-by: Junio C Hamano <junkio@cox.net>
When used with '--boundary A...B', this shows the -/</> marker
you would get with the --left-right option to the 'git-log' family.
When symmetric diff is not used, everybody is shown to be on the
"right" branch.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This function used to call locate_object_entry_hash() _twice_ per added
object, while a single call suffices. Let's reorganize that code a bit.
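The pattern, reduced to a self-contained toy (hypothetical names; the
real table stores object entries, and an all-zero key is assumed never
to occur): the single probe returns either the slot holding the key or
the free slot where it belongs, so the caller never probes twice.

    #include <string.h>

    #define TABLE_SIZE 1024

    static unsigned char table[TABLE_SIZE][20]; /* all-zero slot == empty */

    /* one probe: the slot holding sha1, or the free slot it belongs in */
    static int locate_slot(const unsigned char *sha1)
    {
            static const unsigned char empty[20];
            int i = sha1[0] % TABLE_SIZE;

            while (memcmp(table[i], empty, 20) && memcmp(table[i], sha1, 20))
                    i = (i + 1) % TABLE_SIZE; /* no resizing in this sketch */
            return i;
    }

    static int add_object(const unsigned char *sha1)
    {
            int i = locate_slot(sha1);

            if (!memcmp(table[i], sha1, 20))
                    return 0;               /* hit: already added */
            memcpy(table[i], sha1, 20);     /* miss: reuse the same slot */
            return 1;
    }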
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is a fairly complete list of tests for various aspects of pack
index versions 1 and 2.
Tests on index v2 include 32-bit and 64-bit offsets, as well as a nice
demonstration of the flawed repacking integrity checks that index
version 2 intends to solve over index version 1 with its per-object CRC.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Reliance on /dev/urandom produces test vectors that are, well, random.
This can cause problems that are impossible to track down when the data
differs from one test invocation to another.
The goal is not to have random data to test, but rather to have a
convenient way to create sets of large files with non-compressible and
non-deltifiable data in a reproducible way.
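For illustration, a tiny standalone generator in this spirit (a sketch;
the constants are the textbook rand() ones, not necessarily what the
test helper uses): the same seed always reproduces the same byte
stream, and the stream is incompressible for practical purposes.

    #include <stdio.h>
    #include <stdlib.h>

    /* usage: genrandom <seed> <byte-count>; deterministic per seed */
    int main(int argc, char **argv)
    {
            unsigned long next, count;

            if (argc != 3)
                    return 1;
            next = strtoul(argv[1], NULL, 10);
            count = strtoul(argv[2], NULL, 10);

            while (count--) {
                    next = next * 1103515245 + 12345;  /* classic LCG step */
                    putchar((next >> 16) & 0xff);
            }
            return 0;
    }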
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
* builtin-grep.c (strtoul_ui): Move function definition from here, to...
* git-compat-util.h (strtoul_ui): ...here, with an added "base"
parameter (the helper is sketched after this list).
* builtin-grep.c (cmd_grep): Update use of strtoul_ui to include base, "10".
* builtin-update-index.c (read_index_info): Diagnose an invalid mode integer
that is out of range or merely larger than INT_MAX.
(cmd_update_index): Use strtoul_ui, not sscanf.
* convert-objects.c (write_subdirectory): Likewise.
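In rough form, such a helper looks like this (a sketch based on the
description above; the actual git-compat-util.h version may differ in
details): it rejects trailing garbage, empty input, range errors, and
values that do not fit in an unsigned int.

    #include <errno.h>
    #include <stdlib.h>

    /* strict unsigned-int parse: 0 on success, -1 on any error */
    static int strtoul_ui(const char *s, int base, unsigned int *result)
    {
            unsigned long ul;
            char *p;

            errno = 0;
            ul = strtoul(s, &p, base);
            if (errno || *p || p == s || (unsigned int) ul != ul)
                    return -1;  /* range error, garbage, empty, or truncation */
            *result = ul;
            return 0;
    }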
Signed-off-by: Jim Meyering <jim@meyering.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is the promised cleaned-up version of teaching directory traversal
(ie the "read_directory()" logic) about subprojects. That makes "git add"
understand how to add/update subprojects.
It now knows to look at the index file to see if a directory is marked as
a subproject, and uses that information to decide whether it should be
recursed into or not.
It also generally cleans up the handling of directory entries when
traversing the working tree, by splitting up the decision-making process
into small functions of their own, and adding a fair number of comments.
Finally, it teaches "add_file_to_cache()" that directory names can have
slashes at the end, since the directory traversal adds them to make the
difference between a file and a directory clear (it always did that, but
my previous too-ugly-to-apply subproject patch had a totally different
path for subproject directories and avoided the slash for that case).
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Add testcase for format-patch --subject-prefix support.
Signed-off-by: Robin H. Johnson <robbat2@gentoo.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Add a new option to git-format-patch, --subject-prefix, that allows
control of the subject prefix '[PATCH]'. Using this option, the text 'PATCH' is
replaced with whatever input is provided to the option. This allows easily
generating patches like '[PATCH 2.6.21-rc3]' or properly numbered series like
'[-mm3 PATCH N/M]'. This patch provides the implementation and documentation.
Signed-off-by: Robin H. Johnson <robbat2@gentoo.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
* maint:
GIT 1.5.1.1
cvsserver: Fix handling of disappeared files on update
fsck: do not complain on detached HEAD.
(encode_85, decode_85): Mark source buffer pointer as "const".
This fixes a total thinko in my original series: subprojects do *not* sort
like directories, because the index is sorted purely by full pathname, and
since a subproject shows up in the index as a normal NUL-terminated
string, it never has the issues with sorting with the '/' at the end.
So if you have a subproject "proj" and a file "proj.c", the subproject
sorts alphabetically before the file in the index (and must thus also sort
that way in a tree object, since trees sort as the index).
In contrast, if you have two files "proj/file" and "proj.c", the "proj.c"
will sort alphabetically before "proj/file" in the index. The index
itself, of course, does not actually contain an entry "proj/", but in the
*tree* that gets written out, the tree entry "proj" will sort after the
file entry "proj.c", which is the only real magic sorting rule.
In other words: the magic sorting rule only affects tree entries, and
*only* affects tree entries that point to other trees (ie are of the type
S_IFDIR).
Anyway, that thinko just means that we should remove the special case to
make S_ISDIRLNK entries sort like S_ISDIR entries. They don't. They sort
like normal files.
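For reference, the comparison embodying that magic rule looks roughly
like this (close to git's base_name_compare(); names are assumed
NUL-terminated as in the index): only S_ISDIR entries get the virtual
trailing '/', so subproject entries fall through to plain byte
comparison like any other file.

    #include <string.h>
    #include <sys/stat.h>

    static int base_name_compare(const char *name1, int len1, int mode1,
                                 const char *name2, int len2, int mode2)
    {
            int len = len1 < len2 ? len1 : len2;
            unsigned char c1, c2;
            int cmp = memcmp(name1, name2, len);

            if (cmp)
                    return cmp;
            c1 = name1[len];
            c2 = name2[len];
            if (!c1 && S_ISDIR(mode1))
                    c1 = '/';       /* trees compare as "name/" */
            if (!c2 && S_ISDIR(mode2))
                    c2 = '/';
            return (c1 < c2) ? -1 : (c1 > c2) ? 1 : 0;
    }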
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Only send a modified response if the client sent a
"Modified" entry. This fixes the case where the
file was locally deleted on the client without
being removed from CVS. In this case the client
will only have sent the Entry for the file but nothing
else.
Signed-off-by: Frank Lichtenheld <frank@lichtenheld.de>
Acked-by: Martin Langhoff <martin@catalyst.net.nz>
Acked-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Detached HEAD is just a normal state of a repository. Do not
say anything about it.
Do not give worrying "error:" messages when we let the user know
that HEAD points at nothing (i.e. a yet-to-be-born branch), or
that we do not have any default refs to start following the
object chain. Reword them as "notice:".
Signed-off-by: Junio C Hamano <junkio@cox.net>
Introduce new configuration variable $default_projects_order
that can be used to specify the default order of projects on
the index page if no 'o' parameter is given.
Allow a new value 'none' for the order, which will cause the projects
to be listed in the order we learned about them. In case of reading the
list of projects from a file, this should be the order as they are
listed in the file. In case of reading the list of projects from
a directory this will probably give random results depending on the
filesystem in use.
Signed-off-by: Frank Lichtenheld <frank@lichtenheld.de>
Acked-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Make it possible to use the forks feature even when
reading the list of projects from a file, by creating
a list of known prefixes as we go. Forks have to be
listed after the main project in order to be recognised
as such.
Signed-off-by: Frank Lichtenheld <frank@lichtenheld.de>
Acked-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This teaches the really fundamental core SHA1 object handling routines
about gitlinks. We can compare trees with gitlinks in them (although we
can not actually generate patches for them yet - just raw git diffs),
and they show up as commits in "git ls-tree".
We also know to compare gitlinks as if they were directories (ie the
normal "sort as trees" rules apply).
[jc: amended a cut&paste error]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Since the subprojects don't necessarily even exist in the current tree,
much less in the current git repository (they are totally independent
repositories), we do not want to try to follow the chain from one git
repository to another through a gitlink.
This involves teaching fsck to ignore references to gitlink objects from
a tree and from the current index.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This just adds the basic helper functions to recognize and work with git
tree entries that are links to other git repositories ("subprojects").
They still aren't actually connected up to any of the code-paths, but
now all the infrastructure is in place.
The next commit will start adding actual subproject support.
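The helpers amount to something like the following sketch (the 0160000
mode value is what trees use for such entries; S_ISDIRLNK matches the
naming used elsewhere in this series):

    #include <sys/stat.h>

    /* a "directory link" (subproject) entry in a tree: mode 0160000 */
    #define S_IFDIRLNK      0160000
    #define S_ISDIRLNK(m)   (((m) & S_IFMT) == S_IFDIRLNK)

    /* such an entry names a commit in another repository, not a blob/tree */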
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This new function resolves a ref in *another* git repository. It's
named for its intended use: to look up the git link to a subproject.
It's not actually wired up to anything yet, but we're getting closer to
having fundamental plumbing support for "links" from one git directory
to another, which is the basis of subproject support.
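In outline, the interface is small (a sketch; parameter names assumed):

    /*
     * Resolve 'refname' (e.g. "HEAD") in the separate git repository
     * whose working directory is at 'path', and store the 20-byte
     * object name in 'sha1'.  Returns 0 on success, negative if 'path'
     * is not a git repository or the ref cannot be resolved.
     */
    int resolve_gitlink_ref(const char *path, const char *refname,
                            unsigned char *sha1);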
[jc: amended a FILE* leak]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When we are fetching from a repository that is on a local
filesystem, first check if we have all the objects that we are
going to fetch available locally, by not just checking the tips
of what we are fetching, but with a full reachability analysis
to our existing refs. In such a case, we do not have to run
git-fetch-pack which would send many needless objects. This is
especially true when the other repository is an alternate of the
current repository (e.g. perhaps the repository was created by
running "git clone -l -s" from there).
The useless objects transferred used to be discarded when they
were expanded by git-unpack-objects called from git-fetch-pack,
but recent git-fetch-pack prefers to keep the data it receives
from the other end without exploding them into loose objects,
resulting in a pack full of duplicated data when fetching from
your own alternate.
This also uses fetch--tool pick-rref on the dumb transport side to
remove a shell loop that did the same.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This script helper takes a list of fully qualified refnames and the
output from ls-remote, and grabs only the lines for the named refs
from the latter.
Signed-off-by: Junio C Hamano <junkio@cox.net>
We have fairly extensive coverage of the read-tree 3-way machinery,
and many Porcelain-ish tests exercise the git-merge front-end, but
we did not have a good basic test for merge-recursive, which made
it very hard to hack on it.
I used this during the recent work to teach D/F conflicts to
merge-recursive.
Signed-off-by: Junio C Hamano <junkio@cox.net>
When a path D that originally was blob in the ancestor was
modified on our branch while it was removed on the other branch,
we keep stages 1 and 2, and leave our version in the working
tree. If the other branch created a path D/F, however, that
path can cleanly be resolved in the index (after all, neither
the ancestor nor we have it, and only the other side added it),
but it cannot be checked out. The issue is the same when the
other branch had D and we had renamed it to D/F, or the ancestor
had D/F instead of D (so there are four combinations).
Do not stop the merge, but leave both D and D/F paths in the
index so that the user can clear things up.
Signed-off-by: Junio C Hamano <junkio@cox.net>
When unpack-trees.c's threeway_merge() decides that a path that
exists in the current index (and HEAD) is to be removed, it
leaves a stage 0 entry whose mode bits are set to 0. The code
mistook it as "this stage wants the blob here", and proceeded
to call update_file_flags() which ended up trying to put the
mode=0 entry in the index, got very confused, and ended up
barfing with "do not know what to do with 000000".
Since threeway_merge() does not handle case #10 (one side
removes while the other side does not do anything), this is not
a problem while we refuse to merge branches that have D/F
conflicts, but when we start resolving them, we would need to be
able to remove cache entries, and at that point it starts to
matter.
Signed-off-by: Junio C Hamano <junkio@cox.net>
This fixes three buglets in threeway_merge() regarding D/F
conflict entries.
* After finishing with path D and handling path D/F, some stages
have D/F conflict entries, which are obviously non-NULL. For the
purpose of determining if the path D/F is missing in the
ancestor, they should not be taken into account.
* A D/F conflict entry is a marker to say "this stage does _not_
have the path", so do not send them to keep_entry().
Signed-off-by: Junio C Hamano <junkio@cox.net>
Case #10 is not handled by unpack-trees.c:threeway_merge()
internally, unless under the aggressive rule, and that is not a
bug. As the test expects, the ND case (one side did not do
anything, the other side deleted) was meant to be handled by the
caller's policy (e.g. git-merge-one-file or git-merge-recursive).
Signed-off-by: Junio C Hamano <junkio@cox.net>
Some oneline descriptions are just too long. In shortlog, they look
much nicer when wrapped. Since print_wrapped_text() is UTF-8 aware,
it also works with those descriptions.
[jc: with minimum fixes]
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This replaces the inflate validation with a CRC validation when reusing
data from a pack which uses index version 2. That makes repacking much
safer against corruptions, and it should be a bit faster too.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This is necessary for testing the new capabilities in some automated
way without having an actual 4GB+ pack.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Initially the conversion was made using nth_packed_object_sha1() which
made this file completely index version agnostic. Unfortunately the
overhead was quite significant, so I went back to raw index walking,
but with selectable base and step values, which brought back
performance similar to the original.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
When index v2 is encountered, the CRC32 of each object is also displayed
in parentheses at the end of the line.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
With this patch, packs larger than 4GB are usable, even on a 32-bit machine
(at least on Linux). If off_t is not large enough to deal with a large
pack then die() is called instead of attempting to use the pack and
producing garbage.
This was tested with an 8GB pack specially created for the occasion on
a 32-bit machine.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Like the previous patch, but for index-pack.
[ There is quite some code duplication between pack-objects and index-pack
for generating a pack index (and fast-import as well I suppose). This
should be reworked into a common function eventually. But not now. ]
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Pack index version 2 goes as follows:
- 8 bytes of header with signature and version.
- 256 entries of 4-byte first-level fan-out table.
- Table of sorted 20-byte SHA1 records for each object in pack.
- Table of 4-byte CRC32 entries for raw pack object data.
- Table of 4-byte offset entries for objects in the pack if offset is
representable with 31 bits or less, otherwise it is an index in the next
table with top bit set.
- Table of 8-byte offset entries indexed from previous table for offsets
which are 32 bits or more (optional).
- A copy of the 20-byte SHA1 checksum at the end of the corresponding packfile.
- 20-byte SHA1 checksum of the above.
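The layout above maps onto something like this sketch (on-disk integers
are in network byte order; the tables are variable length, so a real
reader walks them rather than overlaying one struct):

    #include <stdint.h>

    #define PACK_IDX_SIGNATURE 0xff744f63   /* "\377tOc" */

    struct pack_idx_header {
            uint32_t idx_signature;         /* PACK_IDX_SIGNATURE */
            uint32_t idx_version;           /* == 2 */
    };

    /*
     * Followed, conceptually, by:
     *
     *      uint32_t fanout[256];       cumulative counts by first SHA1 byte
     *      unsigned char sha1[N][20];  sorted object names
     *      uint32_t crc32[N];          CRC32 of raw in-pack object data
     *      uint32_t offset[N];         pack offset, or MSB set: index below
     *      uint64_t offset64[M];       only for offsets of 32 bits or more
     *      unsigned char packfile_checksum[20];
     *      unsigned char idxfile_checksum[20];
     */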
The object SHA1 table is all contiguous, so a future pack format that
contains this table directly won't require big changes to the code. It
is also tighter, for slightly better cache locality when looking up
entries.
Support for large packs exceeding 31 bits in size won't impose index
size bloat on packs within that range that don't need a 64-bit offset.
And because newer objects which are likely to be the most frequently used
are located at the beginning of the pack, they won't pay the 64-bit offset
lookup at run time either even if the pack is large.
Right now index version 2 is created only when the biggest offset in a
pack reaches 31 bits. It might be a good idea to always use index version
2 eventually to benefit from the CRC32 it contains when reusing pack data
while repacking.
[jc: with the "oops" fix to keep track of the last offset correctly]
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The most important optimization for performance when repacking is the
ability to reuse data from a previous pack as is and bypass any delta
or even SHA1 computation by simply copying the raw data from one pack
to another directly.
The problem with this is that any data corruption within a copied object
would go unnoticed and the new (repacked) pack would be self-consistent
with its own checksum despite containing a corrupted object. This is a
real issue that already happened at least once in the past.
In an attempt to prevent this, we validate the copied data by inflating
it and making sure no error is signaled by zlib. But this is still not
perfect, as a significant portion of a pack's content is made of object
headers and references to delta base objects, which are not deflated and
therefore not validated when repacking. This makes pack data reuse not
as safe as it could be.
Of course a full SHA1 validation could be performed, but that implies
full data inflating and delta replaying, which is extremely costly; that
very cost is what the data reuse optimization was designed to avoid in
the first place.
So the best solution to this is simply to store a CRC32 of the raw pack
data for each object in the pack index. This way any object in a pack can
be validated before being copied as-is into another pack, including the
header and any other non-deflated data.
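A minimal sketch of the computation with zlib (the write path is
simplified here; in reality the CRC is accumulated as the object bytes
are streamed out):

    #include <stddef.h>
    #include <stdint.h>
    #include <zlib.h>

    /* CRC over the raw bytes (header + payload) of one in-pack object */
    static uint32_t crc_of_packed_object(const unsigned char *buf, size_t len)
    {
            uLong crc = crc32(0L, Z_NULL, 0);       /* canonical zlib init */

            crc = crc32(crc, buf, (uInt)len);
            return (uint32_t)crc;
    }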
Why CRC32 instead of a faster checksum like Adler32? Quoting Wikipedia:
Jonathan Stone discovered in 2001 that Adler-32 has a weakness for very
short messages. He wrote "Briefly, the problem is that, for very short
packets, Adler32 is guaranteed to give poor coverage of the available
bits. Don't take my word for it, ask Mark Adler. :-)" The problem is
that sum A does not wrap for short messages. The maximum value of A for
a 128-byte message is 32640, which is below the value 65521 used by the
modulo operation. An extended explanation can be found in RFC 3309,
which mandates the use of CRC32 instead of Adler-32 for SCTP, the
Stream Control Transmission Protocol.
In the context of a GIT pack, we have lots of small objects, especially
deltas, which are likely to be quite small and in a size range for which
Adler32 is deemed not to be sufficient. Another advantage of CRC32 is
the possibility of recovery from certain types of small corruptions,
like single-bit errors, which are the most probable type of corruption.
OK, what this patch does is to compute the CRC32 of each object written to
a pack within pack-objects. It is not written to the index yet and it is
obviously not validated when reusing pack data yet either.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Change a few size and offset variables to more appropriate types, then
add overflow tests on those offsets. This prevents bad data from being
generated/processed if off_t happens to not be large enough to handle
some big packs.
Better be safe than sorry.
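The shape of such a test, as a sketch with hypothetical names: widen
the arithmetic to a 64-bit type first, then make sure a round trip
through off_t preserves the value before using it.

    #include <stdint.h>
    #include <sys/types.h>

    /* nonzero iff 'offset' survives a round trip through off_t */
    static int fits_in_off_t(uint64_t offset)
    {
            off_t narrowed = (off_t)offset; /* may truncate on small off_t */

            return narrowed >= 0 && (uint64_t)narrowed == offset;
    }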
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This patch introduces the MSB() macro to obtain the desired number of
most significant bits from a given variable independently of the variable
type.
It is then used to better implement the overflow test on the OBJ_OFS_DELTA
base offset variable with the property of always working correctly
regardless of the type/size of that variable.
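In essence (a sketch consistent with the description above): mask off
everything but the top 'bits' bits, computing the shift from the
operand's own width.

    #include <limits.h>

    #define bitsizeof(x)  (CHAR_BIT * sizeof(x))

    /* nonzero iff any of the 'bits' most significant bits of 'x' are set */
    #define MSB(x, bits)  ((x) & (~0ULL << (bitsizeof(x) - (bits))))

So while decoding an OBJ_OFS_DELTA base offset that is about to be
shifted left by 7, MSB(base_offset, 7) flags the imminent overflow
whether base_offset happens to be 32 or 64 bits wide.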
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The coming index format change doesn't allow the number of objects
to be determined from the size of the index file directly. Instead,
let's initialize a field in the packed_git structure with the object
count when the index is validated, since the count is always known at
that point.
While at it, let's reorder some struct packed_git fields to avoid
padding due to the 64-bit alignment needed for some of them.
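In struct terms the idea looks like this (a sketch with a subset of
fields; the real member list differs): keep 64-bit-aligned members
together so the 32-bit ones don't create padding holes.

    #include <stdint.h>
    #include <sys/types.h>
    #include <time.h>

    struct packed_git_sketch {
            /* pointer- and 64-bit-aligned members first ... */
            void *pack_base;
            off_t pack_size;
            time_t mtime;
            /* ... then 32-bit members, so no alignment padding is needed */
            uint32_t num_objects;   /* cached when the index is validated */
            int pack_fd;
    };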
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
This just makes sure that when we do a read_directory(), we check
that the filename fits in the buffer we allocated (with a bit of
slop).
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The diff helpers used to do the magic mode canonicalization and all the
other special mode handling by hand ("trust executable bit" and "has
symlink support" handling).
That's bogus. Use "ce_mode_from_stat()" that does this all for us.
This is also going to be required when we add support for links to other
git repositories.
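In rough form the helper folds all of this into one place (a sketch
close to the cache.h version, with byte-order handling omitted;
struct cache_entry, create_ce_mode(), trust_executable_bit and
has_symlinks are the existing git types and knobs):

    #include <sys/stat.h>

    static unsigned int ce_mode_from_stat(const struct cache_entry *ce,
                                          unsigned int mode)
    {
            extern int trust_executable_bit, has_symlinks;

            if (!has_symlinks && S_ISREG(mode) &&
                ce && S_ISLNK(ce->ce_mode))
                    return ce->ce_mode;     /* keep the recorded symlink */
            if (!trust_executable_bit && S_ISREG(mode)) {
                    if (ce && S_ISREG(ce->ce_mode))
                            return ce->ce_mode;     /* keep recorded x-bit */
                    return create_ce_mode(0666);
            }
            return create_ce_mode(mode);
    }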
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>