The disambiguation machinery has two callers: get_short_sha1
and for_each_abbrev. Both need to repeat much of the same
setup: declaring buffers, sanity-checking lengths, preparing
the prefixes, etc. Let's pull that into a single init
function so we can avoid repeating ourselves.
Pulling the buffers into the "struct disambiguate_state"
isn't strictly necessary, but it does make things simpler
for the callers, who no longer have to worry about sizing
them correctly (i.e., it's an implicit requirement that
the caller provide 20- and 40-byte buffers).
And while we're touching this code, we can convert any
magic-number sizes to the more modern GIT_SHA1_* constants.
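For illustration, the shared setup has roughly this shape (a sketch
only; member names are assumed and the prefix-filling details are
elided):
    static int init_object_disambiguation(const char *name, int len,
                                          struct disambiguate_state *ds)
    {
            /* sanity-check the abbreviation length once, for both callers */
            if (len < MINIMUM_ABBREV || len > GIT_SHA1_HEXSZ)
                    return -1;
            memset(ds, 0, sizeof(*ds));
            /* ... fill ds->hex_pfx and ds->bin_pfx from "name" ... */
            ds->len = len;
            return 0;
    }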
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The treeish disambiguation function tries to peel tags, but
it does so by calling:
deref_tag(lookup_object(sha1), ...);
This will only work if we have previously looked at the tag
and created a "struct tag" for it. Since parsing revision
arguments typically happens before anything else, this is
usually not the case, and we would fail to peel the tag (we
are lucky that deref_tag() gracefully handles the NULL and
does not segfault).
Instead, we can use parse_object(). Note that this is the
same fix done by 94d75d1 (get_short_sha1(): correctly
disambiguate type-limited abbreviation, 2013-07-01), but
that commit fixed only the committish disambiguator, and
left the bug in the treeish one.
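The corrected treeish callback then looks roughly like this
(simplified; the real code also checks the object type cheaply before
falling back to tag peeling):
    static int disambiguate_treeish_only(const unsigned char *sha1, void *unused)
    {
            struct object *obj = deref_tag(parse_object(sha1), NULL, 0);
            return obj && (obj->type == OBJ_TREE || obj->type == OBJ_COMMIT);
    }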
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The get_sha1() function is actually implemented by many
sub-functions, but we do not always pass our flags around to
all of those functions. As a result, we may forget that our
caller asked us to resolve with GET_SHA1_QUIETLY and output
messages anyway. The two triggerable cases are:
1. Resolving treeish:path will resolve the "treeish"
portion using GET_SHA1_TREEISH, dropping all other
flags.
2. The peel_onion() function does not take flags at all
but recurses to get_sha1_1(), which does.
The solution for both is to bitwise-OR their new flags with
the existing ones (after dropping any mutually exclusive
disambiguation flags).
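Concretely, the recursing call sites end up doing something along
these lines (a sketch, assuming a mask that covers the disambiguating
bits, as introduced elsewhere in this series):
    /* keep the caller's flags (e.g. GET_SHA1_QUIETLY), but replace any
     * disambiguation bits with the one needed here */
    flags &= ~GET_SHA1_DISAMBIGUATORS;
    flags |= GET_SHA1_TREEISH;
    ret = get_sha1_1(name, len, tree_sha1, flags);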
This bug can trigger with "git rev-parse --quiet", which
asks for quiet resolution. But it can also happen in a more
vanilla code path when we do a follow-up ONLY_TO_DIE
invocation of get_sha1(), and that's what the tests check.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the revision code cannot parse an argument like
"HEAD:foo", it will call maybe_die_on_misspelt_object_name(),
which re-runs get_sha1() with an extra ONLY_TO_DIE flag. We
then spend more effort to generate a better error message.
Unfortunately, a side effect is that our second call may
repeat the same error messages from the original get_sha1()
call. You can see this with:
$ git show 0017
error: short SHA1 0017 is ambiguous.
error: short SHA1 0017 is ambiguous.
fatal: ambiguous argument '0017': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
where the second "error:" line comes from the ONLY_TO_DIE
call.
To fix this, we can make ONLY_TO_DIE imply QUIETLY. This is
a little odd, because the whole point of ONLY_TO_DIE is to
output error messages. But what we want to do is tell the
rest of the get_sha1() code (particularly get_sha1_1()) that
the _regular_ messages should be quiet, but the only-to-die
ones should not.
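In code, the whole change boils down to something like this (sketch;
flag names as used above):
    if (flags & GET_SHA1_ONLY_TO_DIE)
            flags |= GET_SHA1_QUIETLY;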
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The get_sha1() family of functions takes a flags field, but
some of the flags are mutually exclusive. In particular, we
can only handle one disambiguating function, and the flags
quietly override each other. Let's instead detect these as
programming bugs.
Technically some of the flags are supersets of the others,
so treating COMMITTISH|TREEISH as just COMMITTISH is not
wrong, but it's a good sign the caller is confused. And
certainly asking for BLOB|TREE does not work.
We can do the check easily with some bit-twiddling, and as a
bonus, the bit-mask of disambiguators will come in handy in
a future patch.
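The check itself is the classic "more than one bit set" test; assuming
the mask of disambiguating bits is called GET_SHA1_DISAMBIGUATORS, it
is roughly:
    unsigned ds = flags & GET_SHA1_DISAMBIGUATORS;
    /* clearing the lowest set bit leaves something only if two or more
     * disambiguating bits were set at once */
    if (ds & (ds - 1))
            die("BUG: multiple disambiguator flags");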
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When opening a loose object file, we often do this sequence:
- prepare a short buffer for the object header (on stack)
- call unpack_sha1_header() and have the early part of the object data
inflated, enough to fill the buffer
- parse that data in the short buffer, assuming that the first part
of the object is <typename> SP <length> NUL
Because the parsing function parse_sha1_header_extended() is not
given the number of bytes inflated into the header buffer, if you
craft a file whose early part inflates a garbage sequence without SP
or NUL, and replace a loose object with it, it will end up reading
past the end of the inflated data.
To correct this, do the following four things:
- rename unpack_sha1_header() to unpack_sha1_short_header() and
have unpack_sha1_header_to_strbuf() keep calling that as its
helper function. This will detect and report zlib errors, but is
not aware of the format of a loose object (as before).
- introduce unpack_sha1_header() that calls the same helper
function, and when zlib reports it inflated OK into the buffer,
check if the inflated data has NUL. This would ensure that
the parsing function will terminate within the buffer that holds the
inflated header.
- update unpack_sha1_header_to_strbuf() to check if the resulting
buffer has NUL for the same effect.
- update parse_sha1_header_extended() to make sure that its loop to
find the SP that terminates the <typename> stops at NUL.
Essentially, this makes the unpack_*() functions that are asked to
unpack a loose object header a bit more strict and lets them detect an
input that cannot possibly be a valid object header, even before the
parsing function kicks in.
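For illustration, the extra strictness in the new unpack_sha1_header()
amounts to roughly this (not the exact hunk):
    int status = unpack_sha1_short_header(&stream, map, mapsize,
                                          buffer, bufsiz);
    if (status < Z_OK)
            return status;
    /* the "<typename> SP <length> NUL" header must fit in the buffer */
    if (!memchr(buffer, '\0', stream.next_out - (unsigned char *)buffer))
            return -1;
    return 0;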
Reported-by: Gustavo Grieco <gustavo.grieco@imag.fr>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The streaming read interface from a loose object called
parse_sha1_header() but discarded its return value, without noticing
a potential error.
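The fix is simply to act on that return value; schematically (the
variables stand in for the stream's header buffer and size field):
    if (parse_sha1_header(hdr, &size) < 0)
            return -1;   /* propagate the error instead of pressing on */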
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Mark strings for translation in lib/index.tcl that were seemingly
left behind by 700e560 ("git-gui: Mark forgotten strings for
translation.", 2008-09-04), which marked strings in the
do_revert_selection procedure.
These strings are passed to unstage_help and add_helper procedures.
Signed-off-by: Vasco Almeida <vascomalmeida@sapo.pt>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Starting with v2.5.0 git merge can handle FETCH_HEAD internally and
warns when it's called like 'git merge <message> HEAD <commit>' because
that syntax is deprecated. Use this feature in git-gui and get rid of
that warning.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Tested-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a semantic patch for converting certain calls of memcpy(3) to
COPY_ARRAY() and apply that transformation to the code base. The result is
shorter and safer code. For now only consider calls where source and
destination have the same type, or in other words: easy cases.
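An easy case looks like this after conversion (illustrative type and
names, not taken from the patch):
    struct item { int id; };
    static void copy_items(struct item *dst, const struct item *src, size_t nr)
    {
            /* was: memcpy(dst, src, nr * sizeof(*dst)); */
            COPY_ARRAY(dst, src, nr);
    }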
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add COPY_ARRAY, a safe and convenient helper for copying arrays,
complementing ALLOC_ARRAY and REALLOC_ARRAY. Users just specify source,
destination and the number of elements; the size of an element is
inferred automatically.
It checks if the multiplication of size and element count overflows.
The inferred size is passed first to st_mult, which allows the division
there to be done at compilation time.
As a basic type safety check it makes sure the sizes of source and
destination elements are the same. That's evaluated at compilation time
as well.
COPY_ARRAY is safe to use with NULL as source pointer iff 0 elements are
to be copied. That convention is used in some cases for initializing
arrays. Raw memcpy(3) does not support it -- compilers are allowed to
assume that only valid pointers are passed to it and can optimize away
NULL checks after such a call.
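A small usage sketch (invented names; ALLOC_ARRAY and COPY_ARRAY come
from git-compat-util.h) showing the convention described above:
    static int *dup_ints(const int *src, size_t nr)
    {
            int *dst;
            ALLOC_ARRAY(dst, nr);      /* overflow-checked allocation */
            COPY_ARRAY(dst, src, nr);  /* fine even if nr == 0 and src == NULL */
            return dst;
    }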
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "highlight" binary can, in some cases, determine the language type
by means of the file contents, for example the shebang in the first line
for some scripting languages. Make use of this autodetection for files
whose syntax is not known to gitweb. In that case, pass the blob
contents to "highlight --force"; the parameter is needed to make it
always generate HTML output (which includes HTML-escaping).
Although we now run highlight on files which do not end up highlighted,
performance is virtually unaffected because when we call highlight, it
is used for escaping HTML. In the case that highlight is used, gitweb
calls sanitize() instead of esc_html(), and the latter is significantly
slower (it does more, being roughly a superset of sanitize()). A simple
benchmark comparing performance of the 'blob' view of files without syntax
highlighting in gitweb before and after this change indicates a ±1%
difference in request time for all file types. The benchmark was performed
on a local instance on Debian, using the Apache/2.4.23 web server and CGI.
Document the feature, improve the syntax highlighting documentation, and
add a test to ensure gitweb doesn't crash when language detection is used.
Signed-off-by: Ian Kelling <ian@iankelling.org>
Acked-by: Jakub Narębski <jnareb@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function needs_work_tree_config() that is called from
create_default_files() is supposed to be fed the path to ".git" that
looks as if it is at the top of the working tree, and decide if that
location matches the actual worktree being used. This comparison allows
"git init" to decide if core.worktree needs to be recorded in the
working tree.
In the current code, however, we feed the return value from
get_git_dir(), which can be totally different from what the function
expects when "gitdir" file is involved. Instead of giving the path to
the ".git" at the top of the working tree, we end up feeding the actual
path that the file points at.
This original location of ".git", however, is only known to init_db().
Make init_db() save it and have it passed to create_default_files() as a
new parameter, which passes the correct location down to
needs_work_tree_config() to fix this.
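Schematically, the flow becomes (a sketch; the parameter name is
illustrative):
    static int create_default_files(const char *template_path,
                                    const char *original_git_dir)
    {
            const char *work_tree = get_git_work_tree();
            /* ... template copying and other setup elided ... */
            /* compare the original ".git" location, not get_git_dir(),
             * against the work tree when deciding about core.worktree */
            if (needs_work_tree_config(original_git_dir, work_tree))
                    git_config_set("core.worktree", work_tree);
            return 0;
    }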
Noticed-by: Max Nordlund <max.nordlund@sqore.com>
Helped-by: Michael J Gruber <git@drmicha.warpmail.net>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is a pure code move, necessary to kill the global variable git_link
later (and also helps a bit in the next patch).
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The next commit requires that set_git_dir_init() be called before
init_db(). Let's make sure nobody can do otherwise.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When 'git init' is called from a linked worktree, we treat the '.git'
dir (which is $GIT_COMMON_DIR/worktrees/something) as the main
'.git' (i.e. $GIT_COMMON_DIR) and populate the whole repository skeleton
in there. It does not harm anything (*) but it is still wrong.
Since 'git init' calls set_git_dir() at preparation time, which
indirectly calls get_common_dir() and correctly detects multiple
worktree setup, all git_path_buf() calls in create_default_files() will
return correct paths in both single and multiple worktree setups. The
only thing left is copy_templates(), which targets $GIT_DIR, not
$GIT_COMMON_DIR.
Fix that with get_git_common_dir(). This function will return $GIT_DIR
in a single-worktree setup, so we don't have to make a special case for
multiple worktrees here.
(*) It does, in fact, thanks to another bug. More on that later.
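For illustration, the essence of the fix in copy_templates() is a
single destination change (a sketch; the surrounding string handling
is simplified):
    /* build destination paths under the shared repository directory;
     * get_git_common_dir() equals get_git_dir() when there is only one
     * worktree */
    strbuf_addf(&path, "%s/", get_git_common_dir());  /* was: get_git_dir() */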
Noticed-by: Max Nordlund <max.nordlund@sqore.com>
Helped-by: Michael J Gruber <git@drmicha.warpmail.net>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The MAX_IN_VAIN mechanism was introduced in commit f061e5f ("fetch-pack:
give up after getting too many "ack continue"", 2006-05-24) to stop ref
negotiation if a number of consecutive "have"s have been sent with no
corresponding new acks. This is to stop the client from digging too deep
in an irrelevant side branch in vain without ever finding a common
ancestor. A use case (as described in that commit) is the scenario in
which the local repository has more roots than the remote repository.
However, during a negotiation in which stateless RPCs are used,
MAX_IN_VAIN will (almost) never trigger (in the more-roots scenario
above and others) because in each new request, the client has to inform
the server of objects it already has and knows the server has (to remind
the server of the state), which the server then acks.
Make fetch-pack only consider, as new acks for the purpose of
MAX_IN_VAIN, acks for objects for which the client has never received an
ack before in this session.
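A rough sketch of the rule (the SEEN_ACK flag is hypothetical and only
for illustration; the real patch tracks this state differently in
fetch-pack.c):
    #define SEEN_ACK (1u << 25)  /* hypothetical flag bit, illustration only */
    /* only the first ack for a given object counts as progress */
    if (!(commit->object.flags & SEEN_ACK)) {
            commit->object.flags |= SEEN_ACK;
            in_vain = 0;
    }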
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We call getaddrinfo() to try to convert a short hostname
into a fully-qualified one (to use it as an email domain).
If there isn't a canonical name, getaddrinfo() will
generally return either a NULL addrinfo list, or one in
which ai->ai_canonname is a copy of the original name.
However, if the result of gethostname() looks like an IP
address, then getaddrinfo() behaves differently on some
systems. On OS X, it will return a "struct addrinfo" with a
NULL ai_canonname, and we segfault feeding it to strchr().
This is hard to test reliably because it involves not only a
system where we have to fall back to gethostname() to come
up with an ident, but also one where the hostname is a number
with no dots. But I was able to replicate the bug by faking
a hostname, like:
diff --git a/ident.c b/ident.c
index e20a772..b790d28 100644
--- a/ident.c
+++ b/ident.c
@@ -128,6 +128,7 @@ static void add_domainname(struct strbuf *out, int *is_bogus)
 		*is_bogus = 1;
 		return;
 	}
+	xsnprintf(buf, sizeof(buf), "1");
 	if (strchr(buf, '.'))
 		strbuf_addstr(out, buf);
 	else if (canonical_name(buf, out) < 0) {
and running "git var GIT_AUTHOR_IDENT" on an OS X system.
Before this patch it segfaults, and afterwards we correctly
complain about the bogus "user@1.(none)" address (though this
bogus address would be suitable for non-object uses like
writing reflogs).
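The fix itself is then to tolerate a missing canonical name in
canonical_name(); roughly (not the exact hunk):
    /* getaddrinfo() may hand back an addrinfo whose ai_canonname is
     * NULL, so check before dereferencing it */
    if (ai && ai->ai_canonname && strchr(ai->ai_canonname, '.')) {
            strbuf_addstr(out, ai->ai_canonname);
            status = 0;
    }
    freeaddrinfo(ai);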
Reported-by: Jonas Thiel <jonas.lierschied@gmx.de>
Diagnosed-by: John Keeping <john@keeping.me.uk>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a static initializer for struct checkout and use it throughout the
code base. It's shorter, avoids a memset(3) call and makes sure the
base_dir member is initialized to a valid (empty) string.
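Assuming the new initializer is spelled CHECKOUT_INIT, a typical call
site changes like this (sketch):
    /* before:
     *     struct checkout state;
     *     memset(&state, 0, sizeof(state));
     * after (base_dir starts out as "", not NULL):
     */
    struct checkout state = CHECKOUT_INIT;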
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Back when this guide was written, cvsimport was the only
game in town. These days it is probably not the best option.
Rather than go into details, let's point people to the note
at the top of cvsimport which gives other options.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The old page gives a 404 now. Searching for "cvsps" via
Google returns a GitHub project page as the top hit.
Reported-by: Dan Pritts
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
parsecvs maintenance was taken over by ESR, and the name
changed to cvs-fast-export as it learned to support that
output format. Let's point to cvs-fast-export, as it should
have additional bug-fixes and be more convenient to use.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When cloning with "--recursive", we'd generally expect
submodules to show progress reports if the main clone did,
too.
In older versions of git, this mostly worked out of the
box. Since we show progress by default when stderr is a tty,
and since the child clones inherit the parent stderr, both
processes would come to the same decision by default.
If the parent clone was asked for "--quiet", we passed down
"--quiet" to the child. However, if stderr was not a tty and
the user specified "--progress", we did not propagate this
to the child.
That's a minor bug, but things got much worse when we
switched recently to submodule--helper's update_clone
command. With that change, the stderr of the child clones
is always connected to a pipe, and we never output
progress at all.
This patch teaches git-submodule and git-submodule--helper
how to pass down an explicit "--progress" flag when cloning.
The clone command then decides to propagate that flag based
on the cloning decision made earlier (which takes into
account isatty(2) of the parent process, existing --progress
or --quiet flags, etc). Since the child processes always run
without a tty on stderr, we don't have to worry about
passing an explicit "--no-progress"; it's the default for
them.
This fixes the recent loss of progress during recursive
clones. And as a bonus, it makes:
git clone --recursive --progress ... 2>&1 | cat
work by triggering progress explicitly in the children.
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The verify_packfile() function does not explicitly open the packfile;
instead, it starts with a sha1 checksum over the whole pack,
and relies on use_pack() to open the packfile as a side
effect.
If the pack cannot be opened for whatever reason (either
because its header information is corrupted, or perhaps
because a simultaneous repack deleted it), then use_pack()
will die(), as it has no way to return an error. This is not
ideal, as verify_packfile() otherwise tries to gently return
an error (this lets programs like git-fsck go on to check
other packs).
Instead, let's check is_pack_valid() up front, and return an
error if it fails. This will open the pack as a side effect,
and then use_pack() will later rely on our cached
descriptor, and avoid calling die().
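The gentle error path then looks roughly like this at the start of
verify_packfile() (a sketch, not the exact hunk):
    if (!is_pack_valid(p))
            return error("packfile %s cannot be accessed", p->pack_name);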
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The TravisCI macOS build is broken because homebrew (a macOS dependency
manager) changed its internal directory structure [1]. This is a problem
because we modify the Perforce dependencies in the homebrew repository
before installing them.
Fix it by asking homebrew for its path instead of hardcoding it.
[1] 0a09ae30f8
Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git gc --aggressive" used to limit the delta-chain length to 250,
which is way too deep for gaining additional space savings and is
detrimental for runtime performance. The limit has been reduced to
50.
* jk/reduce-gc-aggressive-depth:
gc: default aggressive depth to 50
Even when "git pull --rebase=preserve" (and the underlying "git
rebase --preserve") can complete without creating any new commit
(i.e. fast-forwards), it still insisted on having usable ident
information (read: user.email is set correctly), which was less
than nice. As the underlying commands used inside "git rebase"
would fail with a more meaningful error message and advice text
when the bogus ident matters, this extra check was removed.
* jk/rebase-i-drop-ident-check:
rebase-interactive: drop early check for valid ident
More i18n.
* va/i18n:
i18n: update-index: mark warnings for translation
i18n: show-branch: mark plural strings for translation
i18n: show-branch: mark error messages for translation
i18n: receive-pack: mark messages for translation
notes: spell first word of error messages in lowercase
i18n: notes: mark error messages for translation
i18n: merge-recursive: mark verbose message for translation
i18n: merge-recursive: mark error messages for translation
i18n: config: mark error message for translation
i18n: branch: mark option description for translation
i18n: blame: mark error messages for translation
"git format-patch --base=..." feature that was recently added
showed the base commit information after "-- " e-mail signature
line, which turned out to be inconvenient. The base information
has been moved above the signature line.
* jt/format-patch-base-info-above-sig:
format-patch: show base info before email signature
Performance tests done via "t/perf" did not use the same build
configuration as the user's main build if the user relied on
autoconf-generated configuration.
* ks/perf-build-with-autoconf:
t/perf/run: copy config.mak.autogen & friends to build area
"git diff -W" output needs to extend the context backward to
include the header line of the current function and also forward to
include the body of the entire current function up to the header
line of the next one. This process may have to merge two adjacent
hunks, but the code forgot to do so in some cases.
* rs/xdiff-merge-overlapping-hunks-for-W-context:
xdiff: fix merging of hunks with -W context and -u context
There were numerous corner cases in which the configuration files
were read and used, or not read at all, depending on the directory a
Git command was run in, leading to inconsistent behaviour. The code
to set up repository access at the beginning of a Git process has
been updated to fix them.
* jk/setup-sequence-update:
t1007: factor out repeated setup
init: reset cached config when entering new repo
init: expand comments explaining config trickery
config: only read .git/config from configured repos
test-config: setup git directory
t1302: use "git -C"
pager: handle early config
pager: use callbacks instead of configset
pager: make pager_program a file-local static
pager: stop loading git_default_config()
pager: remove obsolete comment
diff: always try to set up the repository
diff: handle --no-index prefixes consistently
diff: skip implicit no-index check when given --no-index
patch-id: use RUN_SETUP_GENTLY
hash-object: always try to set up the git repository
The http transport (with the curl-multi option, which is the default
these days) failed to remove a curl-easy handle from a curlm session,
which led to unnecessary API failures.
* ew/http-do-not-forget-to-call-curl-multi-remove-handle:
http: always remove curl easy from curlm session on release
http: consolidate #ifdefs for curl_multi_remove_handle
http: warn on curl_multi_add_handle failures
Some codepaths in "git pack-objects" were not ready to use an
existing pack bitmap; now they are, and as a result they have
become faster.
* ks/pack-objects-bitmap:
pack-objects: use reachability bitmap index when generating non-stdout pack
pack-objects: respect --local/--honor-pack-keep/--incremental when bitmap is in use
"git log --cherry-pick" used to include merge commits as candidates
to be matched up with other commits, resulting in a lot of wasted time.
The patch-id generation logic has been updated to ignore merges to
avoid the wastage.
* jk/patch-ids-no-merges:
patch-ids: refuse to compute patch-id for merge commit
patch-ids: turn off rename detection
Recently we updated the code to manage the in-core cache that holds
objects that have recently been used to reconstitute other objects
that are stored as deltas against them, but the update used an
incorrect API function to manage the list of these objects. This
has been fixed.
* jk/delta-base-cache:
add_delta_base_cache: use list_for_each_safe
Even though "git hash-objects", which is a tool to take an
on-filesystem data stream and put it into the Git object store,
allowed to perform the "outside-world-to-Git" conversions (e.g.
end-of-line conversions and application of the clean-filter), and
it had the feature on by default from very early days, its reverse
operation "git cat-file", which takes an object from the Git object
store and externalize for the consumption by the outside world,
lacked an equivalent mechanism to run the "Git-to-outside-world"
conversion. The command learned the "--filters" option to do so.
* js/cat-file-filters:
cat-file: support --textconv/--filters in batch mode
cat-file --textconv/--filters: allow specifying the path separately
cat-file: introduce the --filters option
cat-file: fix a grammo in the man page