The only code that cares about the value of the global variable
saved_env_before_alias after the previous fix is handle_builtin(),
which turns into a glorified no-op when the variable is true, so the
logic can safely be lifted to its caller, i.e. the caller can
refrain from calling handle_builtin() when the variable is set.
This variable tells us if save_env_before_alias() was called (with
or without a matching restore_env()), but the sole caller of the
function, handle_alias(), always calls it as the very first thing it
does, so the variable essentially keeps track of whether
handle_alias() has ever been called.
It turns out that handle_builtin() and handle_alias() are called in
a way where the value of the variable matters from only one
function, run_argv(), and that function already keeps track of
whether it has called handle_alias().
So we can simplify the whole thing by:
- changing handle_builtin() to always make a direct call to the
  builtin implementation it finds, and making sure the caller
  refrains from calling it if handle_alias() has ever been
  called;
- removing the saved_env_before_alias variable, and instead using
  the local "done_alias" variable maintained inside run_argv() to
  make the same decision, as sketched below.
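For illustration, the resulting run_argv() would look roughly like
this (a sketch only; the exact code may differ in details):

    static int run_argv(int *argcp, const char ***argv)
    {
        int done_alias = 0;

        while (1) {
            /*
             * Once an alias has been expanded and the environment
             * possibly changed, it is no longer safe to run
             * builtins in-process; fall through to the dashed
             * external codepath instead.
             */
            if (!done_alias)
                handle_builtin(*argcp, *argv);

            /* ... then try the external ones */
            execv_dashed_external(*argv);

            /* expand an alias at most once */
            if (done_alias || !handle_alias(argcp, argv))
                break;
            done_alias = 1;
        }

        return done_alias;
    }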
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With the previous step, we made sure that save_env_before_alias()
does not skip saving the environment when asked to (the skipping is
what led to the use-after-free of orig_cwd in restore_env() in the
buggy version).
Protect against future breakage where somebody adds new callers of
these functions in an unbalanced fashion.
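A balance check in the two helpers is enough for that; roughly (the
variable name is illustrative, and the real signatures may differ):

    static int env_balance;

    static void save_env_before_alias(void)
    {
        assert(!env_balance);
        env_balance = 1;
        /* ... save orig_cwd and the GIT_* variables ... */
    }

    static void restore_env(void)
    {
        assert(env_balance);
        env_balance = 0;
        /* ... chdir() back to orig_cwd, restore GIT_* ... */
    }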
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When help.autocorrect is in effect, an attempt to auto-execute a
uniquely corrected result of a misspelt alias will result in an
irrelevant error message. The codepath that causes this calls
save_env_before_alias() and restore_env() in handle_alias(), and
that happens twice. A global variable orig_cwd is allocated to hold
the return value of getcwd() in save_env_before_alias(), which is
then used in restore_env() to go back to that directory and finally
free(3)'d there.
However, save_env_before_alias() is not prepared to be called twice.
It returns early when it knows it has already been called, leaving
orig_cwd untouched, and that stale pointer is then used in the
second call to restore_env(). By that time, the memory that used to
hold the contents of orig_cwd has either been freed or reused to
hold something else, and this is fed to chdir(2), causing it to
fail. Even if it did not fail (i.e. reading the already-freed piece
of memory yielded a directory path that we can chdir(2) to), the
pointer then gets free(3)'d a second time.
Fix this by making sure save_env_before_alias() does do the saving
when called.
While at it, add a minimal test for help.autocorrect facility.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When git executes a sub-command, we print a warning if the
command dies due to a signal, but make an exception for
"uninteresting" cases like SIGINT and SIGQUIT (since the
user presumably just hit ^C).
We should make a similar exception for SIGPIPE, because it's
an expected and uninteresting return in most cases; it
generally means the user quit the pager before git had
finished generating all output. This used to be very hard
to trigger in practice, because:
1. We only complain if we see a real SIGPIPE death, not
the shell-induced 141 exit code. This means that
anything we run via the shell does not trigger the
warning, which includes most non-trivial aliases.
2. The common case for SIGPIPE is the user quitting the
pager before git has finished generating all output.
But if the user triggers a pager with "-p", we redirect
the git wrapper's stderr to that pager, too. Since the
pager is dead, it means that the message goes nowhere.
3. You can see it if you run your own pager, like
"git foo | head". But that only happens if "foo" is a
non-builtin (so it doesn't work with "log", for
example).
However, it may become more common after 86d26f2, which
teaches alias to re-exec builtins rather than running them
in the same process. This case doesn't trigger (1), as we
don't need a shell to run a git command. It doesn't trigger
(2), because the pager is not started by the original git,
but by the inner re-exec of git. And it doesn't trigger (3),
because builtins are treated more like non-builtins in this
case.
Given how flaky this message already is (e.g., you cannot
even know whether you will see it, as git optimizes out some
shell invocations behind the scenes based on the contents of
the command!), and that it is unlikely to ever provide
useful information, let's suppress it for all cases of
SIGPIPE.
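The mechanical change is small: the wait helper that reports signal
deaths just gains one more exception, along the lines of

    if (code != SIGINT && code != SIGQUIT && code != SIGPIPE)
        error("%s died of signal %d", argv0, code);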
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The unfortunate commit d95138e (setup: set env $GIT_WORK_TREE when
work tree is set, like $GIT_DIR - 2015-06-26) exposes another
problem, besides the git-clone breakage described in the previous
commit. If GIT_WORK_TREE (or even GIT_DIR) is exported to an alias
script, it may mislead git commands in the script about where the
repo is. Granted, most scripts work on the repo the alias is
summoned from, but nowhere do we forbid the script from visiting
another repository.
The revert of d95138e in the previous commit is sufficient as a
fix. However, to protect us from accidentally leaking GIT_*
environment variables again, we restore certain sensitive
environment variables before calling the external script.
GIT_PREFIX is let through because there is another setup side
effect that we have simply accepted so far: the current working
directory is moved. Maybe in the future we can introduce a new alias
format that guarantees no cwd move; then we can unexport GIT_PREFIX.
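Roughly, the restoration loop looks like this (condensed sketch; the
real list of variables is a bit longer):

    static const char *const env_names[] = {
        GIT_DIR_ENVIRONMENT,
        GIT_WORK_TREE_ENVIRONMENT,
        GIT_PREFIX_ENVIRONMENT,
        /* ... */
    };
    static char *orig_env[ARRAY_SIZE(env_names)];

    static void restore_env(int external_alias)
    {
        int i;

        for (i = 0; i < ARRAY_SIZE(env_names); i++) {
            /* GIT_PREFIX is let through for external aliases */
            if (external_alias &&
                !strcmp(env_names[i], GIT_PREFIX_ENVIRONMENT))
                continue;
            if (orig_env[i])
                setenv(env_names[i], orig_env[i], 1);
            else
                unsetenv(env_names[i]);
        }
    }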
Reported-by: Gabriel Ganne <gabriel.ganne@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit d95138e [1] attempted to fix a .git file problem by
setting GIT_WORK_TREE whenever GIT_DIR is set. It sounded harmless
because we handle GIT_DIR and GIT_WORK_TREE side by side for most
commands, with two exceptions: git-init and git-clone.
"git clone" is not happy with d95138e. This command ignores GIT_DIR
but respects GIT_WORK_TREE [2] [3] which means it used to run fine
from a hook, where GIT_DIR was set but GIT_WORK_TREE was not (*).
With d95138e, GIT_WORK_TREE is set all the time and git-clone
interprets that as "I give you order to put the worktree here",
usually against the user's intention.
The solution in d95138e was reverted earlier in this series;
instead we reuse the solution from c056261 [4], which fixed another
setup-messed-up-by-alias problem by saving and restoring the
environment and spawning a new process, but only for git-clone and
git-init.
Now we conclude that setup-messed-up-by-alias is always evil. So the
env restoration is done for _all_ commands, including external ones,
whenever aliases are involved. It fixes what d95138e tried to fix,
without upsetting git-clone-inside-hooks.
The test from d95138e remains to verify it's not broken by this. A new
test is added to make sure git-clone-inside-hooks remains happy.
(*) GIT_WORK_TREE was not set _most of the time_. In some cases
GIT_WORK_TREE is set and git-clone will behave differently. The
use of GIT_WORK_TREE to direct git-clone to put the work tree
elsewhere looks like a mistake because it causes surprises this
way. But that's a separate story.
[1] d95138e (setup: set env $GIT_WORK_TREE when work tree is set, like
$GIT_DIR - 2015-06-26)
[2] 2beebd2 (clone: create intermediate directories of destination
repo - 2008-06-25)
[3] 20ccef4 (make git-clone GIT_WORK_TREE aware - 2007-07-06)
[4] c056261 (git potty: restore environments after alias expansion -
2014-06-08)
Reported-by: Anthony Sottile <asottile@umich.edu>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The previous commit enforces MAX_XDIFF_SIZE at the
interfaces to xdiff: xdi_diff (which calls xdl_diff) and
ll_xdl_merge (which calls xdl_merge).
But we have another direct call to xdl_merge in
merge-file.c. If it were written today, this probably would
just use the ll_merge machinery. But it predates that code,
and uses slightly different options to xdl_merge (e.g.,
ZEALOUS_ALNUM).
We could try to abstract out an xdi_merge to match the
existing xdi_diff, but even that is difficult: rather than
simply reporting an error, we try to treat large files as binary,
and that distinction would have to happen outside of xdi_merge.
The simplest fix is to just replicate the MAX_XDIFF_SIZE
check in merge-file.c.
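That check is tiny; the read loop in merge-file.c ends up with
something like:

    if (read_mmfile(mmfs + i, fname))
        return -1;
    if (mmfs[i].size > MAX_XDIFF_SIZE ||
        buffer_is_binary(mmfs[i].ptr, mmfs[i].size))
        return error("Cannot merge binary files: %s", argv[i]);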
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The xdiff code is not prepared to handle extremely large
files. It uses "int" in many places, which can overflow if
we have a very large number of lines or even bytes in our
input files. This can cause us to produce incorrect diffs,
with no indication that the output is wrong. Or worse, we
may even underallocate a buffer whose size is the result of
an overflowing addition.
We're much better off telling the user that we cannot diff
or merge such a large file. This patch covers both cases,
but in slightly different ways:
1. For merging, we notice the large file and cleanly fall
back to a binary merge (which is effectively "we cannot
merge this").
2. For diffing, we make the binary/text distinction much
earlier, and in many different places. For this case,
we'll use xdi_diff as our choke point, and reject
any diff there before it hits the xdiff code.
This means in most cases we'll die() immediately after.
That's not ideal, but in practice we shouldn't
generally hit this code path unless the user is trying
to do something tricky. We already consider files
larger than core.bigfilethreshold to be binary, so this
code would only kick in when that is circumvented
(either by bumping that value, or by using a
.gitattributes entry to mark a file as diffable).
In other words, we can avoid being "nice" here, because
there is already nice code that tries to do the right
thing. We are adding the suspenders to the nice code's
belt, so we notice when it has been worked around (both to
protect the user from malicious inputs, and because it
is better to die() than generate bogus output).
The maximum size was chosen after experimenting with feeding
large files to the xdiff code. It's just under a gigabyte,
which leaves room for two obvious cases:
- a diff3 merge conflict result on files of maximum size X
could be 3*X plus the size of the markers, which would
still be only about 3G, which fits in a 32-bit int.
- some of the diff code allocates arrays of one int per
record. Even if each file consists only of blank lines,
then a file smaller than 1G will have fewer than 1G
records, and therefore the int array will fit in 4G.
Since the limit is arbitrary anyway, I chose to go under a
gigabyte, to leave a safety margin (e.g., we would not want
to overflow by allocating "(records + 1) * sizeof(int)" or
similar).
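Concretely, the limit and the choke-point check are along these
lines (a sketch; the constant lives next to the xdi_* helpers):

    /* 1GB minus a safety margin, to stay clear of integer overflow */
    #define MAX_XDIFF_SIZE (1024UL * 1024 * 1023)

    int xdi_diff(mmfile_t *mf1, mmfile_t *mf2, xpparam_t const *xpp,
                 xdemitconf_t const *xecfg, xdemitcb_t *xecb)
    {
        mmfile_t a = *mf1;
        mmfile_t b = *mf2;

        if (mf1->size > MAX_XDIFF_SIZE || mf2->size > MAX_XDIFF_SIZE)
            return -1;

        return xdl_diff(&a, &b, xpp, xecfg, xecb);
    }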
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we call into xdiff to perform a diff, we generally lose
the return code completely. Typically we ignore the return
of our xdi_diff wrapper, but sometimes we even propagate
that return value up and then ignore it later. This can
lead to us silently producing incorrect diffs (e.g., "git
log" might produce no output at all, not even a diff header,
for a content-level diff).
In practice this does not happen very often, because the
typical reason for xdiff to report failure is that it
malloc() failed (it uses straight malloc, and not our
xmalloc wrapper). But it could also happen when xdiff
triggers one of our callbacks, which returns an error (e.g.,
outf() in builtin/rerere.c tries to report a write failure
in this way). And the next patch plans to add more
failure modes.
Let's notice an error return from xdiff and react
appropriately. In most of the diff.c code, we can simply
die(), which matches the surrounding code (e.g., that is
what we do if we fail to load a file for diffing in the
first place). This is not that elegant, but we are probably
better off dying to let the user know there was a problem,
rather than simply generating bogus output.
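For example, a typical caller in diff.c ends up looking roughly like
this (sketch):

    if (xdi_diff_outf(&mf1, &mf2, fn_out_consume, &ecbdata,
                      &xpp, &xecfg))
        die("unable to generate diff for %s", one->path);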
We could also just die() directly in xdi_diff, but the
callers typically have a bit more context, and can provide a
better message (and if we do later decide to pass errors up,
we're one step closer to doing so).
There is one interesting case, which is in diff_grep(). Here
if we cannot generate the diff, there is nothing to match,
and we silently return "no hits". This is actually what the
existing code does already, but we make it a little more
explicit.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
By default, libcurl will follow circular http redirects
forever. Let's put a cap on this so that somebody who can
trigger an automated fetch of an arbitrary repository (e.g.,
for CI) cannot convince git to loop infinitely.
The value chosen is 20, which is the same default that
Firefox uses.
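The change boils down to one option on the curl handle, roughly:

    curl_easy_setopt(result, CURLOPT_MAXREDIRS, 20);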
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previously, libcurl would follow redirection to any protocol
it was compiled to support. This is desirable to allow
redirection from HTTP to HTTPS. However, it would even
successfully allow redirection from HTTP to SFTP, a protocol
that git does not otherwise support at all. Furthermore,
git's new protocol-whitelisting could be bypassed by
following a redirect within the remote helper, as it was
only enforced at transport selection time.
This patch limits redirects within libcurl to HTTP, HTTPS,
FTP and FTPS. If there is a protocol-whitelist present, this
list is limited to those also allowed by the whitelist. As
redirection happens from within libcurl, it is impossible for an
HTTP request to be redirected to a protocol implemented within
another remote helper.
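Ignoring the whitelist interaction for a moment, the core of the
change is setting libcurl's redirect-protocol mask, roughly (the
real code further intersects this mask with the whitelist):

    #if LIBCURL_VERSION_NUM >= 0x071304
        curl_easy_setopt(result, CURLOPT_REDIR_PROTOCOLS,
                         CURLPROTO_HTTP | CURLPROTO_HTTPS |
                         CURLPROTO_FTP | CURLPROTO_FTPS);
    #endif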
When the curl version git was compiled with is too old to
support restrictions on protocol redirection, we warn the
user if GIT_ALLOW_PROTOCOL restrictions were requested. This
is a little inaccurate, as even without that variable in the
environment, we would still restrict SFTP, etc, and we do
not warn in that case. But anything else means we would
literally warn every time git accesses an http remote.
This commit includes a test, but it is not as robust as we
would hope. It redirects an http request to ftp, and checks
that curl complained about the protocol, which means that we
are relying on curl's specific error message to know what
happened. Ideally we would redirect to a working ftp server
and confirm that we can clone without protocol restrictions,
and not with them. But we do not have a portable way of
providing an ftp server, nor any other protocol that curl
supports (https is the closest, but we would have to deal
with certificates).
[jk: added test and version warning]
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The current callers only want to die when their transport is
prohibited. But future callers want to query the mechanism
without dying.
Let's break out a few query functions, and also save the
results in a static list so we don't have to re-parse for
each query.
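The interface ends up with something like this (names illustrative):

    /* transport.h */
    int is_transport_allowed(const char *type);
    void transport_check_allowed(const char *type);

with the parsed GIT_ALLOW_PROTOCOL list cached in a static
string_list behind those functions, so the environment variable is
parsed at most once per process.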
Based-on-a-patch-by: Blake Burkhart <bburky@bburky.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some protocols (like git-remote-ext) can execute arbitrary
code found in the URL. The URLs that submodules use may come
from arbitrary sources (e.g., .gitmodules files in a remote
repository). Let's restrict submodules to fetching from a
known-good subset of protocols.
Note that we apply this restriction to all submodule
commands, whether the URL comes from .gitmodules or not.
This is more restrictive than we need to be; for example, in
the tests we run:
    git submodule add ext::...
which should be trusted, as the URL comes directly from the
command line provided by the user. But doing it this way is
simpler, and makes it much less likely that we would miss a
case. And since such protocols should be an exception
(especially because nobody who clones from them will be able
to update the submodules!), it's not likely to inconvenience
anyone in practice.
Reported-by: Blake Burkhart <bburky@bburky.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we are cloning an untrusted remote repository into a
sandbox, we may also want to fetch remote submodules in
order to get the complete view as intended by the other
side. However, that opens us up to attacks where a malicious
user gets us to clone something they would not otherwise
have access to (this is not necessarily a problem by itself,
but we may then act on the cloned contents in a way that
exposes them to the attacker).
Ideally such a setup would sandbox git entirely away from
high-value items, but this is not always practical or easy
to set up (e.g., OS network controls may block multiple
protocols, and we would want to enable some but not others).
We can help this case by providing a way to restrict
particular protocols. We use a whitelist in the environment.
This is more annoying to set up than a blacklist, but
defaults to safety if the set of protocols git supports
grows. If no whitelist is specified, we continue to default
to allowing all protocols (this is an "unsafe" default, but
since only a minority of users will want this sandboxing
effect, it is the only sensible one).
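The whitelist is read from GIT_ALLOW_PROTOCOL as a colon-separated
list (e.g. GIT_ALLOW_PROTOCOL=https:git:ssh); the check itself is
roughly:

    void transport_check_allowed(const char *type)
    {
        struct string_list allowed = STRING_LIST_INIT_DUP;
        const char *v = getenv("GIT_ALLOW_PROTOCOL");

        if (!v)
            return; /* no whitelist: allow everything */

        string_list_split(&allowed, v, ':', -1);
        if (!unsorted_string_list_has_string(&allowed, type))
            die("transport '%s' not allowed", type);
        string_list_clear(&allowed, 0);
    }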
A note on the tests: ideally these would all be in a single
test file, but the git-daemon and httpd test infrastructure
is an all-or-nothing proposition rather than a test-by-test
prerequisite. By putting them all together, we would be
unable to test the file-local code on machines without
apache.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The experimental untracked-cache feature was buggy when paths with
a few levels of subdirectories were involved.
* dt/untracked-subdir:
untracked cache: fix entry invalidation
untracked-cache: fix subdirectory handling
t7063: use --force-untracked-cache to speed up a bit
untracked-cache: support sparse checkout
f8da6801 (am --skip: support skipping while on unborn branch,
2015-06-06) introduced a performance regression to "git am --skip",
where it used "read-tree" to reconstruct the index from scratch
without reusing the cached stat information.
This is a backport of the corresponding patch to the builtin am in 2.6:
3ecc704 (am --skip/--abort: merge HEAD/ORIG_HEAD tree into index,
2015-08-19).
Reportedly, it can make a huge difference on Windows; in one case a `git
rebase --skip` took 1m40s without, and 5s with, this patch.
cf. https://github.com/git-for-windows/git/issues/365
Reported-and-suggested-by: Kim Gybels <kgybels@infogroep.be>
Acked-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we show "branch@{0}", we format into a fixed-size
buffer using sprintf. This can overflow if you have long
branch names. We can fix it by using a temporary strbuf.
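The fix follows the usual pattern (sketch; names illustrative):

    struct strbuf buf = STRBUF_INIT;

    strbuf_addf(&buf, "%s@{0}", branch_name);
    /* ... use buf.buf where the fixed-size buffer used to be ... */
    strbuf_release(&buf);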
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This function assumes that the relative_base path passed
into it is no larger than PATH_MAX, and writes into a
fixed-size buffer. However, this path may not have actually
come from the filesystem; for example, add_submodule_odb
generates a path using a strbuf and passes it in. This is
hard to trigger in practice, though, because the long
submodule directory would have to exist on disk before we
would try to open its info/alternates file.
We can easily avoid the bug, though, by simply creating the
filename on the heap.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we are loading a notes tree into our internal hash
table, we also collect any files that are clearly non-notes.
We format the name of the file into a PATH_MAX buffer, but
unlike true notes (which cannot be larger than a fanned-out
sha1 hash), these tree entries can be arbitrarily long,
overflowing our buffer.
We can fix this by switching to a strbuf. It doesn't even
cost us an extra allocation, as we can simply hand ownership
of the buffer over to the non-note struct.
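Handing over ownership is a one-liner with the strbuf API (sketch;
names approximate):

    struct strbuf non_note_path = STRBUF_INIT;

    /* build the full path of the tree entry into the strbuf ... */
    strbuf_addstr(&non_note_path, entry.path);

    /* ... and give the allocated buffer to the non-note struct */
    n->path = strbuf_detach(&non_note_path, NULL);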
This is of moderate security interest, as you might fetch
notes trees from an untrusted remote. However, we do not do
so by default, so you would have to manually fetch into the
notes namespace.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When unpack-trees wants to know whether a path will
overwrite anything in the working tree, we use lstat() to
see if there is anything there. But if we are going to write
"foo/bar", we can't just lstat("foo/bar"); we need to look
for leading prefixes (e.g., "foo"). So we use the lstat cache
to find the length of the leading prefix, and copy the
filename up to that length into a temporary buffer (since
the original name is const, we cannot just stick a NUL in
it).
The copy we make goes into a PATH_MAX-sized buffer, which
will overflow if the prefix is longer than PATH_MAX. How
this happens is a little tricky, since in theory PATH_MAX is
the biggest path we will have read from the filesystem. But
this can happen if:
- the compiled-in PATH_MAX does not accurately reflect
what the filesystem is capable of
- the leading prefix is not _quite_ what is on disk; it
contains the next element from the name we are checking.
So if we want to write "aaa/bbb/ccc/ddd" and "aaa/bbb"
exists, the prefix of interest is "aaa/bbb/ccc". If
"aaa/bbb" approaches PATH_MAX, then "ccc" can overflow
it.
So this can be triggered, but it's hard to do. In
particular, you cannot just "git clone" a bogus repo. The
verify_absent checks happen before unpack-trees writes
anything to the filesystem, so there are never any leading
prefixes during the initial checkout, and the bug doesn't
trigger. And by definition, these paths are longer than
PATH_MAX, so writing the files will fail, and clone will
complain (though it may write a partial path, which will
cause a subsequent "git checkout" to hit the bug).
We can fix it by creating the temporary path on the heap.
The extra malloc overhead is not important, as we are
already making at least one stat() call (and probably more
for the prefix discovery).
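A sketch of the fix (names approximate): copy the prefix into a
heap-allocated buffer instead of a char[PATH_MAX]:

    char *path = xmemdupz(ce->name, len);
    int ret;

    if (lstat(path, &st))
        ret = error("cannot stat '%s': %s", path, strerror(errno));
    else
        ret = check_ok_to_remove(path, len, DT_UNKNOWN, NULL,
                                 &st, error_type, o);
    free(path);
    return ret;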
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git init empty && git -C empty log" said "bad default revision 'HEAD'",
which was found to be a bit confusing to new users.
* jk/log-missing-default-HEAD:
log: diagnose empty HEAD more clearly
The "interpret-trailers" helper mistook a multi-paragraph title of
a commit log message with a colon in it as the end of the trailer
block.
* cc/trailers-corner-case-fix:
trailer: support multiline title
trailer: retitle a test and correct an in-comment message
trailer: ignore first line of message
When re-priming the cache-tree opportunistically while committing
the in-core index as-is, we mistakenly invalidated the in-core
index too aggressively, causing the experimental split-index code
to unnecessarily rewrite the on-disk index file(s).
* dt/commit-preserve-base-index-upon-opportunistic-cache-tree-update:
commit: don't rewrite shared index unnecessarily
"git archive" did not use zip64 extension when creating an archive
with more than 64k entries, which nobody should need, right ;-)?
* rs/archive-zip-many:
archive-zip: support more than 65535 entries
archive-zip: use a local variable to store the creator version
t5004: test ZIP archives with many entries