We did not correctly parse a username followed by a literal IPv6
address in SSH transport URLs,
e.g. ssh://user@[2001:db8::1]:22/repo.git.
* tb/connect-ipv6-parse-fix:
t5500: show user name and host in diag-url
t5601: add more test cases for IPV6
connect.c: allow ssh://user@[2001:db8::1]/repo.git
`ce` is allocated in make_cache_entry and should be freed if it is not
used any more. refresh_cache_entry, as a wrapper around refresh_cache_ent,
will return one of:
- the `ce` given as the parameter, when it was up to date;
- a new, updated cache entry allocated in new memory; or
- NULL, when refreshing failed.
In the latter two cases, the original cache-entry `ce` is not used
and needs to be freed. The rule can be expressed as "if the return
value from refresh is different from the original ce, ce is no
longer used."
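A caller following this rule ends up with roughly the pattern below (an
illustrative sketch only, not the actual patch; `mode`, `sha1`, `path`,
`stage` and `options` stand in for whatever the real call site passes):

  /* sketch, not the actual change */
  struct cache_entry *ce = make_cache_entry(mode, sha1, path, stage, 0);
  struct cache_entry *ret = refresh_cache_entry(ce, options);

  if (ret != ce)
      free(ce);           /* replaced by a new entry, or refresh failed */
  if (!ret)
      return error("could not refresh '%s'", path);
  /* only 'ret' may be used from here on */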
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We allocate a cache-entry pretty early in the function and then
decide either not to do anything when we are pretending to add, or
add it and then get an error (another possibility is obviously to
succeed).
When pretending or failing to add, we forgot to free the
cache-entry.
Noticed during a discussion on Stefan's patch to change the coding
style without fixing the issue ;-)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
preq is NULL, as the condition on the line before dictates. And the cleanup
function release_http_pack_request is not NULL-pointer safe.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These string_list instances were allocated by get_renames() and
get_unmerged() for the sole use of this caller, and the function is
responsible for freeing them, not just their contents.
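Freeing them means releasing both layers: the items via
string_list_clear() and then the struct itself. A hypothetical helper
showing the shape of that cleanup (not the actual merge-recursive.c
change):

  /* sketch of the two-layer cleanup */
  static void free_string_list(struct string_list *list, int free_util)
  {
      if (!list)
          return;
      string_list_clear(list, free_util); /* frees the items (and util, if asked) */
      free(list);                         /* frees the struct itself */
  }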
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
oldlines is allocated earlier in the function and also freed on the
successful code path.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Because run_command both spawns and wait()s for the command
before returning control to the caller, any reads from the
pipes we open must necessarily happen after wait() returns.
This can lead to deadlock, as the child process may block
on writing to us while we are blocked waiting for it to
exit.
Worse, it only happens when the child fills the pipe
buffer, which means that the problem may come and go
depending on the platform and the size of the output
produced by the child.
Let's detect and flag this dangerous construct so that we
can catch potential bugs early in the test suite rather than
having them happen in the field.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we read from a trailer.*.command sub-program, the
current code uses run_command followed by a pipe read, which
can result in deadlock (though in practice you would have to
have a large trailer for this to be a problem). The current
code also leaks the file descriptor for the pipe to the
sub-command.
Instead, let's use capture_command, which makes this simpler
(and we can get rid of our custom helper).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In is_submodule_commit_present, we call run_command followed
by a pipe read, which is prone to deadlock. It is unlikely
to happen in this case, as rev-list should never produce
more than a single line of output, but it does not hurt to
avoid an anti-pattern (and using the helper simplifies the
setup and cleanup).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we spawn "git submodule status" to read its output, we
use run_command() followed by strbuf_read() read from the
pipe. This can deadlock if the subprocess output is larger
than the system pipe buffer.
Furthermore, if start_command() fails, we'll try to read
from a bogus descriptor (probably "-1" or a descriptor we
just closed, but it is a bad idea for us to make assumptions
about how start_command implements its error handling). And
if start_command succeeds, we leak the file descriptor for
the pipe to the child.
All of these can be solved by using the capture_command
helper.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Something as simple as reading the stdout from a command
turns out to be rather hard to do right. Doing:
  cmd.out = -1;
  run_command(&cmd);
  strbuf_read(&buf, cmd.out, 0);
can result in deadlock if the child process produces a large
amount of output. What happens is:
1. The parent spawns the child with its stdout connected
to a pipe, of which the parent is the sole reader.
2. The parent calls wait(), blocking until the child exits.
3. The child writes to stdout. If it writes more data than
the OS pipe buffer can hold, the write() call will
block.
This is a deadlock; the parent is waiting for the child to
exit, and the child is waiting for the parent to call
read().
So we might try instead:
  start_command(&cmd);
  strbuf_read(&buf, cmd.out, 0);
  finish_command(&cmd);
But that is not quite right either. We are examining cmd.out
and running finish_command whether start_command succeeded
or not, which is wrong. Moreover, these snippets do not do
any error handling. If our read() fails, we must make sure
to still call finish_command (to reap the child process).
And both snippets failed to close the cmd.out descriptor,
which they must do (provided start_command succeeded).
Let's introduce a run-command helper that can make this a
bit simpler for callers to get right.
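With such a helper, the whole dance collapses to something like the
sketch below (assuming the helper takes the command, an output strbuf
and a size hint; "some-command" is a placeholder and error handling is
kept minimal):

  /* sketch of a caller using the new helper */
  struct child_process cmd = CHILD_PROCESS_INIT;
  struct strbuf buf = STRBUF_INIT;

  argv_array_push(&cmd.args, "some-command");
  if (capture_command(&cmd, &buf, 0))
      return error("some-command failed");
  /* buf now holds everything the child wrote to its stdout */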
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We do that almost everywhere, because it's faster for a large number of
refs; see a31e62629 (completion: optimize refs completion, 2011-10-15).
These were the last two places where we still used __gitcomp() for
completing refs.
Signed-off-by: SZEDER Gábor <szeder@ira.uka.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We call strbuf_read(), and want to know whether we got any
output. To do so, we assign the result to a size_t, and
check whether it is non-zero.
But strbuf_read returns a signed ssize_t. If it encounters
an error, it will return -1, and we'll end up treating this
the same as if we had gotten output. Instead, we can just
check whether our buffer has anything in it (which is what
we care about anyway, and is the same thing since we know
the buffer was empty to begin with).
Note that the "len" variable actually has two roles in this
function. Now that we've eliminated the first, we can push the
declaration closer to the point of use for the second one.
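The fixed pattern, written as a hypothetical helper rather than the
actual hunk, looks like this:

  /* sketch: report whether the command wrote anything */
  static int produced_output(int fd)
  {
      struct strbuf buf = STRBUF_INIT;
      int ret;

      strbuf_read(&buf, fd, 1024);  /* on error the buffer simply stays empty */
      ret = buf.len > 0;            /* test the buffer, not the ssize_t result */
      strbuf_release(&buf);
      return ret;
  }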
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is a holdover from the original implementation in
ac8d5af (builtin-status: submodule summary support,
2008-04-12), which just had the sub-process output to our
descriptor; we had to make sure we had flushed any data that
we produced before it started writing.
Since 3ba7407 (submodule summary: ignore --for-status
option, 2013-09-06), however, we pipe the sub-process output
back to ourselves. So there's no longer any need to flush
(it does not hurt, but it may leave readers wondering why we
do it).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
`old` is not used outside the loop and would get lost
once we reach the goto.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This frees `ce`, which would otherwise be leaked in the error path.
Additionally, a free is moved towards the return. This helps code
readability, as we usually free resources just before return/exit
rather than in the middle of the code.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add missing &&, detected by the --chain-lint option
Signed-off-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The split index extension uses ewah bitmaps to mark index entries as
deleted, instead of removing them from the index directly. This can
result in an on-disk index, in which entries of stage #0 and higher
stages appear, which are removed later when the index bases are merged.
15999d0 ("read_index_from(): catch out of order entries when reading an
index file") introduced a check that verifies the entries are in order
after each index entry is read in do_read_index. This check may however
fail when a split index is read.
Fix this by checking the index only after we know there is no split
index, or after the split index bases have been successfully merged.
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Output from "git log --decorate" mentions HEAD when it points at a
tip of an branch differently from a detached HEAD.
This is a potentially backward-incompatible change.
* mg/log-decorate-HEAD:
log: decorate HEAD with branch name
"git log --decorate" did not reset colors correctly around the
branch names.
* jc/decorate-leaky-separator-color:
log --decorate: do not leak "commit" color into the next item
Documentation/config.txt: simplify boolean description in the syntax section
Documentation/config.txt: describe 'color' value type in the "Values" section
Documentation/config.txt: have a separate "Values" section
Documentation/config.txt: describe the structure first and then meaning
Documentation/config.txt: explain multi-valued variables once
Documentation/config.txt: avoid unnecessary negation
Workarounds for certain builds of GPG that triggered false breakage
in a test.
* mg/verify-commit:
t7510: do not fail when gpg warns about insecure memory
"git imap-send" learned to optionally talk with an IMAP server via
libcURL; because there is no other option when Git is built with the
NO_OPENSSL option, that codepath is used by default under such a
configuration.
* km/imap-send-libcurl-options:
imap-send: use cURL automatically when NO_OPENSSL defined
We now detect the number of CPUs on older BSD-derived systems.
* km/bsd-sysctl:
thread-utils.c: detect CPU count on older BSD-like systems
configure: support HAVE_BSD_SYSCTL option
Portability fixes and workarounds for shell scripts have been added
to help BSD-derived systems.
* km/bsd-shells:
t5528: do not fail with FreeBSD shell
help.c: use SHELL_PATH instead of hard-coded "/bin/sh"
git-compat-util.h: move SHELL_PATH default into header
git-instaweb: use @SHELL_PATH@ instead of /bin/sh
git-instaweb: allow running in a working tree subdirectory
Code in "git daemon" to parse out and hold hostnames used in
request interpolation has been simplified.
* rs/daemon-hostname-in-strbuf:
daemon: deglobalize hostname information
daemon: use strbuf for hostname info
"git branch" on a detached HEAD always said "(detached from xyz)",
even when "git status" would report "detached at xyz". The HEAD is
actually at xyz and hasn't been moved since it was detached in
such a case, but the user cannot read what the current value of
HEAD is when "detached from" is used.
* mg/detached-head-report:
branch: name detached HEAD analogous to status
wt-status: refactor detached HEAD analysis
"git -C '' subcmd" refused to work in the current directory, unlike
"cd ''" which silently behaves as a no-op.
* kn/git-cd-to-empty:
git: treat "git -C '<path>'" as a no-op when <path> is empty
The versionsort.prerelease configuration variable can be used to
specify that v1.0-pre1 comes before v1.0.
* nd/versioncmp-prereleases:
config.txt: update versioncmp.prereleaseSuffix
versionsort: support reorder prerelease suffixes
When we delete a ref, we have to rewrite the entire
packed-refs file. We take this opportunity to "curate" the
packed-refs file and drop any entries that are crufty or
broken.
Dropping broken entries (e.g., with bogus names, or ones
that point to missing objects) is actively a bad idea, as it
means that we lose any notion that the data was there in the
first place. Aside from the general hackiness that we might
lose any information about ref "foo" while deleting an
unrelated ref "bar", this may seriously hamper any attempts
by the user at recovering from the corruption in "foo".
They will lose the sha1 and name of "foo"; the exact pointer
may still be useful even if they recover missing objects
from a different copy of the repository. But worse, once the
ref is gone, there is no trace of the corruption. A
follow-up "git prune" may delete objects, even though it
would otherwise bail when seeing corruption.
We could just drop the "broken" bits from
curate_packed_refs, and continue to drop the "crufty" bits:
refs whose loose counterpart exists in the filesystem. This
is not wrong to do, and it does have the advantage that we
may write out a slightly smaller packed-refs file. But it
has two disadvantages:
1. It is a potential source of races or mistakes with
respect to these refs that are otherwise unrelated to
the operation. To my knowledge, there aren't any active
problems in this area, but it seems like an unnecessary
risk.
2. We have to spend time looking up the matching loose
refs for every item in the packed-refs file. If you
have a large number of packed refs that do not change,
that outweighs the benefit from writing out a smaller
packed-refs file (it doesn't get smaller, and you do a
bunch of directory traversal to find that out).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we are repacking with "-ad", we will drop any unreachable
objects. Likewise, using "-Ad --unpack-unreachable=<time>"
will drop any old, unreachable objects. In these cases, we
want to make sure the reachability we compute with "--all"
is complete. We can do this by passing GIT_REF_PARANOIA=1 in
the environment to pack-objects.
Note that "-Ad" is safe already, because it only loosens
unreachable objects. It is up to "git prune" to avoid
deleting them.
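Concretely, that means putting the variable into the pack-objects
child's environment; a minimal sketch, assuming the child_process and
argv_array API of this era (the real repack code passes many more
options):

  /* sketch: spawn pack-objects with ref paranoia enabled */
  static int run_pack_objects_paranoid(void)
  {
      struct child_process cmd = CHILD_PROCESS_INIT;

      cmd.git_cmd = 1;
      argv_array_push(&cmd.args, "pack-objects");
      /* include broken refs in --all so corruption aborts instead of being ignored */
      argv_array_push(&cmd.env_array, "GIT_REF_PARANOIA=1");
      return run_command(&cmd);
  }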
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Prune should know about broken objects at the tips of refs,
so that we can feed them to our traversal rather than
ignoring them. It's better for us to abort the operation on
the broken object than it is to start deleting objects with
an incomplete view of the reachability namespace.
Note that for missing objects, aborting is the best we can
do. For a badly-named ref, we technically could use its sha1
as a reachability tip. However, the iteration code just
feeds us a null sha1, so there would be a reasonable amount
of code involved to pass down our wishes. It's not really
worth trying to do better, because this is a case that
should happen extremely rarely, and the message we provide:
  fatal: unable to parse object: refs/heads/bogus:name
is probably enough to point the user in the right direction.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most operations that iterate over refs are happy to ignore
broken cruft. However, some operations should be performed
with knowledge of these broken refs, because it is better
for the operation to choke on a missing object than it is to
silently pretend that the ref did not exist (e.g., if we are
computing the set of reachable tips in order to prune
objects).
These processes could just call for_each_rawref, except that
ref iteration is often hidden behind other interfaces. For
instance, for a destructive "repack -ad", we would have to
inform "pack-objects" that we are destructive, and then it
would in turn have to tell the revision code that our
"--all" should include broken refs.
It's much simpler to just set a global for "dangerous"
operations that includes broken refs in all iterations.
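A sketch of how such a global switch might be consulted (hypothetical
helper; the actual refs.c wiring may differ):

  /* sketch: decide once whether this process should also see broken refs */
  static int ref_paranoia_enabled(void)
  {
      static int paranoia = -1;

      if (paranoia < 0)
          paranoia = git_env_bool("GIT_REF_PARANOIA", 0);
      return paranoia;
  }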
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we are doing a destructive operation like "git prune",
we want to be extra careful that the set of reachable tips
we compute is valid. If there is any corruption or oddity,
we are better off aborting the operation and letting the
user figure things out rather than plowing ahead and
possibly deleting some data that cannot be recovered.
The tests here include:
1. Pruning objects mentioned only by refs with invalid
names. This used to abort prior to d0f810f (refs.c:
allow listing and deleting badly named refs,
2014-09-03), but since then we silently ignore the tip.
Likewise, we test repacking that can drop objects
(either "-ad", which drops anything unreachable,
or "-Ad --unpack-unreachable=<time>", which tries to
optimize out a loose object write that would be
directly pruned).
2. Pruning objects when some refs point to missing
objects. We don't know whether any dangling objects
would have been reachable from the missing objects. We
are better to keep them around, as they are better than
nothing for helping the user recover history.
3. Packed refs that point to missing objects can sometimes
be dropped. By itself, this is more of an annoyance
(you do not have the object anyway; even if you can
recover it from elsewhere, all you are losing is a
placeholder for your state at the time of corruption).
But coupled with (2), if we drop the ref and then go
on to prune, we may lose unrecoverable objects.
Note that we use test_might_fail for some of the operations.
In some cases, it would be appropriate to abort the
operation, and in others, it might be acceptable to continue
but take the information into account. The tests don't
care either way, and check only for data loss.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The different index versions have different sha-1 checksums. Those
checksums are checked in t1700, which makes it fail when the test suite
is run with TEST_GIT_INDEX_VERSION=4. Fix it.
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All of these cases are moderate, since they would most probably not
lead to missed failing tests; either they would fail otherwise, or
only fail an rm in test_when_finished.
Signed-off-by: Michael J Gruber <git@drmicha.warpmail.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This test is special for several reasons:
It ends with a "true" statement, which should be a no-op.
It is not, because the &&-chain is broken right before it.
Also, looking at what the test intended to test according to
7f578c5 (git-svn: --follow-parent now works on sub-directories of larger
branches, 2007-01-24)
it is not clear how it would achieve that with the given steps.
Amend the test to include the second svn id to be tested for, and
change the tested refs to the ones which are to be expected, and which
make the test pass.
Signed-off-by: Michael J Gruber <git@drmicha.warpmail.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This use of "||" fools --chain-lint into thinking the
&&-chain is broken (and indeed, it is somewhat broken; a
failure of update-index in these tests would show the patch
file, even if we never got to the part of the test where we
fed the patch to git-apply).
The extra blocks were there to include more debugging
output, but it hardly seems worth it; the user should know
which command failed (because git-apply will produce error
messages) and can look in the trash directory themselves.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The ":" noop command always returns true, so it is fine to
include these lines in an &&-chain (and it appeases
--chain-lint).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This test uses single quotes inside the single-quoted test
snippet, which effectively makes the contents unquoted.
Since they don't need quoting anyway, this isn't a problem,
but let's switch them to double-quotes to make it more
obviously correct.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some of the symlink tests check an either-or case using the
"||". This is not wrong, but fools --chain-lint into
thinking the &&-chain is broken (in fact, there is no &&
chain here).
We can solve this by wrapping the "||" inside a {} block.
This is a bit more verbose, but this construct is rare, and
the {} block helps call attention to it.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The confirmation tests in t9001 all save the value of
sendemail.confirm, do something to it, then restore it at
the end, in a way that breaks the &&-chain (they are not
wrong, because they save the $? value, but it fools
--chain-lint).
Instead, they can all use test_when_finished, and we can
even make the code simpler by factoring out the shared
lines.
Note that we can _almost_ use test_config here, except that:
1. We do not restore the config with test_unconfig, but by
setting it back to some prior value.
2. We are not always setting a config variable. Sometimes
the change to be undone is unsetting it entirely.
We could teach test_config to handle these cases, but it's
not worth the complexity for a single call-site.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We can use test_must_fail and test_path_* to avoid some
hand-rolled if statements. This makes the code shorter, and
makes it more obvious when we are breaking the &&-chain.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These say roughly the same thing as the hand-rolled
messages. We do lose the "merge did not complete" debug
message, but merge and write-tree are perfectly capable of
writing useful error messages when they fail.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>