In the discover_refs function, we use a strbuf named
"buffer" for multiple purposes. First we build the info/refs
URL in it, and then detach that to a bare pointer. Then, we
use the same strbuf to store the result of fetching the
refs.
Let's instead keep a separate refs_url strbuf. This is less
confusing, as the "buffer" strbuf is now used for only one
thing.
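A minimal sketch of the resulting shape (hedged; the exact calls in
remote-curl.c differ, and the http_get_strbuf signature is abbreviated
here):

    struct strbuf refs_url = STRBUF_INIT;
    struct strbuf buffer = STRBUF_INIT;

    strbuf_addf(&refs_url, "%s/info/refs", url);
    /* "buffer" now only ever holds the fetched refs */
    http_get_strbuf(refs_url.buf, &buffer, NULL);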
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
If a caller asks the http_get_* functions to go to a
particular URL and we end up elsewhere due to a redirect,
the effective_url field can tell us where we went.
It would be nice to remember this redirect and short-cut
further requests for two reasons:
1. It's more efficient. Otherwise we spend an extra http
round-trip to the server for each subsequent request,
just to get redirected.
2. If we end up with an http 401 and are going to ask for
credentials, it is best to feed them to the redirect target.
If the redirect is an http->https upgrade, this means
our credentials may be provided on the http leg, just
to end up redirected to https. And if the redirect
crosses server boundaries, then curl will drop the
credentials entirely as it follows the redirect.
However, it is not enough to simply record the effective
URL we saw and use that for subsequent requests. We were
originally fed a "base" url like:
    http://example.com/foo.git
and we want to figure out what the new base is, even though
the URLs we see may be:
    original:  http://example.com/foo.git/info/refs
    effective: http://example.com/bar.git/info/refs
Subsequent requests will not be for "info/refs", but for
other paths relative to the base. We must ask the caller to
pass in the original base, and we must pass the redirected
base back to the caller (so that it can generate more URLs
from it). Furthermore, we need to feed the new base to the
credential code, so that requests to credential helpers (or
to the user) match the URL we will be requesting.
This patch teaches http_request_reauth to do this munging.
Since it is the caller who cares about making more URLs, it
seems at first glance that callers could simply check
effective_url themselves and handle it. However, since we
need to update the credential struct before the second
re-auth request, we have to do it inside http_request_reauth.
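The recovery itself amounts to stripping the shared request tail
(here, "info/refs") from the effective URL to get the new base. A
sketch, with illustrative names; the real helper and its error
handling may differ:

    static void update_url_from_redirect(struct strbuf *base,
                                         const char *asked,
                                         const struct strbuf *got)
    {
        size_t tail_len = strlen(asked) - base->len;

        if (!strcmp(asked, got->buf))
            return; /* no redirect happened */
        if (got->len < tail_len ||
            strcmp(asked + base->len, got->buf + got->len - tail_len))
            die("unable to infer base from redirected URL: %s", got->buf);
        strbuf_reset(base);
        strbuf_add(base, got->buf, got->len - tail_len);
    }

With the URLs above, both "asked" and "got" end in "/info/refs", so
the base becomes http://example.com/bar.git.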
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
When we ask curl to access a URL, it may follow one or more
redirects to reach the final location. We have no idea
this has happened, as curl takes care of the details and
simply returns the final content to us.
The final URL that we ended up with can be accessed via
CURLINFO_EFFECTIVE_URL. Let's make that optionally available
to callers of http_get_*, so that they can make further
decisions based on the redirection.
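Pulling it out of libcurl is a one-liner; a hedged sketch of the
copy-out ("slot" and "options" are illustrative names):

    char *url = NULL;

    if (curl_easy_getinfo(slot->curl, CURLINFO_EFFECTIVE_URL,
                          &url) == CURLE_OK &&
        url && options->effective_url)
        strbuf_addstr(options->effective_url, url);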
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
When we are handling a curl response code in http_request or
in the remote-curl RPC code, we use the handle_curl_result
helper to translate curl's response into an easy-to-use
code. When we see an HTTP 401, we do one of two things:
1. If we already had a filled-in credential, we mark it as
rejected, and then return HTTP_NOAUTH to indicate to
the caller that we failed.
2. If we didn't, then we ask for a new credential and tell
the caller HTTP_REAUTH to indicate that they may want
to try again.
Rejecting in the first case makes sense; it is the natural
result of the request we just made. However, prompting for
more credentials in the second step does not always make
sense. We do not know for sure that the caller is going to
make a second request, nor are we sure that it will be
to the same URL. Logically, the prompt belongs not to the
request we just finished, but to the request we are (maybe)
about to make.
In practice, it is very hard to trigger any bad behavior.
Currently, if we make a second request, it will always be to
the same URL (even in the face of redirects, because curl
handles the redirects internally). And we almost always
retry on HTTP_REAUTH these days. The one exception is if we
are streaming a large RPC request to the server (e.g., a
pushed packfile), in which case we cannot restart. It's
extremely unlikely to see a 401 response at this stage,
though, as we would typically have seen it when we sent a
probe request, before streaming the data.
This patch drops the automatic prompt out of case 2, and
instead requires the caller to do it. This is a few extra
lines of code, and the bug it fixes is unlikely to come up
in practice. But it is conceptually cleaner, and paves the
way for better handling of credentials across redirects.
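A retrying caller now looks roughly like this (a sketch; http_auth is
the credential being filled in):

    while (1) {
        ret = http_request(url, &result, &options);
        if (ret != HTTP_REAUTH)
            break;
        /* the prompt belongs to the request we are about to make */
        credential_fill(&http_auth);
    }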
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Over time, the http_get_strbuf function has grown several
optional parameters. We now have a bitfield with multiple
boolean options, as well as an optional strbuf for returning
the content-type of the response. And a future patch in this
series is going to add another strbuf option.
Treating these as separate arguments has a few downsides:
1. Most call sites need to add extra NULLs and 0s for the
options they aren't interested in.
2. The http_get_* functions are actually wrappers around
two layers of low-level implementation functions. We have
to pass these options through individually.
3. The http_get_strbuf wrapper learned these options, but
nobody bothered to do so for http_get_file, even though
it is backed by the same function that does understand
the options.
Let's consolidate the options into a single struct. For the
common case of the default options, we'll allow callers to
simply pass a NULL for the options struct.
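Roughly, and hedged (the field names are illustrative; the exact set
of options follows the call sites):

    struct http_get_options {
        unsigned no_cache:1,
                 keep_error:1;

        /* if non-NULL, filled in with data from the response */
        struct strbuf *content_type;
        struct strbuf *effective_url;
    };

With that, the common case simply passes a NULL:

    http_get_strbuf(url, &buf, NULL);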
The resulting code is often a few lines longer, but it ends
up being easier to read (and to change as we add new
options, since we do not need to update each call site).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
When we retrieve the content-type of an http response, curl
gives us a pointer to internal storage, which we then copy
into a strbuf. Let's factor out the get-and-copy routine,
which can be used for getting other curl info.
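Something along these lines (a sketch; the helper name is
illustrative):

    static void curlinfo_strbuf(CURL *c, CURLINFO info, struct strbuf *buf)
    {
        char *ptr = NULL;

        strbuf_reset(buf);
        if (curl_easy_getinfo(c, info, &ptr) == CURLE_OK && ptr)
            strbuf_addstr(buf, ptr);
    }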
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Besides being ugly, extra parentheses like these are idiomatic for
suppressing compiler warnings when we are assigning within a
conditional. We aren't doing that here, and they just confuse the
reader.
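For example (illustrative):

    /* before: the doubled parentheses hint at an assignment */
    if ((ret != CURLE_OK))
        return error("curl request failed");

    /* after: a plain test, with no surprise for the reader */
    if (ret != CURLE_OK)
        return error("curl request failed");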
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
* km/svn-1.8-serf-only:
Git.pm: revert _temp_cache use of temp_is_locked
git-svn: allow git-svn fetching to work using serf
Git.pm: add new temp_is_locked function
MediaWiki version 1.21 introduces a new API for queries with more
than 500 results. That change triggered an infinite loop while
cloning a MediaWiki instance containing such a page.
The latest API renamed and moved the "continuing" information in the
response, which is necessary to build the next query. The code failed
to retrieve that information but still detected that it was in a
"continuing query". As a result, it launched the same query over and
over again.
If "continuing" information is detected in the response (old or new
style), the next query is updated accordingly. If not, we quit,
assuming it is not a continuing query.
Reported-by: Benjamin Cathey
Signed-off-by: Benoit Person <benoit.person@gmail.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Some people still use rather old versions of bash, which cannot grok
some constructs, like 'printf -v varname', that the prompt and
completion code started to use recently.
* bc/completion-for-bash-3.0:
contrib/git-prompt.sh: handle missing 'printf -v' more gracefully
t9902-completion.sh: old Bash still does not support array+=('') notation
git-completion.bash: use correct Bash/Zsh array length syntax
Fixes a minor bug in "git rebase -i" (there could be others, as the
root cause is pretty generic) where the code feeds a random,
data-dependent string to 'echo' and expects it to come out literally.
* mm/no-shell-escape-in-die-message:
die_with_status: use "printf '%s\n'", not "echo"
Output from "git log --full-diff -- <pathspec>" looked strange,
because comparison was done with the previous ancestor that touched
the specified <pathspec>, causing the patches for paths outside the
pathspec to show more than the single commit has changed.
* tr/log-full-diff-keep-true-parents:
log: use true parents for diff when walking reflogs
log: use true parents for diff even when rewriting
The auto-tag-following code in "git fetch" tries to reuse the same
transport twice when the serving end does not cooperate and does
not give tags that point to commits that are asked for as part of
the primary transfer. Unfortunately, the Git-aware transport helper
interface is not designed to be used more than once, hence this
does not work over smart-http transfer.
* jc/transport-do-not-use-connect-twice-in-fetch:
builtin/fetch.c: Fix a sparse warning
fetch: work around "transport-take-over" hack
fetch: refactor code that fetches leftover tags
fetch: refactor code that prepares a transport
fetch: rename file-scope global "transport" to "gtransport"
t5802: add test for connect helper
Send a large request to read(2)/write(2) as smaller but still
reasonably large chunks, which improves the latency when the
operation needs to be killed, and incidentally works around broken
64-bit systems that cannot take a 2GB write or read in one go.
* sp/clip-read-write-to-8mb:
Revert "compat/clipped-write.c: large write(2) fails on Mac OS X/XNU"
xread, xwrite: limit size of IO to 8MB
322bb6e (2011 Aug 11) introduced a new subshell at the end of a test
case but omitted a '&&' to join the two; fix this.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If SSL verification is enabled in git send-email, we could attempt to call a
method on an undefined value if the verification failed, since $smtp would end
up being undef. Look up the error string in a way that will produce a helpful
error message and not cause further errors.
Signed-off-by: Brian M. Carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since a1549e10, git-rebase--am.sh uses the shell's "return" statement
to mean "return from the current file inclusion", which is POSIXly
correct,
but badly interpreted on FreeBSD, which returns from the current
function, hence skips the finish_rebase statement that follows the file
inclusion.
Make the use of "return" portable by using the file inclusion as the last
statement of a function.
Reported-by: Christoph Mallon <christoph.mallon@gmx.de>
Signed-off-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use "das Tag" to avoid confusion with the German word "Tag" (day).
Reported-by: Dirk Heinrichs <dirk.heinrichs@altum.de>
Signed-off-by: Ralf Thielow <ralf.thielow@gmail.com>
The recent "short-cut clone connectivity check" topic broke a shallow
repository when a fetch operation tries to auto-follow tags.
* nd/fetch-pack-shallow-fix:
fetch-pack: do not remove .git/shallow file when --depth is not specified
You need at least four dashes in a line to have it recognized as a
listing block delimiter by asciidoc.
Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
An if clause must not be empty; add a "colon" command.
Signed-off-by: Thorsten Glaser <t.glaser@tarent.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Sparse issues a "'prepare_transport' was not declared. Should it
be static?" warning. In order to suppress the warning, since this
symbol only requires file scope, we simply add the static modifier
to its declaration.
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The read_mailmap_buf function reads each line of the mailmap
using strchrnul, like:
    const char *end = strchrnul(buf, '\n');
    unsigned long linelen = end - buf + 1;
But that's off-by-one when we actually hit the NUL byte; our
line does not have a terminator, and so is only "end - buf"
bytes long. As a result, when we subtract the linelen from
the total len, we end up with (unsigned long)-1 bytes left
in the buffer, and we start reading random junk from memory.
We could fix it with:
    unsigned long linelen = end - buf + !!*end;
but let's take a step back for a moment. It's questionable
in the first place for a function that takes a buffer and
length to be using strchrnul. But it works because we only
have one caller (and are only likely to ever have this one),
which is handing us data from read_sha1_file. Which means
that it's always NUL-terminated.
Instead of tightening the assumptions to make the
buffer/length pair work for a caller that doesn't actually
exist, let's loosen the assumptions to what the real
caller has: a modifiable, NUL-terminated string.
This makes the code simpler and shorter (because we don't
have to correlate strchrnul with the length calculation),
correct (because the code with the off-by-one just goes
away), and more efficient (we can drop the extra allocation
we needed to create NUL-terminated strings for each line,
and just terminate in place).
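With a NUL-terminated, modifiable buffer, the loop can terminate each
line in place; a sketch (read_mailmap_line is the per-line worker):

    while (*buf) {
        char *end = strchrnul(buf, '\n');

        if (*end)
            *end++ = '\0'; /* turn the newline into a terminator */
        read_mailmap_line(map, buf, &repo_abbrev);
        buf = end;
    }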
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
According to C99, section 7.1.4:
Any function declared in a header may be additionally
implemented as a function-like macro defined in the
header.
Therefore calling our struct member function pointer "fgetc"
may run afoul of unwanted macro expansion when we call:
    char c = cf->fgetc(cf);
This turned out to be a problem on uclibc, which defines
fgetc as a macro and causes compilation failure.
The standard suggests fixing this in a few ways:
1. Using extra parentheses to inhibit the function-like
macro expansion. E.g., "(cf->fgetc)(cf)". This is
undesirable as it's ugly, and each call site needs to
remember to use it (and on systems without the macro,
forgetting will compile just fine).
2. Using #undef (because a conforming implementation must
also be providing fgetc as a function). This is
undesirable because presumably the implementation was
using the macro for a performance benefit, and we are
dropping that optimization.
Instead, we can simply use non-colliding names.
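That is, rename the members so that no header macro can interfere,
along these lines:

    struct config_source {
        int (*do_fgetc)(struct config_source *conf);
        int (*do_ungetc)(int c, struct config_source *conf);
        long (*do_ftell)(struct config_source *conf);
    };

    char c = cf->do_fgetc(cf);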
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
fetch_pack() can remove .git/shallow file when a shallow repository
becomes a full one again. This behavior is triggered incorrectly when
tags are also fetched because fetch_pack() will be called twice. At
the first fetch_pack() call:
- shallow_lock is set up
- alternate_shallow_file points to shallow_lock.filename, which is
"shallow.lock"
- commit_lock_file is called, which sets shallow_lock.filename to "".
alternate_shallow_file also becomes "" because it points to the
same memory.
At the second call, setup_alternate_shallow() is not called and
alternate_shallow_file remains "". It's mistaken as unshallow case and
.git/shallow is removed. The end result is a broken repository.
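The aliasing that bites us looks like this, condensed:

    alternate_shallow_file = shallow_lock.filename;
    commit_lock_file(&shallow_lock); /* sets filename to "" */
    /* alternate_shallow_file now points at "" too: "unshallow"! */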
Fix this by always initializing alternate_shallow_file when
fetch_pack() is called. As an extra measure, check if args->depth > 0
before committing or rolling back the shallow file.
Reported-by: Kacper Kornet <kornet@camk.edu.pl>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Old Bash (3.0), which is distributed with RHEL 4.X and other ancient
platforms that are still in wide use, does not have a printf that
supports -v. Neither does Zsh (which is already handled in the code).
As suggested by Junio, let's test whether printf supports the -v
option and store the result. Then later, we can use it to
determine whether 'printf -v' can be used, or whether printf
must be called in a subshell.
Signed-off-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Old Bash (3.0), which is distributed with RHEL 4.X and other ancient
platforms that are still in wide use, does not understand the
array+=() notation. Let's use an explicit assignment to the new array
element, which works everywhere, like:
    array[${#array[@]}+1]=''
The right-hand side '' is not strictly necessary, but in this case
I think it is more clear.
Signed-off-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The syntax for retrieving the number of elements in an array is:
    ${#name[@]}
Signed-off-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The deflate loop in bulk-checkin::stream_to_pack expects to get all bytes
from a file that it requests to read in a single function call. But it
used xread(), which does not give that guarantee. Replace it by
read_in_full().
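read_in_full() loops internally until it has the requested number of
bytes, hits EOF, or sees an error, so the caller can treat a short
count as truncation. A hedged sketch ("path" is illustrative):

    ssize_t read_result = read_in_full(fd, ibuf, size);

    if (read_result < 0)
        die_errno("read error from '%s'", path);
    if (read_result != size)
        die("short read from '%s'", path);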
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This reverts commit 6c642a8786.
The previous commit introduced a size limit on IO chunks on all
platforms. The compat clipped_write() is not needed anymore.
Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Checking out 2GB or more through an external filter (see test) fails
on Mac OS X 10.8.4 (12E55) for a 64-bit executable with:
    error: read from external filter cat failed
    error: cannot feed the input to external filter cat
    error: cat died of signal 13
    error: external filter cat failed 141
    error: external filter cat failed
The reason is that read() immediately returns with EINVAL when asked
to read more than 2GB. According to POSIX [1], if the value of
nbyte passed to read() is greater than SSIZE_MAX, the result is
implementation-defined. The write function has the same restriction
[2]. Since OS X still supports running 32-bit executables, the
32-bit limit (SSIZE_MAX = INT_MAX = 2GB - 1) seems to be also
imposed on 64-bit executables under certain conditions. For write,
the problem has been addressed earlier [6c642a].
Address the problem for read() and write() differently, by limiting
size of IO chunks unconditionally on all platforms in xread() and
xwrite(). Large chunks only cause problems, such as latencies when
killing the process, even if OS X were not buggy. Doing IO in
reasonably sized smaller chunks should have no negative impact on
performance.
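The clipping itself is tiny; a sketch of the xread() side (the 8MB
constant matches the topic name):

    #define MAX_IO_SIZE (8*1024*1024)

    ssize_t xread(int fd, void *buf, size_t len)
    {
        ssize_t nr;

        if (len > MAX_IO_SIZE)
            len = MAX_IO_SIZE;
        while (1) {
            nr = read(fd, buf, len);
            if (nr < 0 && (errno == EAGAIN || errno == EINTR))
                continue;
            return nr;
        }
    }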
The compat wrapper clipped_write() introduced earlier [6c642a] is
not needed anymore. It will be reverted in a separate commit. The
new test catches read and write problems.
Note that 'git add' exits with 0 even if it prints filtering errors
to stderr. The test, therefore, checks stderr. 'git add' should
probably be changed (sometime in another commit) to exit with
nonzero if filtering fails. The test could then be changed to use
test_must_fail.
Thanks to the following people for suggestions and testing:
Johannes Sixt <j6t@kdbg.org>
John Keeping <john@keeping.me.uk>
Jonathan Nieder <jrnieder@gmail.com>
Kyle J. McKay <mackyle@gmail.com>
Linus Torvalds <torvalds@linux-foundation.org>
Torsten Bögershausen <tboegi@web.de>
[1] http://pubs.opengroup.org/onlinepubs/009695399/functions/read.html
[2] http://pubs.opengroup.org/onlinepubs/009695399/functions/write.html
[6c642a] commit 6c642a8786
compat/clipped-write.c: large write(2) fails on Mac OS X/XNU
Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git fails due to a segmentation fault if a submodule path is empty.
Here is an example .gitmodules that will cause a segmentation fault:
[submodule "foo-module"]
path
url = http://host/repo.git
$ git status
Segmentation fault (core dumped)
This is because the parsing of "submodule.*.path" is not prepared to
see a value-less "true" and assumes that the value is always
non-NULL (parsing of "ignore" has the same problem).
Fix it by checking the NULL-ness of value and complaining with
config_error_nonbool().
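Condensed, the fix in the "path" handler looks like this (a sketch;
the "ignore" handler gets the same treatment):

    if (!strcmp(item.buf, "path")) {
        if (!value)
            return config_error_nonbool(var);
        free((void *)submodule->path);
        submodule->path = xstrdup(value);
    }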
Signed-off-by: Jharrod LaFon <jlafon@eyesopen.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As we had to revert two topics at the last minute, let's have
another (hopefully short) round of rc to make sure the final release
will be sound.
Signed-off-by: Junio C Hamano <gitster@pobox.com>