mirror of https://github.com/git/git.git synced 2024-11-05 00:37:55 +01:00
Commit graph

317 commits

Author SHA1 Message Date
Junio C Hamano
79cc6432ec Merge branch 'rj/http-code-cleanup'
There was an unused file-scope static variable left in http.c when
building for versions of libcurl older than 7.19.4, which has been
fixed.

* rj/http-code-cleanup:
  http: fix an unused variable warning for 'curl_no_proxy'
2018-03-21 11:30:12 -07:00
Ramsay Jones
b8fd6008ec http: fix an unused variable warning for 'curl_no_proxy'
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-15 13:11:52 -07:00
Junio C Hamano
169c9c0169 Merge branch 'bw/c-plus-plus'
Avoid using identifiers that clash with C++ keywords.  Even though
it is not a goal to compile Git with C++ compilers, changes like
this help the use of code analysis tools that target C++ on our
codebase.

* bw/c-plus-plus: (37 commits)
  replace: rename 'new' variables
  trailer: rename 'template' variables
  tempfile: rename 'template' variables
  wrapper: rename 'template' variables
  environment: rename 'namespace' variables
  diff: rename 'template' variables
  environment: rename 'template' variables
  init-db: rename 'template' variables
  unpack-trees: rename 'new' variables
  trailer: rename 'new' variables
  submodule: rename 'new' variables
  split-index: rename 'new' variables
  remote: rename 'new' variables
  ref-filter: rename 'new' variables
  read-cache: rename 'new' variables
  line-log: rename 'new' variables
  imap-send: rename 'new' variables
  http: rename 'new' variables
  entry: rename 'new' variables
  diffcore-delta: rename 'new' variables
  ...
2018-03-06 14:54:07 -08:00
Brandon Williams
ee6e065351 http: rename 'new' variables
Rename C++ keyword in order to bring the codebase closer to being able
to be compiled with a C++ compiler.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-02-22 10:08:05 -08:00
Junio C Hamano
5327463725 Merge branch 'jt/http-redact-cookies'
The http tracing code, often used to debug connection issues,
learned to redact potentially sensitive information from its output
so that it can be shared more safely.

* jt/http-redact-cookies:
  http: support omitting data from traces
  http: support cookie redaction when tracing
2018-02-13 13:39:11 -08:00
Junio C Hamano
9238941618 Merge branch 'cc/sha1-file-name'
Code clean-up.

* cc/sha1-file-name:
  sha1_file: improve sha1_file_name() perfs
  sha1_file: remove static strbuf from sha1_file_name()
2018-02-13 13:39:10 -08:00
Jonathan Tan
8ba18e6fa4 http: support omitting data from traces
GIT_TRACE_CURL provides a way to debug what is being sent and received
over HTTP, with automatic redaction of sensitive information. But it
also logs data transmissions, which significantly increases the log file
size, sometimes unnecessarily. Add an option "GIT_TRACE_CURL_NO_DATA" to
allow the user to omit such data transmissions.
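
A minimal sketch of the gate, assuming Git's git_env_bool() helper
(the helper name below is illustrative; the real change lives in the
CURLINFO_DATA_IN/CURLINFO_DATA_OUT arms of the curl debug callback):

    static int trace_curl_data(void)
    {
            static int enabled = -1;

            if (enabled < 0)
                    enabled = !git_env_bool("GIT_TRACE_CURL_NO_DATA", 0);
            return enabled;
    }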

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-19 13:06:57 -08:00
Jonathan Tan
83411783c3 http: support cookie redaction when tracing
When using GIT_TRACE_CURL, Git already redacts the "Authorization:" and
"Proxy-Authorization:" HTTP headers. Extend this redaction to a
user-specified list of cookies, specified through the
"GIT_REDACT_COOKIES" environment variable.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-19 13:06:50 -08:00
Christian Couder
ea6577303f sha1_file: remove static strbuf from sha1_file_name()
Using a static buffer in sha1_file_name() is error prone
and the performance improvements it gives are not needed
in many of the callers.

So let's get rid of this static buffer and, if necessary
or helpful, let's use one in the caller.
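
Assuming the function now fills a caller-provided strbuf, the typical
caller-side pattern becomes roughly:

    struct strbuf path = STRBUF_INIT;

    sha1_file_name(&path, sha1);    /* e.g. ".git/objects/3b/18e5..." */
    if (access(path.buf, F_OK))
            warning("object file %s is missing", path.buf);
    strbuf_release(&path);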

Suggested-by: Jeff Hostetler <git@jeffhostetler.com>
Helped-by: Kevin Daudt <me@ikke.info>
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-17 12:21:32 -08:00
Junio C Hamano
0956eaa621 Merge branch 'rs/use-argv-array-in-child-process'
Code cleanup.

* rs/use-argv-array-in-child-process:
  send-pack: use internal argv_array of struct child_process
  http: use internal argv_array of struct child_process
2018-01-05 13:28:10 -08:00
Junio C Hamano
fc4a226bf6 Merge branch 'ws/curl-http-proxy-over-https'
Git has been taught to support an https:// URL used for http.proxy
when using recent versions of libcurl.

* ws/curl-http-proxy-over-https:
  http: support CURLPROXY_HTTPS
2017-12-28 14:08:50 -08:00
René Scharfe
c7191fa510 http: use internal argv_array of struct child_process
Avoid a strangely magic array size (it's slightly too big) and explicit
index numbers by building the command line for index-pack using the
embedded argv_array of the child_process.  Add the flag -o and its
argument with argv_array_pushl() to make it obvious that they belong
together.  The resulting code is shorter and easier to extend.
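
The pattern being adopted looks roughly like this (the output path is
illustrative, not the actual command line built by http.c):

    struct child_process ip = CHILD_PROCESS_INIT;
    const char *out = "pack-1234.idx";

    ip.git_cmd = 1;
    argv_array_push(&ip.args, "index-pack");
    argv_array_pushl(&ip.args, "-o", out, NULL);  /* flag and value together */
    if (run_command(&ip))
            die("index-pack failed");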

Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-12-22 13:33:48 -08:00
Wei Shuyu
82b6803aee http: support CURLPROXY_HTTPS
HTTP proxy over SSL has been supported by curl since 7.52.0.
This is very useful for networks with a protocol whitelist.
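
In libcurl terms the support boils down to selecting the HTTPS proxy
type when the configured proxy URL uses the https:// scheme; a sketch,
guarded for curl older than 7.52.0:

    #if LIBCURL_VERSION_NUM >= 0x073400    /* 7.52.0 */
        if (starts_with(curl_http_proxy, "https"))
                curl_easy_setopt(result, CURLOPT_PROXYTYPE, CURLPROXY_HTTPS);
    #endif
        curl_easy_setopt(result, CURLOPT_PROXY, curl_http_proxy);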

Signed-off-by: Wei Shuyu <wsy@dogben.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-12-19 10:20:14 -08:00
Junio C Hamano
4c6dad0059 Merge branch 'bw/protocol-v1'
A new mechanism to upgrade the wire protocol in place has been
proposed and demonstrated to work with older versions of Git
without harming them.

* bw/protocol-v1:
  Documentation: document Extra Parameters
  ssh: introduce a 'simple' ssh variant
  i5700: add interop test for protocol transition
  http: tell server that the client understands v1
  connect: tell server that the client understands v1
  connect: teach client to recognize v1 server response
  upload-pack, receive-pack: introduce protocol version 1
  daemon: recognize hidden request arguments
  protocol: introduce protocol extension mechanisms
  pkt-line: add packet_write function
  connect: in ref advertisement, shallows are last
2017-12-06 09:23:44 -08:00
Brandon Williams
19113a26b6 http: tell server that the client understands v1
Tell the server that protocol v1 can be used by sending the HTTP
header 'Git-Protocol' with the value 'version=1'.

Also teach the apache http server to pass through the 'Git-Protocol'
header as an environment variable 'GIT_PROTOCOL'.
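
On the client side this is just one extra request header; a sketch of
the curl calls involved:

    struct curl_slist *headers = NULL;

    headers = curl_slist_append(headers, "Git-Protocol: version=1");
    curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, headers);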

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-17 10:51:29 +09:00
Jeff King
d0e9983980 curl_trace(): eliminate switch fallthrough
Our trace handler is called by curl with a curl_infotype
variable to interpret its data field. For most types we
print the data and then break out of the switch. But for
CURLINFO_TEXT, we print data and then fall through to the
"default" case, which does the exact same thing (nothing!)
that breaking out of the switch would.

This is probably a leftover from an early iteration of the
patch where the code after the switch _did_ do something
interesting that was unique to the non-text case arms.
But in its current form, this fallthrough is merely
confusing (and causes gcc's -Wimplicit-fallthrough to
complain).

Let's make CURLINFO_TEXT like the other case arms, and push
the default arm to the end where it's more obviously a
catch-all.
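
The resulting shape of the switch, sketched:

    switch (type) {
    case CURLINFO_TEXT:
            trace_printf_key(&trace_curl, "== Info: %s", data);
            break;  /* no longer falls through */
    /* CURLINFO_HEADER_IN/OUT and DATA_IN/OUT arms unchanged */
    default:        /* the do-nothing catch-all, now at the end */
            break;
    }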

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-22 12:49:55 +09:00
Junio C Hamano
eabdcd4ab4 Merge branch 'jt/packmigrate'
Code movement to make it easier to hack later.

* jt/packmigrate: (23 commits)
  pack: move for_each_packed_object()
  pack: move has_pack_index()
  pack: move has_sha1_pack()
  pack: move find_pack_entry() and make it global
  pack: move find_sha1_pack()
  pack: move find_pack_entry_one(), is_pack_valid()
  pack: move check_pack_index_ptr(), nth_packed_object_offset()
  pack: move nth_packed_object_{sha1,oid}
  pack: move clear_delta_base_cache(), packed_object_info(), unpack_entry()
  pack: move unpack_object_header()
  pack: move get_size_from_delta()
  pack: move unpack_object_header_buffer()
  pack: move {,re}prepare_packed_git and approximate_object_count
  pack: move install_packed_git()
  pack: move add_packed_git()
  pack: move unuse_pack()
  pack: move use_pack()
  pack: move pack-closing functions
  pack: move release_pack_memory()
  pack: move open_pack_index(), parse_pack_index()
  ...
2017-08-26 22:55:09 -07:00
Junio C Hamano
6ea13d8845 Merge branch 'tc/curl-with-backports'
Updates to the HTTP layer we made recently unconditionally used
features of libcurl without checking for their existence, causing
compilation errors, which has been fixed.  Also migrate the code to
check feature macros, not version numbers, to cope better with
libcurl builds that vendors ship with backported features.

* tc/curl-with-backports:
  http: use a feature check to enable GSSAPI delegation control
  http: fix handling of missing CURLPROTO_*
2017-08-24 10:20:02 -07:00
Jonathan Tan
4f39cd821d pack: move pack name-related functions
Currently, sha1_file.c and cache.h contain many functions, both related
to and unrelated to packfiles. This makes both files very large and
causes an unclear separation of concerns.

Create a new file, packfile.c, to hold all packfile-related functions
currently in sha1_file.c. It has a corresponding header packfile.h.

In this commit, the pack name-related functions are moved. Subsequent
commits will move the other functions.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-08-23 15:12:06 -07:00
Tom G. Christensen
dd5df538b5 http: use a feature check to enable GSSAPI delegation control
Turn the version check into a feature check to ensure this functionality
is also enabled with vendor supported curl versions where the feature
may have been backported.

Signed-off-by: Tom G. Christensen <tgc@jupiterrise.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-08-11 15:12:41 -07:00
Tom G. Christensen
f18777ba6e http: fix handling of missing CURLPROTO_*
Commit aeae4db1 refactored the handling of the curl protocol
restriction support into a function but failed to add a version
check for older versions of curl that lack CURLPROTO_* support.

Add the missing check and at the same time convert it to a feature
check instead of a version based check.  This is done to ensure that
vendor supported curl versions that have had CURLPROTO_* support
backported are handled correctly.
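
The shape of such a conversion, as a sketch:

    #ifdef CURLPROTO_HTTPS  /* feature check; also true for backports */
    /* was: #if LIBCURL_VERSION_NUM >= 0x071304 (7.19.4) */
            curl_easy_setopt(result, CURLOPT_PROTOCOLS,
                             CURLPROTO_HTTP | CURLPROTO_HTTPS);
    #endif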

Signed-off-by: Tom G. Christensen <tgc@jupiterrise.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-08-11 15:12:07 -07:00
Junio C Hamano
17b1e1d76c Merge branch 'jc/http-sslkey-and-ssl-cert-are-paths'
The http.{sslkey,sslCert} configuration variables are to be
interpreted as a pathname that honors "~[username]/" prefix, but
weren't, which has been fixed.

* jc/http-sslkey-and-ssl-cert-are-paths:
  http.c: http.sslcert and http.sslkey are both pathnames
2017-08-11 13:26:59 -07:00
Junio C Hamano
8d1549643e http.c: http.sslcert and http.sslkey are both pathnames
Back when the modern http_options() codepath was created to parse
various http.* options at 29508e1e ("Isolate shared HTTP request
functionality", 2005-11-18), and then later was corrected for
interaction between the multiple configuration files in 7059cd99
("http_init(): Fix config file parsing", 2009-03-09), we parsed
configuration variables like http.sslkey and http.sslcert as plain
vanilla strings, because git_config_pathname(), which understands
the "~[username]/" prefix, did not exist.  Later, we converted some
of them (namely, http.sslCAPath and http.sslCAInfo) to use the
function, and added variables like http.cookieFile and
http.pinnedpubkey to use the function from the beginning.  Because
of that, these variables all understand the "~[username]/" prefix.

Make the remaining two variables, http.sslcert and http.sslkey, also
aware of the convention, as they are both clearly pathnames to
files.
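
In http_options() this amounts to a one-word change per variable,
roughly:

    if (!strcmp("http.sslcert", var))
            return git_config_pathname(&ssl_cert, var, value);  /* was git_config_string() */
    if (!strcmp("http.sslkey", var))
            return git_config_pathname(&ssl_key, var, value);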

Noticed-by: Victor Toni <victor.toni@gmail.com>
Helped-by: Charles Bailey <cbailey32@bloomberg.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-07-20 13:37:24 -07:00
Junio C Hamano
50f03c6676 Merge branch 'ab/free-and-null'
A common pattern to free a piece of memory and assign NULL to the
pointer that used to point at it has been replaced with a new
FREE_AND_NULL() macro.

* ab/free-and-null:
  *.[ch] refactoring: make use of the FREE_AND_NULL() macro
  coccinelle: make use of the "expression" FREE_AND_NULL() rule
  coccinelle: add a rule to make "expression" code use FREE_AND_NULL()
  coccinelle: make use of the "type" FREE_AND_NULL() rule
  coccinelle: add a rule to make "type" code use FREE_AND_NULL()
  git-compat-util: add a FREE_AND_NULL() wrapper around free(ptr); ptr = NULL
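
The wrapper named in the last entry above is essentially:

    #define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)

so that the repeated pattern

    free(ptr);
    ptr = NULL;

becomes a single FREE_AND_NULL(ptr).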
2017-06-24 14:28:41 -07:00
Ævar Arnfjörð Bjarmason
6a83d90207 coccinelle: make use of the "type" FREE_AND_NULL() rule
Apply the result of the just-added coccinelle rule. This manually
excludes a few occurrences, mostly things that resulted in many
FREE_AND_NULL() on one line, that'll be manually fixed in a subsequent
change.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-16 12:44:03 -07:00
Brandon Williams
b2141fc1d2 config: don't include config.h by default
Stop including config.h by default in cache.h.  Instead only include
config.h in those files which require use of the config system.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-15 12:56:22 -07:00
Junio C Hamano
4c01f67d91 Merge branch 'dt/http-postbuffer-can-be-large'
Allow the http.postbuffer configuration variable to be set to a
size that can be expressed in size_t, which can be larger than
ulong on some platforms.

* dt/http-postbuffer-can-be-large:
  http.postbuffer: allow full range of ssize_t values
2017-04-23 22:07:45 -07:00
Junio C Hamano
6b51cb6181 Merge branch 'sr/http-proxy-configuration-fix'
"http.proxy" set to an empty string is used to disable the usage of
proxy.  We broke this early last year.

* sr/http-proxy-configuration-fix:
  http: fix the silent ignoring of proxy misconfiguration
  http: honor empty http.proxy option to bypass proxy
2017-04-23 22:07:44 -07:00
David Turner
37ee680d9b http.postbuffer: allow full range of ssize_t values
Unfortunately, in order to push some large repos where a server does
not support chunked encoding, the http postbuffer must sometimes
exceed two gigabytes.  On a 64-bit system, this is OK: we just malloc
a larger buffer.

This means that we need to use CURLOPT_POSTFIELDSIZE_LARGE to set the
buffer size.
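
A sketch of the curl side (buffer and length names are illustrative);
the _LARGE variant takes a curl_off_t rather than a long, so sizes
above 2GB survive the trip into curl:

    curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDS, buf);
    curl_easy_setopt(slot->curl, CURLOPT_POSTFIELDSIZE_LARGE,
                     (curl_off_t)len);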

Signed-off-by: David Turner <dturner@twosigma.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-13 18:24:32 -07:00
Sergey Ryazanov
ae51d91105 http: fix the silent ignoring of proxy misconfiguration
Earlier, the whole http.proxy option string was passed to curl without
any preprocessing so curl could complain about the invalid proxy
configuration.

After commit 372370f167 ("http: use credential API to handle proxy
authentication", 2016-01-26), if the user specified an invalid HTTP
proxy option in the configuration, the option parsing silently failed
and NULL was passed to curl as the proxy. This forced curl to fall
back to detecting the proxy configuration from the environment,
causing the http.proxy option to be ignored.

Fix this issue by checking the result of parsing the proxy option. If
parsing failed, print an error message and die. Such behaviour allows
the user to quickly figure out the misconfiguration and correct it.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-13 15:51:33 -07:00
Sergey Ryazanov
57415089bd http: honor empty http.proxy option to bypass proxy
Curl distinguishes between an empty proxy address and a NULL proxy
address. In the first case it completely disables proxy usage, but if
the proxy address option is NULL then curl attempts to determine the
proxy address from the http_proxy environment variable.

According to the documentation, if the http.proxy option is set to an
empty string, git should bypass proxy and connect to the server
directly:

    export http_proxy=http://network-proxy/
    cd ~/foobar-project
    git config remote.origin.proxy ""
    git fetch

Previously, the proxy host was configured by a single line:

    curl_easy_setopt(result, CURLOPT_PROXY, curl_http_proxy);

Commit 372370f167 ("http: use credential API to handle proxy
authentication", 2016-01-26) parses the proxy option, then extracts the
proxy host address and updates the curl configuration, making the
previous call a noop:

    credential_from_url(&proxy_auth, curl_http_proxy);
    curl_easy_setopt(result, CURLOPT_PROXY, proxy_auth.host);

But if the proxy option is empty then the proxy host field becomes NULL.
This forces curl to fall back to detecting the proxy configuration from
the environment, causing the http.proxy option to not work anymore.

Fix this issue by explicitly handling http.proxy being set the empty
string. This also makes the code a bit more clear and should help us
avoid such regressions in the future.
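
A sketch of the explicit handling, using the names quoted above:

    if (!curl_http_proxy) {
            /* not configured: leave curl to consult its environment */
    } else if (!*curl_http_proxy) {
            /* explicitly empty: disable the proxy outright */
            curl_easy_setopt(result, CURLOPT_PROXY, "");
    } else {
            credential_from_url(&proxy_auth, curl_http_proxy);
            curl_easy_setopt(result, CURLOPT_PROXY, proxy_auth.host);
    }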

Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-13 15:51:19 -07:00
Jeff King
1a168e5c86 convert unchecked snprintf into xsnprintf
These calls to snprintf should always succeed, because their
input is small and fixed. Let's use xsnprintf to make sure
this is the case (and to make auditing for actual truncation
easier).

These could be candidates for turning into heap buffers, but
they fall into a few broad categories that make it not worth
doing:

  - formatting single numbers is simple enough that we can
    see the result should fit

  - the size of a sha1 is likewise well-known, and I didn't
    want to cause unnecessary conflicts with the ongoing
    process to convert these constants to GIT_MAX_HEXSZ

  - the interface for curl_errorstr is dictated by curl
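
For reference, xsnprintf() has the same interface as snprintf() but
dies if the result would have been truncated, e.g.:

    char suffix[16];

    xsnprintf(suffix, sizeof(suffix), ".%d.tmp", (int)getpid());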

Signed-off-by: Jeff King <peff@peff.net>
2017-03-30 14:59:50 -07:00
Junio C Hamano
68e12d7d97 Merge branch 'jt/http-base-url-update-upon-redirect' into maint
When a redirected http transport gets an error during the
redirected request, we used to ignore the error from the server
and end up giving a not-so-useful error message.

* jt/http-base-url-update-upon-redirect:
  http: attempt updating base URL only if no error
2017-03-16 13:56:42 -07:00
Junio C Hamano
d880bfd947 Merge branch 'jk/http-auth' into maint
Reduce authentication round-trips over HTTP when the server
supports just a single authentication method.

* jk/http-auth:
  http: add an "auto" mode for http.emptyauth
  http: restrict auth methods to what the server advertises
2017-03-16 13:56:41 -07:00
Junio C Hamano
d0f549f403 Merge branch 'jt/http-base-url-update-upon-redirect'
When a redirected http transport gets an error during the
redirected request, we used to ignore the error from the server
and end up giving a not-so-useful error message.

* jt/http-base-url-update-upon-redirect:
  http: attempt updating base URL only if no error
2017-03-10 13:24:23 -08:00
Junio C Hamano
92718f57c2 Merge branch 'jk/http-auth'
Reduce authentication round-trips over HTTP when the server
supports just a single authentication method.

* jk/http-auth:
  http: add an "auto" mode for http.emptyauth
  http: restrict auth methods to what the server advertises
2017-03-10 13:24:23 -08:00
Jonathan Tan
8e27391a5f http: attempt updating base URL only if no error
http.c supports HTTP redirects of the form

  http://foo/info/refs?service=git-upload-pack
  -> http://anything
  -> http://bar/info/refs?service=git-upload-pack

(that is to say, as long as the Git part of the path and the query
string are preserved in the final redirect destination, the intermediate
steps can have any URL). However, if one of the intermediate steps
results in an HTTP exception, a confusing "unable to update url base
from redirection" message is printed instead of a Curl error message
with the HTTP exception code.

This was introduced by 2 commits. Commit c93c92f ("http: update base
URLs when we see redirects", 2013-09-28) introduced a best-effort
optimization that required checking if only the "base" part of the URL
differed between the initial request and the final redirect destination,
but it performed the check before any HTTP status checking was done. If
something went wrong, the normal code path was still followed, so this
did not cause any confusing error messages until commit 6628eb4 ("http:
always update the base URL for redirects", 2016-12-06), which taught
http to die if the non-"base" part of the URL differed.

Therefore, teach http to check the HTTP status before attempting to
check if only the "base" part of the URL differed. This commit teaches
http_request_reauth to return early without updating options->base_url
upon an error; the only invoker of this function that passes a non-NULL
"options" is remote-curl.c (through "http_get_strbuf"), which only uses
options->base_url for an informational message in the situations that
this commit cares about (that is, when the return value is not HTTP_OK).

The included test checks that the redirect scheme at the beginning of
this commit message works, and that returning a 502 in the middle of the
redirect scheme produces the correct result. Note that this is different
from the test in commit 6628eb4 ("http: always update the base URL for
redirects", 2016-12-06) in that this commit tests that a Git-shaped URL
(http://.../info/refs?service=git-upload-pack) works, whereas commit
6628eb4 tests that a non-Git-shaped URL
(http://.../info/refs/foo?service=git-upload-pack) does not work (even
though Git is processing that URL) and is an error that is fatal, not
silently swallowed.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-28 11:35:53 -08:00
Jeff King
40a18fc77c http: add an "auto" mode for http.emptyauth
This variable needs to be specified to make some types of
non-basic authentication work, but ideally this would just
work out of the box for everyone.

However, simply setting it to "1" by default introduces an
extra round-trip for cases where it _isn't_ useful. We end
up sending a bogus empty credential that the server rejects.

Instead, let's introduce an automatic mode, that works like
this:

  1. We won't try to send the bogus credential on the first
     request. We'll wait to get an HTTP 401, as usual.

  2. After seeing an HTTP 401, the empty-auth hack will kick
     in only when we know there is an auth method available
     that might make use of it (i.e., something besides
     "Basic" or "Digest").

That should make it work out of the box, without incurring
any extra round-trips for people hitting Basic-only servers.

This _does_ incur an extra round-trip if you really want to
use "Basic" but your server advertises other methods (the
emptyauth hack will kick in but fail, and then Git will
actually ask for a password).

The auto mode may incur an extra round-trip over setting
http.emptyauth=true, because part of the emptyauth hack is
to feed this blank password to curl even before we've made a
single request.
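
A sketch of rule (2) above, in curl terms (auth_avail is the bitmask
of methods the server offered; the exact plumbing differs):

    if (results->auth_avail & ~(CURLAUTH_BASIC | CURLAUTH_DIGEST))
            http_auth.username = xstrdup("");  /* feed curl an empty credential */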

Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-27 10:35:24 -08:00
Jeff King
840398feb8 http: restrict auth methods to what the server advertises
By default, we tell curl to use CURLAUTH_ANY, which does not
limit its set of auth methods. However, this results in an
extra round-trip to the server when authentication is
required. After we've fed the credential to curl, it wants
to probe the server to find its list of available methods
before sending an Authorization header.

We can shortcut this by limiting our http_auth_methods by
what the server told us it supports. In some cases (such as
when the server only supports Basic), that lets curl skip
the extra probe request.
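
Concretely, that means masking the bitmask we hand to curl; a sketch:

    if (results->auth_avail)
            http_auth_methods &= results->auth_avail;  /* e.g. just CURLAUTH_BASIC */
    curl_easy_setopt(slot->curl, CURLOPT_HTTPAUTH, http_auth_methods);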

The end result should look the same to the user, but you can
use GIT_TRACE_CURL to verify the sequence of requests:

  GIT_TRACE_CURL=1 \
  git ls-remote https://example.com/repo.git \
  2>&1 >/dev/null |
  egrep '(Send|Recv) header: (GET|HTTP|Auth)'

Before this patch, hitting a Basic-only server like
github.com results in:

  Send header: GET /repo.git/info/refs?service=git-upload-pack HTTP/1.1
  Recv header: HTTP/1.1 401 Authorization Required
  Send header: GET /repo.git/info/refs?service=git-upload-pack HTTP/1.1
  Recv header: HTTP/1.1 401 Authorization Required
  Send header: GET /repo.git/info/refs?service=git-upload-pack HTTP/1.1
  Send header: Authorization: Basic <redacted>
  Recv header: HTTP/1.1 200 OK

And after:

  Send header: GET /repo.git/info/refs?service=git-upload-pack HTTP/1.1
  Recv header: HTTP/1.1 401 Authorization Required
  Send header: GET /repo.git/info/refs?service=git-upload-pack HTTP/1.1
  Send header: Authorization: Basic <redacted>
  Recv header: HTTP/1.1 200 OK

The possible downsides are:

  - This only helps for a Basic-only server; for a server
    with multiple auth options, curl may still send a probe
    request to see which ones are available (IOW, there's no
    way to say "don't probe, I already know what the server
    will say").

  - The http_auth_methods variable is global, so this will
    apply to all further requests. That's acceptable for
    Git's usage of curl, though, which also treats the
    credentials as global. I.e., in any given program
    invocation we hit only one conceptual server (we may be
    redirected at the outset, but in that case that's whose
    auth_avail field we'd see).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-23 11:11:56 -08:00
Junio C Hamano
5ce6f51ff7 Merge branch 'jk/http-walker-limit-redirect' into maint
Update the error messages from the dumb-http client when it fails
to obtain loose objects; we used to give a sensible error message
only upon 404, but we now also forbid unexpected redirects, which
need to be reported with something sensible.

* jk/http-walker-limit-redirect:
  http-walker: complain about non-404 loose object errors
  http: treat http-alternates like redirects
  http: make redirects more obvious
  remote-curl: rename shadowed options variable
  http: always update the base URL for redirects
  http: simplify update_url_from_redirect
2017-01-17 14:49:29 -08:00
Junio C Hamano
9d540e9726 Merge branch 'bw/transport-protocol-policy'
Finer-grained control of what protocols are allowed for transports
during clone/fetch/push has been enabled via a new configuration
mechanism.

* bw/transport-protocol-policy:
  http: respect protocol.*.allow=user for http-alternates
  transport: add from_user parameter to is_transport_allowed
  http: create function to get curl allowed protocols
  transport: add protocol policy config option
  http: always warn if libcurl version is too old
  lib-proto-disable: variable name fix
2016-12-27 00:11:41 -08:00
Junio C Hamano
da72ee87fb Merge branch 'jk/http-walker-limit-redirect'
Update the error messages from the dumb-http client when it fails
to obtain loose objects; we used to give a sensible error message
only upon 404, but we now also forbid unexpected redirects, which
need to be reported with something sensible.

* jk/http-walker-limit-redirect:
  http-walker: complain about non-404 loose object errors
2016-12-19 14:45:32 -08:00
Junio C Hamano
8a2882f23e Merge branch 'jk/http-walker-limit-redirect-2.9'
Transport with dumb http can be fooled into following foreign URLs
that the end user does not intend to visit, especially with
server-side redirects and the http-alternates mechanism, which can
lead to security issues.  Tighten the redirection and make it more
obvious to the end user when it happens.

* jk/http-walker-limit-redirect-2.9:
  http: treat http-alternates like redirects
  http: make redirects more obvious
  remote-curl: rename shadowed options variable
  http: always update the base URL for redirects
  http: simplify update_url_from_redirect
2016-12-19 14:45:32 -08:00
Brandon Williams
a768a02265 transport: add from_user parameter to is_transport_allowed
Add a from_user parameter to is_transport_allowed() to allow http to
distinguish between protocol restrictions for redirects versus
initial requests.  CURLOPT_REDIR_PROTOCOLS can now be set differently
from CURLOPT_PROTOCOLS to disallow use of protocols with the "user"
policy in redirects.

This change allows callers to query if a transport protocol is allowed,
given that the caller knows that the protocol is coming from the user
(1) or not from the user (0), such as redirects in libcurl.  If
unknown, -1 should be provided, which falls back to reading
`GIT_PROTOCOL_FROM_USER` to determine if the protocol came from the
user.
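
The intended use on the http side, sketched:

    curl_easy_setopt(result, CURLOPT_PROTOCOLS,
                     get_curl_allowed_protocols(-1));  /* initial request */
    curl_easy_setopt(result, CURLOPT_REDIR_PROTOCOLS,
                     get_curl_allowed_protocols(0));   /* redirects: not from the user */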

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-15 09:29:13 -08:00
Brandon Williams
aeae4db174 http: create function to get curl allowed protocols
Move the creation of an allowed protocols whitelist to a helper
function. This will be useful when we need to compute the set of
allowed protocols differently for normal and redirect cases.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-15 09:29:13 -08:00
Brandon Williams
f962ddf6ed http: always warn if libcurl version is too old
Always warn if libcurl version is too old because:

1. Even without a protocol whitelist, newer versions of curl have all
   non-standard protocols disabled by default.
2. A future patch will introduce default "known-good" and "known-bad"
   protocols which are allowed/disallowed by 'is_transport_allowed',
   which older versions of libcurl can't respect.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-15 09:28:37 -08:00
Jeff King
3680f16f9d http-walker: complain about non-404 loose object errors
Since commit 17966c0a6 (http: avoid disconnecting on 404s
for loose objects, 2016-07-11), we turn off curl's
FAILONERROR option and instead manually deal with failing
HTTP codes.

However, the logic to do so only recognizes HTTP 404 as a
failure. This is probably the most common result, but if we
were to get another code, the curl result remains CURLE_OK,
and we treat it as success. We still end up detecting the
failure when we try to zlib-inflate the object (which will
fail), but instead of reporting the HTTP error, we just
claim that the object is corrupt.

Instead, let's catch anything in the 300's or above as an
error (300's are redirects which are not an error at the
HTTP level, but are an indication that we've explicitly
disabled redirects, so we should treat them as such; we
certainly don't have the resulting object content).

Note that we also fill in req->errorstr, which we didn't do
before. Without FAILONERROR, curl will not have filled this
in, and it will remain a blank string. This never mattered
for the 404 case, because in the logic below we hit the
"missing_target()" branch and print nothing. But for other
errors, we'd want to say _something_, if only to fill in the
blank slot in the error message.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-06 12:43:34 -08:00
Junio C Hamano
43ec089eea Merge branch 'ew/http-walker' into jk/http-walker-limit-redirect
* ew/http-walker:
  list: avoid incompatibility with *BSD sys/queue.h
  http-walker: reduce O(n) ops with doubly-linked list
  http: avoid disconnecting on 404s for loose objects
  http-walker: remove unused parameter from fetch_object
2016-12-06 12:43:23 -08:00
Jeff King
cb4d2d35c4 http: treat http-alternates like redirects
The previous commit made HTTP redirects more obvious and
tightened up the default behavior. However, there's another
way for a server to ask a git client to fetch arbitrary
content: by having an http-alternates file (or a regular
alternates file, which is used as a backup).

Similar to the HTTP redirect case, a malicious server can
claim to have refs pointing at object X, return a 404 when
the client asks for X, but point to some other URL via
http-alternates, which the client will transparently fetch.
The end result is that it looks from the user's perspective
like the objects came from the malicious server, as the
other URL is not mentioned at all.

Worse, because we feed the new URL to curl ourselves, the
usual protocol restrictions do not kick in (neither curl's
default of disallowing file://, nor the protocol
whitelisting in f4113cac0 (http: limit redirection to
protocol-whitelist, 2015-09-22)).

Let's apply the same rules here as we do for HTTP redirects.
Namely:

  - unless http.followRedirects is set to "always", we will
    not follow remote redirects from http-alternates (or
    alternates) at all

  - set CURLOPT_PROTOCOLS alongside CURLOPT_REDIR_PROTOCOLS to
    restrict ourselves to a known-safe set and respect any
    user-provided whitelist.

  - mention alternate object stores on stderr so that the
    user is aware another source of objects may be involved

The first item may prove to be too restrictive. The most
common use of alternates is to point to another path on the
same server. While it's possible for a single-server
redirect to be an attack, it takes a fairly obscure setup
(victim and evil repository on the same host, host speaks
dumb http, and evil repository has access to edit its own
http-alternates file).

So we could make the checks more specific, and only cover
cross-server redirects. But that means parsing the URLs
ourselves, rather than letting curl handle them. This patch
goes for the simpler approach. Given that they are only used
with dumb http, http-alternates are probably pretty rare.
And there's an escape hatch: the user can allow redirects on
a specific server by setting http.<url>.followRedirects to
"always".

Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-06 12:32:48 -08:00
Jeff King
50d3413740 http: make redirects more obvious
We instruct curl to always follow HTTP redirects. This is
convenient, but it creates opportunities for malicious
servers to create confusing situations. For instance,
imagine Alice is a git user with access to a private
repository on Bob's server. Mallory runs her own server and
wants to access objects from Bob's repository.

Mallory may try a few tricks that involve asking Alice to
clone from her, build on top, and then push the result:

  1. Mallory may simply redirect all fetch requests to Bob's
     server. Git will transparently follow those redirects
     and fetch Bob's history, which Alice may believe she
     got from Mallory. The subsequent push seems like it is
     just feeding Mallory back her own objects, but is
     actually leaking Bob's objects. There is nothing in
     git's output to indicate that Bob's repository was
     involved at all.

     The downside (for Mallory) of this attack is that Alice
     will have received Bob's entire repository, and is
     likely to notice that when building on top of it.

  2. If Mallory happens to know the sha1 of some object X in
     Bob's repository, she can instead build her own history
     that references that object. She then runs a dumb http
     server, and Alice's client will fetch each object
     individually. When it asks for X, Mallory redirects her
     to Bob's server. The end result is that Alice obtains
     objects from Bob, but they may be buried deep in
     history. Alice is less likely to notice.

Both of these attacks are fairly hard to pull off. There's a
social component in getting Mallory to convince Alice to
work with her. Alice may be prompted for credentials in
accessing Bob's repository (but not always, if she is using
a credential helper that caches). Attack (1) requires a
certain amount of obliviousness on Alice's part while making
a new commit. Attack (2) requires that Mallory knows a sha1
in Bob's repository, that Bob's server supports dumb http,
and that the object in question is loose on Bob's server.

But we can probably make things a bit more obvious without
any loss of functionality. This patch does two things to
that end.

First, when we encounter a whole-repo redirect during the
initial ref discovery, we now inform the user on stderr,
making attack (1) much more obvious.

Second, the decision to follow redirects is now
configurable. The truly paranoid can set the new
http.followRedirects to false to avoid any redirection
entirely. But for a more practical default, we will disallow
redirects only after the initial ref discovery. This is
enough to thwart attacks similar to (2), while still
allowing the common use of redirects at the repository
level. Since c93c92f30 (http: update base URLs when we see
redirects, 2013-09-28) we re-root all further requests from
the redirect destination, which should generally mean that
no further redirection is necessary.

As an escape hatch, in case there really is a server that
needs to redirect individual requests, the user can set
http.followRedirects to "true" (and this can be done on a
per-server basis via http.*.followRedirects config).
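
A sketch of the resulting policy check when a request is set up (the
initial_request flag is illustrative):

    if (http_follow_config == HTTP_FOLLOW_ALWAYS ||
        (http_follow_config == HTTP_FOLLOW_INITIAL && initial_request))
            curl_easy_setopt(slot->curl, CURLOPT_FOLLOWLOCATION, 1);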

Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-06 12:32:48 -08:00