git-fetch-pack(1)
=================

NAME
----
git-fetch-pack - Receive missing objects from another repository

SYNOPSIS
--------
[verse]
'git fetch-pack' [--all] [--quiet|-q] [--keep|-k] [--thin] [--include-tag]
	[--upload-pack=<git-upload-pack>]
	[--depth=<n>] [--no-progress]
	[--stdin] [--check-self-contained-and-connected]
	[-v] <repository> [<refs>...]

DESCRIPTION
-----------
Usually you would want to use 'git fetch', which is a
higher-level wrapper of this command, instead.
Invokes 'git-upload-pack' on a possibly remote repository
and asks it to send objects missing from this repository, to
update the named heads. The list of commits available locally
is determined by scanning the local refs/ hierarchy and is sent
to 'git-upload-pack' running on the other end.

When the local side does not have a common ancestor commit, this
command degenerates into downloading everything needed to complete
the asked refs from the remote side.
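
For example, a direct invocation that fetches all refs the remote
side advertises might look like this (the repository URL below is a
placeholder):

----
$ git fetch-pack --all git://example.com/project.git
----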

OPTIONS
-------
--all::
Fetch all remote refs.
--stdin::
Take the list of refs from stdin, one per line. If there
are refs specified on the command line in addition to this
option, then the refs from stdin are processed after those
on the command line.
+
If '--stateless-rpc' is specified together with this option then
the list of refs must be in packet format (pkt-line). Each ref must
be in a separate packet, and the list must end with a flush packet.
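+
A minimal sketch of the script-friendly form (without
'--stateless-rpc'), using hypothetical ref names and a placeholder
URL:
+
----
$ printf '%s\n' refs/heads/master refs/tags/v1.0 |
	git fetch-pack --stdin git://example.com/project.git
----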
-q::
--quiet::
Pass '-q' flag to 'git unpack-objects'; this makes the
cloning process less verbose.
-k::
--keep::
Do not invoke 'git unpack-objects' on received data, but
create a single packfile out of it instead, and store it
in the object database. If provided twice then the pack is
locked against repacking.
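+
For example, a sketch that keeps the received pack as-is and, because
'-k' is given twice, locks it against repacking (placeholder URL):
+
----
$ git fetch-pack -k -k git://example.com/project.git refs/heads/master
----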
--thin::
Fetch a "thin" pack, which records objects in deltified form based
on objects not included in the pack to reduce network traffic.
--include-tag::
If the remote side supports it, annotated tag objects will
be downloaded on the same connection as the other objects if
the object the tag references is downloaded. The caller must
otherwise determine the tags this option made available.
--upload-pack=<git-upload-pack>::
Use this to specify the path to 'git-upload-pack' on the
remote side, if it is not found on your $PATH.
Some installations of sshd ignore the user's environment
setup scripts for login shells (e.g. .bash_profile), so
your privately installed git may not be found on the system
default $PATH. Another suggested workaround is to set
up your $PATH in ".bashrc", but this flag is for people
who do not want to pay the overhead for non-interactive
shells by having a lean .bashrc file (they set most of
the things up in .bash_profile).
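+
For example, with a hypothetical installation path and host:
+
----
$ git fetch-pack --upload-pack=/opt/git/bin/git-upload-pack \
	user@example.com:project.git refs/heads/master
----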
--exec=<git-upload-pack>::
Same as \--upload-pack=<git-upload-pack>.
--depth=<n>::
Limit fetching to ancestor-chains not longer than <n>.
'git-upload-pack' treats the special depth 2147483647 as
infinite even if there is an ancestor-chain that long.
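+
For example, a sketch of a shallow fetch limited to the most recent
commit on the requested ref (placeholder URL):
+
----
$ git fetch-pack --depth=1 git://example.com/project.git refs/heads/master
----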
--no-progress::
Do not show progress.
--check-self-contained-and-connected::
Output "connectivity-ok" if the received pack is
self-contained and connected.
-v::
Run verbosely.
<repository>::
The URL to the remote repository.
<refs>...::
The remote heads to update from. This is relative to
$GIT_DIR (e.g. "HEAD", "refs/heads/master"). When
unspecified, update from all heads the remote side has.
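+
For example, updating two specific heads (placeholder URL and branch
names):
+
----
$ git fetch-pack git://example.com/project.git \
	refs/heads/master refs/heads/next
----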

SEE ALSO
--------
linkgit:git-fetch[1]

GIT
---
Part of the linkgit:git[1] suite