There is no reason to "reserve" a gap between the public and private
flags values.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is only used internally now. Document it a little bit better, too.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Reviewed-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 0b6806b9 (xread, xwrite: limit size of IO to 8MB, 2013-08-20),
we chop our calls to read(2) and write(2) into chunks of
MAX_IO_SIZE bytes (8 MiB), because a large IO results in bad
latency when the program needs to be killed. This also brought our
IO below SSIZE_MAX, the limit beyond which POSIX allows read(2) and
write(2) to fail, on OS X, where the problem was originally
reported.
However, there are other systems that define SSIZE_MAX smaller than
our default, and feeding 8 MiB to the underlying read(2)/write(2)
would fail there. Make sure we clip our calls to the lower limit as
well.
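A minimal sketch of such a clamp (the GIT_MAX_IO_DEFAULT name is made
up for illustration; the real macro names and Makefile wiring differ):

    #include <limits.h>

    #define GIT_MAX_IO_DEFAULT (8 * 1024 * 1024)    /* 8 MiB chunks */

    /* Never ask read(2)/write(2) for more than either limit allows. */
    #if defined(SSIZE_MAX) && (SSIZE_MAX < GIT_MAX_IO_DEFAULT)
    #define MAX_IO_SIZE SSIZE_MAX
    #else
    #define MAX_IO_SIZE GIT_MAX_IO_DEFAULT
    #endif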
Reported-by: Joachim Schmitz <jojo@schmitz-digital.de>
Helped-by: Torsten Bögershausen <tboegi@web.de>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Mark file-local symbols as "static", and drop functions that nobody
uses.
* jc/unused-symbols:
shallow.c: make check_shallow_file_for_update() static
remote.c: make clear_cas_option() static
urlmatch.c: make match_urls() static
revision.c: make save_parents() and free_saved_parents() static
line-log.c: make line_log_data_init() static
pack-bitmap.c: make pack_bitmap_filename() static
prompt.c: remove git_getpass() nobody uses
http.c: make finish_active_slot() and handle_curl_result() static
Extending the js/push-to-deploy topic, the behaviour of "git push"
when updating the working tree and the index with an update to the
branch that is checked out can be tweaked by the push-to-checkout
hook.
* jc/push-to-checkout:
receive-pack: support push-to-checkout hook
receive-pack: refactor updateInstead codepath
"git push" has been taught a "--atomic" option that makes push to
update more than one ref an "all-or-none" affair.
* sb/atomic-push:
Document receive.advertiseatomic
t5543-atomic-push.sh: add basic tests for atomic pushes
push.c: add an --atomic argument
send-pack.c: add --atomic command line argument
send-pack: rename ref_update_to_be_sent to check_to_send_update
receive-pack.c: negotiate atomic push support
receive-pack.c: add execute_commands_atomic function
receive-pack.c: move transaction handling in a central place
receive-pack.c: move iterating over all commands outside execute_commands
receive-pack.c: die instead of error in case of possible future bug
receive-pack.c: shorten the execute_commands loop over all commands
Restructure "reflog expire" to fit the reflogs better with the
recently updated ref API.
Looked reasonable (except that some shortlog entries stood out like
a sore thumb).
* mh/reflog-expire: (24 commits)
refs.c: let fprintf handle the formatting
refs.c: don't expose the internal struct ref_lock in the header file
lock_any_ref_for_update(): inline function
refs.c: remove unlock_ref/close_ref/commit_ref from the refs api
reflog_expire(): new function in the reference API
expire_reflog(): treat the policy callback data as opaque
Move newlog and last_kept_sha1 to "struct expire_reflog_cb"
expire_reflog(): move rewrite to flags argument
expire_reflog(): move verbose to flags argument
expire_reflog(): pass flags through to expire_reflog_ent()
struct expire_reflog_cb: a new callback data type
Rename expire_reflog_cb to expire_reflog_policy_cb
expire_reflog(): move updateref to flags argument
expire_reflog(): move dry_run to flags argument
expire_reflog(): add a "flags" argument
expire_reflog(): extract two policy-related functions
Extract function should_expire_reflog_ent()
expire_reflog(): use a lock_file for rewriting the reflog file
expire_reflog(): return early if the reference has no reflog
expire_reflog(): rename "ref" parameter to "refname"
...
"git log --invert-grep --grep=WIP" will show only commits that do
not have the string "WIP" in their messages.
* cj/log-invert-grep:
log: teach --invert-grep option
After attempting and failing a password-less authentication
(e.g. Kerberos), libcurl refuses to fall back to password-based
Basic authentication without a bit of help/encouragement.
* bc/http-fallback-to-password-after-krb-fails:
remote-curl: fall back to Basic auth if Negotiate fails
Setting diff.submodule to 'log' made "git format-patch" produce
broken patches.
* dk/format-patch-ignore-diff-submodule:
format-patch: ignore diff.submodule setting
t4255: test am submodule with diff.submodule
"git rerere" (invoked internally from many mergy operations) did
not correctly signal errors when told to update the working tree
files and failed to do so for whatever reason.
* jn/rerere-fail-on-auto-update-failure:
rerere: error out on autoupdate failure
"git blame HEAD -- missing" failed to correctly say "HEAD" when it
tried to say "No such path 'missing' in HEAD".
* jk/blame-commit-label:
blame.c: fix garbled error message
use xstrdup_or_null to replace ternary conditionals
builtin/commit.c: use xstrdup_or_null instead of envdup
builtin/apply.c: use xstrdup_or_null instead of null_strdup
git-compat-util: add xstrdup_or_null helper
The clone subcommand has long had support for excluding
subdirectories, but sync has not. This is a nuisance,
since as soon as you do a sync, any changed files that
were initially excluded start showing up.
Move the "exclude" command-line option into the parent
class; the actual behavior was already present there so
it simply had to be exposed.
Signed-off-by: Luke Diamand <luke@diamand.org>
Reviewed-by: Pete Wyckoff <pw@padd.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
run_setup_gently() is called before merge-file. This may result in
changing the current working directory, which wasn't taken into
account when opening a file for writing.
Fix this by prepending the passed prefix. The previous variable is
kept so that error messages keep referring to the file from the
user's working directory perspective.
Signed-off-by: Aleksander Boruch-Gruszecki <aleksander.boruchgruszecki@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Because Git tracks symbolic links as symbolic links, a path that
has a symbolic link in its leading part (e.g. path/to/dir/file,
where path/to/dir is a symbolic link to somewhere else, be it
inside or outside the working tree) can never appear in a patch
that validly applies, unless the same patch first removes the
symbolic link to allow a directory to be created there.
Detect and reject such a patch.
Things to note:
- Unfortunately, we cannot reuse the has_symlink_leading_path()
from dir.c, as that is only about the working tree, but "git
apply" can be told to apply the patch only to the index or to
both the index and to the working tree.
- We cannot directly use has_symlink_leading_path() even when we
are applying only to the working tree, as an early patch of a
valid input may remove a symbolic link path/to/dir and then a
later patch of the input may create a path path/to/dir/file, but
"git apply" first checks the input without touching either the
index or the working tree. The leading symbolic link check must
be done on the interim result we compute in-core (i.e. after the
first patch, there is no path/to/dir symbolic link and it is
perfectly valid to create path/to/dir/file).
Similarly, when an input creates a symbolic link path/to/dir and
then creates a file path/to/dir/file, we need to flag it as an
error without actually creating the path/to/dir symbolic link in
the filesystem.
Instead, for any patch in the input that leaves a path (i.e. a
non-deletion) in the result, we check all leading paths against the
resulting tree that the patch would create by inspecting all the
patches in the input and then the target of patch application
(either the index or the working tree).
This way, we catch any mischief or mistake that adds a symbolic
link path/to/dir and a file path/to/dir/file at the same time,
while allowing a valid patch that removes a symbolic link
path/to/dir and then adds a file path/to/dir/file.
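The idea can be illustrated with a small, self-contained sketch.
This is not the actual "git apply" code; result_creates_symlink()
stands in for consulting the in-core result of the earlier patches
in the input:

    #include <string.h>
    #include <sys/stat.h>

    /* Stand-in: the real check consults the patch's in-core result. */
    static int result_creates_symlink(const char *path)
    {
            (void)path;
            return 0;
    }

    /*
     * Reject 'name' if any leading component is a symbolic link,
     * checking the in-core result of earlier patches first and the
     * filesystem (index or working tree target) last.
     */
    static int leading_path_has_symlink(const char *name)
    {
            char buf[4096];
            const char *slash = name;

            while ((slash = strchr(slash + 1, '/'))) {
                    struct stat st;
                    size_t len = slash - name;

                    if (len >= sizeof(buf))
                            return 1;       /* overlong; play it safe */
                    memcpy(buf, name, len);
                    buf[len] = '\0';

                    if (result_creates_symlink(buf))
                            return 1;       /* an earlier patch makes it a symlink */
                    if (!lstat(buf, &st) && S_ISLNK(st.st_mode))
                            return 1;       /* existing symlink in the target */
            }
            return 0;
    }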
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We should reject a patch, whether it renames/copies dir/file to
elsewhere with or without modification, or updates dir/file in
place, if the "dir/" part is actually a symbolic link to elsewhere,
by making sure that the code to read the preimage does not read
from a path that is beyond a symbolic link.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We currently read the preimage to apply a patch from the index only
when the --cached option is given. Do so also when the command is
running under the --index option. With --index, the index entry and
the working tree file for a path that is involved in a patch must be
identical, so this should not affect the result, but by reading from
the index, we will get the protection to avoid reading an unintended
path beyond a symbolic link automatically.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
By default, a patch that affects outside the working area (either a
Git controlled working tree, or the current working directory when
"git apply" is used as a replacement of GNU patch) is rejected as a
mistake (or a mischief). Git itself does not create such a patch,
unless the user bends over backwards and specifies a non-standard
prefix to "git diff" and friends.
When `git apply` is used as a "better GNU patch", the user can pass
the `--unsafe-paths` option to override this safety check. This
option has no effect when `--index` or `--cached` is in use.
The new test was stolen from Jeff King with slight enhancements.
Note that a few new tests for touching outside the working area by
following a symbolic link are still expected to fail at this step,
but will be fixed in later steps.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When an import has finished, we run end_packfile() to
finalize the data and move the packfile into place. If this
process fails, we call die() and end up in our die_nicely()
handler. Which unfortunately includes running end_packfile
to save any progress we made. We enter the function again,
and start operating on the pack_data struct while it is in
an inconsistent state, leading to a segfault.
One way to trigger this is to simply start two identical
fast-imports at the same time. They will both create the
same packfiles, which will then try to create identically
named ".keep" files. One will win the race, and the other
will die(), and end up with the segfault.
Since 3c078b9, we already reset the pack_data pointer to
NULL at the end of end_packfile. That covers the case of us
calling die() right after end_packfile, before we have
reinitialized the pack_data pointer. This new problem is
quite similar, except that we are worried about calling
die() _during_ end_packfile, not right after. Ideally we
would simply set pack_data to NULL as soon as we enter the
function, and operate on a copy of the pointer.
Unfortunately, it is not so easy. pack_data is a global, and
end_packfile calls into other functions which operate on the
global directly. We would have to teach each of these to
take an argument, and there is no guarantee that we would
catch all of the spots.
Instead, we can simply use a static flag to avoid
recursively entering the function. This is a little less
elegant, but it's short and fool-proof.
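A minimal sketch of that guard (the body of the function is elided;
the flag is the whole point):

    static void end_packfile(void)
    {
            static int running;

            /* die() during the body re-enters us via die_nicely(); bail out. */
            if (running)
                    return;
            running = 1;

            /* ... finalize the pack data and move the packfile into place ... */

            running = 0;
    }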
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since ea02ffa3 (mailmap: simplify map_user() interface, 2013-01-05),
find_alignment() has been invoking commit_info_destroy() on an
uninitialized auto 'struct commit_info' (when METAINFO_SHOWN is not
set). commit_info_destroy() calls strbuf_release() for each
'commit_info' strbuf member, which randomly invokes free() on
whatever random stack value happens to reside in strbuf.buf, thus
leading to periodic crashes.
Reported-by: Dilyan Palauzov <dilyan.palauzov@aegee.org>
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The string in 'base' contains a path suffix to a specific object;
when its value is used, the suffix must either be filled (as in
stat_sha1_file, open_sha1_file, check_and_freshen_nonlocal) or
cleared (as in prepare_packed_git) to avoid junk at the end.
660c889e (sha1_file: add for_each iterators for loose and packed
objects, 2014-10-15) introduced loose_from_alt_odb(), but this did
neither and treated 'base' as a complete path to the "base" object
directory, instead of a pointer to the "base" of the full path
string.
The trailing path after 'base' is still initialized to NUL, hiding
the bug in some common cases. Additionally the descendent
for_each_file_in_obj_subdir() function swallows ENOENT, so an error
only shows if the alternate's path was last filled with a valid
object (where statting /path/to/existing/00/0bjectfile/00 fails).
Signed-off-by: Jonathon Mah <me@JonathonMah.com>
Helped-by: Kyle J. McKay <mackyle@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We feed a root "objdir" path to this iterator function,
which then copies the result into a strbuf, so that it can
repeatedly append the object sub-directories to it. Let's
make it easy for callers to just pass us a strbuf in the
first place.
We leave the original interface as a convenience for callers
who want to just pass a const string like the result of
get_object_directory().
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
MAC_OS_X_VERSION_MIN_REQUIRED may be defined by the builder to a
specific version in order to produce compatible binaries for a
particular system. Blindly defining it to MAC_OS_X_VERSION_10_6
is bad.
Additionally MAC_OS_X_VERSION_10_6 will not be defined on older
systems, and should AvailabilityMacros.h be included on such a
system, an error will result. However, using the explicit value
of 1060 (which is what MAC_OS_X_VERSION_10_6 is defined to) does
not solve the problem.
The changes that introduced stepping on MAC_OS_X_VERSION_MIN_REQUIRED
were made in b195aa00 (git-compat-util: suppress unavoidable
Apple-specific deprecation warnings) to avoid deprecation
warnings.
Instead of blindly setting MAC_OS_X_VERSION_MIN_REQUIRED to 1060,
change the definition of DEPRECATED_ATTRIBUTE to empty to avoid
the warnings. This preserves any MAC_OS_X_VERSION_MIN_REQUIRED
setting while avoiding the warnings as intended by b195aa00.
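A sketch of what that looks like (the exact placement inside
git-compat-util.h is assumed):

    #ifdef __APPLE__
    #include <AvailabilityMacros.h>
    /* Leave MAC_OS_X_VERSION_MIN_REQUIRED alone; just neutralize the
     * attribute that produces the deprecation warnings. */
    #undef DEPRECATED_ATTRIBUTE
    #define DEPRECATED_ATTRIBUTE
    #endif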
Signed-off-by: Kyle J. McKay <mackyle@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our config code simulates a stdio stream around a buffer,
but our fake ungetc() does not behave quite like the real
one. In particular, we only rewind the position by one
character, but do _not_ actually put the character from the
caller into position.
It turns out that this does not matter, because we only ever
push back the character we just read. In other words, such
an assignment would be a noop. But because the function is
called ungetc, and because it takes a character parameter,
it is a mistake waiting to happen.
Actually assigning the character into the buffer would be
ideal, but our pointer is actually a "const" copy of the
buffer. We do not know who the real owner of the buffer is
in this code, and would not want to munge their contents.
Instead, we can simply add an assertion that matches what
the current caller does, and will let us know if new callers
are added that violate the contract.
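A sketch of such an assertion in a buffer-backed ungetc (struct and
field names here are illustrative, not the ones used in config.c):

    #include <assert.h>
    #include <stdio.h>

    struct buf_stream {
            const char *buf;        /* const view of someone else's buffer */
            size_t len;
            size_t pos;
    };

    static int buf_ungetc(int c, struct buf_stream *bs)
    {
            if (!bs->pos)
                    return EOF;
            bs->pos--;
            /* We cannot write into a const buffer, so insist that callers
             * only push back the character they just read from it. */
            assert(c == (unsigned char)bs->buf[bs->pos]);
            return c;
    }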
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The decimal_width function originally appeared in blame.c as
"lineno_width", and was designed for calculating the
print-width of small-ish integer values (line numbers in
text files). In ec7ff5b, it was made into a reusable
function, and in dc801e7, we started using it to align
diffstats.
Binary files in a diffstat show byte counts rather than line
numbers, meaning they can be quite large (e.g., consider
adding or removing a 2GB file). decimal_width is not up to
the challenge for two reasons:
1. It takes the value as an "int", whereas large files may
easily surpass this. The value may be truncated, in
which case we will produce an incorrect value.
2. It counts "up" by repeatedly multiplying another
integer by 10 until it surpasses the value. This can
cause an infinite loop when the value is close to the
largest representable integer.
For example, consider using a 32-bit signed integer,
and a value of 2,140,000,000 (just shy of 2^31-1).
We will count up and eventually see that 1,000,000,000
is smaller than our value. The next step would be to
multiply by 10 and see that 10,000,000,000 is too
large, ending the loop. But we can't represent that
value, and we have signed overflow.
This is technically undefined behavior, but a common
behavior is to lose the high bits, in which case our
iterator will certainly be less than the number. So
we'll keep multiplying, overflow again, and so on.
This patch changes the argument to a uintmax_t (the same
type we use to store the diffstat information for binary
files), and counts "down" by repeatedly dividing our value
by 10.
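A sketch of the count-down version, matching the description above:

    #include <stdint.h>

    /* Number of decimal digits needed to print 'number'. */
    static int decimal_width(uintmax_t number)
    {
            int width;

            for (width = 1; number >= 10; width++)
                    number /= 10;   /* dividing down cannot overflow */
            return width;
    }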
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we are parsing a config value, if we see a carriage
return, we fgetc the next character to see if it is a
line feed (in which case we silently drop the CR). If it
isn't, we then ungetc the character, and take the literal
CR.
But we never check whether we in fact got a character at
all. If the config file ends in CR, we will get EOF here,
and try to ungetc EOF. This works OK for a real stdio
stream. The ungetc returns an error, and the next fgetc will
then return EOF again.
However, our custom buffer-based stream is not so fortunate.
It happily rewinds the position of the stream by one
character, ignoring the fact that we fed it EOF. The next
fgetc call returns the final CR again, over and over, and we
end up in an infinite loop.
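In plain stdio terms the fix looks roughly like this (a sketch; the
config code actually goes through its own buffered-stream wrappers):

    #include <stdio.h>

    /* Return the next character, folding CRLF to LF; never ungetc(EOF). */
    static int get_next_char(FILE *f)
    {
            int c = fgetc(f);

            if (c == '\r') {
                    int c2 = fgetc(f);

                    if (c2 == '\n')
                            c = '\n';          /* CRLF: drop the CR */
                    else if (c2 != EOF)
                            ungetc(c2, f);     /* push back only a real character */
                    /* else: EOF right after CR; keep the literal CR */
            }
            return c;
    }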
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
On Windows, the builtin executable names are suffixed with $X.
Signed-off-by: Sebastian Schuberth <sschuberth@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove the need to have duplicated "if there is a change then feed
null_sha1 and otherwise sha1 from the cache entry" for the "new"
side of the diff by introducing two temporary variables to point
at the object name of the old and the new side of the blobs.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The __builtin_ctzll function was added in gcc 3.4.0.
This extends the check for gcc so that use of __builtin_ctzll is only
enabled if gcc >= 3.4.0.
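For example, a guard along these lines (the HAVE_ macro name is an
assumption; git's actual check lives in its own headers):

    /* __builtin_ctzll appeared in gcc 3.4.0; only use it when available. */
    #if defined(__GNUC__) && \
        (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
    #define HAVE_BUILTIN_CTZLL 1
    #endif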
Signed-off-by: Tom G. Christensen <tgc@statsbiblioteket.dk>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
curl 7.11.0 through 7.12.2, when built from their official release
archives, present a 5-digit version number instead of the documented
6 digits, which breaks the version check in the Makefile.
Correct these broken version numbers on the fly when extracting them to
ensure the comparison works correctly.
[jc: shortened the new sed scripts a bit]
Signed-off-by: Tom G. Christensen <tgc@statsbiblioteket.dk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit dd61399 introduced support for http proxies that require
authentication, but it relies on the CURLOPT_PROXYAUTH option which
was added in curl 7.10.7.
This makes sure proxy authentication is only enabled if libcurl can
support it.
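A sketch of such a guard (0x070a07 is 7.10.7 encoded as
LIBCURL_VERSION_NUM; the 'result' handle and the surrounding setup
code are assumed):

    #include <curl/curl.h>

    #if LIBCURL_VERSION_NUM >= 0x070a07
            /* Let curl pick whatever auth method the proxy offers. */
            curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
    #endif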
Signed-off-by: Tom G. Christensen <tgc@statsbiblioteket.dk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we add a new submodule the path of the submodule is being
normalized. We fail to normalize multiple adjacent '/./', though.
Thus 'path/to/././submodule' will become 'path/to/./submodule' where
it should be 'path/to/submodule' instead.
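The failing case is easy to see with a tiny, self-contained
normalizer that collapses every "/./", including adjacent runs
(illustrative only, not the code git-submodule uses):

    #include <string.h>

    static void collapse_dot_components(char *path)
    {
            char *p = path;

            /* Re-scan from the same spot after each removal, so
             * "path/to/././submodule" collapses all the way down. */
            while ((p = strstr(p, "/./")))
                    memmove(p, p + 2, strlen(p + 2) + 1);
    }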
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Acked-by: Jens Lehmann <Jens.Lehmann@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We may want to say something about command line option names in the
new section as well, but for now, let's make sure everybody is clear
on how to structure and name their configuration variables.
The text for the rules is partly taken from the log message of
Jonathan's 6b3020a2 (add: introduce add.ignoreerrors synonym for
add.ignore-errors, 2010-12-01).
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It seems to be a common mistake to try using a single remote
(e.g. 'origin') to fetch from one place (i.e. upstream) while
pushing to another (i.e. your publishing point).
That will never work satisfactorily, and it is easy to understand
why if you think about what refs/remotes/origin/* would mean in such
a world. It fundamentally cannot reflect the reality. If it
follows the state of your upstream, it cannot match what you have
published, and vice versa.
It may be that misinformation is being spread by some people.
Let's counter it by adding a few words to our documentation.
- The description was referring to <oldurl> and <newurl>, but never
mentioned the <name> argument given on the command line. By
mentioning "remote <name>", stress the fact that it is configuring
a single remote.
- Add a reminder that explicitly states that this is about a single
remote, which the triangular workflow is not about.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some older versions of gpg (reportedly v1.2.6 from RHEL4) cannot
import the keyrings found in our test suite, and thus cannot even
make a signature. The previous change works around it, but we
cannot anticipate the breakage a future update to GPG might cause.
Do a test-sign before declaring the GPG prerequisite fulfilled
to future-proof our tests.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since 1e3eefb (tests: replace binary GPG keyrings with
ASCII-armored keys, 2014-12-12), we import our test GPG keys
from a single file. Each keypair in the import stream
contains both the secret and public keys. However, older
versions of gpg reportedly fail to import the public half of
the key. We can solve this by including duplicates of the
public keys separately. The duplicates are ignored by modern
gpg, and this makes older versions work.
Reported by Tom G. Christensen <tgc@statsbiblioteket.dk> on
gpg 1.2.6 (from RHEL4).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>