Move the function remove_branches() from builtin/remote.c to refs.c,
rename it to delete_refs(), and make it public.
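For illustration, a rough sketch of the public declaration this adds
to refs.h (the exact prototype and comment are assumptions based on
the description above):

    /*
     * Delete the references named in the string_list refnames. On
     * failure, emit a message for each reference that could not be
     * deleted and return a nonzero value.
     */
    int delete_refs(struct string_list *refnames);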
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
delete_ref() uses a different convention for its old_sha1 parameter
than, say, ref_transaction_delete(): NULL_SHA1 means not to check the
old value. Make this fact a little bit clearer in the code by handling
it in explicit, commented code rather than burying it in a conditional
expression.
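A minimal sketch of the explicit handling described above, inside
delete_ref() (surrounding code elided; is_null_sha1() is the existing
helper):

    /*
     * Treat NULL_SHA1 the same as NULL: it means "do not check the
     * old value of the reference before deleting it":
     */
    if (old_sha1 && is_null_sha1(old_sha1))
        old_sha1 = NULL;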
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Also
* Add a docstring
* Rename the second parameter to "old_sha1", to be consistent with the
convention used elsewhere in the refs module
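A hedged sketch of what such a docstring might look like in refs.h
(the wording and exact prototype are assumptions, not the literal
comment that was added):

    /*
     * Delete the specified reference. If old_sha1 is non-NULL and not
     * NULL_SHA1, then verify that the current value of the reference
     * is old_sha1 before deleting it. Return 0 on success.
     */
    int delete_ref(const char *refname, const unsigned char *old_sha1,
                   unsigned int flags);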
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Bring consistency to error reporting mechanism used in "refs" API.
* mh/verify-lock-error-report:
ref_transaction_commit(): do not capitalize error messages
verify_lock(): do not capitalize error messages
verify_lock(): report errors via a strbuf
verify_lock(): on errors, let the caller unlock the lock
verify_lock(): return 0/-1 rather than struct ref_lock *
Our convention is for error messages to start with a lower-case
letter.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our convention is for error messages to start with a lower-case
letter.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of writing error messages directly to stderr, write them to
a "strbuf *err". The caller, lock_ref_sha1_basic(), already uses this
error-reporting convention with all of its other callees and reports
its own errors this way to its callers.
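A hedged sketch of the convention being adopted (the signature and
messages are illustrative, based on the description above):

    static int verify_lock(struct ref_lock *lock,
                           const unsigned char *old_sha1, int mustexist,
                           struct strbuf *err)
    {
        if (read_ref_full(lock->ref_name,
                          mustexist ? RESOLVE_REF_READING : 0,
                          lock->old_sha1, NULL)) {
            strbuf_addf(err, "can't verify ref %s", lock->ref_name);
            return -1;
        }
        if (hashcmp(lock->old_sha1, old_sha1)) {
            strbuf_addf(err, "ref %s is at %s but expected %s",
                        lock->ref_name, sha1_to_hex(lock->old_sha1),
                        sha1_to_hex(old_sha1));
            return -1;
        }
        return 0;
    }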
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The caller already knows how to unlock the lock on error, so always
do that in the same place.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Its return value wasn't conveying any extra information, but it made
the reader wonder whether the ref_lock that it returned might be
different from the one that was passed to it. So change the function
to the traditional "return 0 on success or a negative value on error".
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All of the callers of the for_each_ref family of functions have now
been rewritten to work with object_ids, so this adapter is no longer
needed.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change typedef each_ref_fn to take a "const struct object_id *oid"
parameter instead of "const unsigned char *sha1".
To aid this transition, implement an adapter that can be used to wrap
old-style functions matching the old typedef (which is now called
"each_ref_sha1_fn"), and make such functions callable via the new
interface. This requires the old function and its cb_data to be
wrapped in a "struct each_ref_fn_sha1_adapter", and that object to be
used as the cb_data for an adapter function, each_ref_fn_adapter().
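A hedged sketch of the adapter described above, following the names
given in this message (treat it as illustrative rather than the exact
code):

    typedef int each_ref_sha1_fn(const char *refname,
                                 const unsigned char *sha1,
                                 int flags, void *cb_data);

    struct each_ref_fn_sha1_adapter {
        each_ref_sha1_fn *fn;
        void *cb_data;
    };

    static int each_ref_fn_adapter(const char *refname,
                                   const struct object_id *oid,
                                   int flags, void *cb_data)
    {
        struct each_ref_fn_sha1_adapter *data = cb_data;

        return data->fn(refname, oid->hash, flags, data->cb_data);
    }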
This is an enormous diff, but most of it consists of simple,
mechanical changes to the sites that call any of the "for_each_ref"
family of functions. Subsequent to this change, the call sites can be
rewritten one by one to use the new interface.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of dying immediately upon failing to obtain a lock, retry
after a short while with backoff.
* mh/lockfile-retry:
lock_packed_refs(): allow retries when acquiring the packed-refs lock
lockfile: allow file locking to be retried with a timeout
The ref API did not handle well cases where 'refs/heads/xyzzy/frotz'
is removed at the same time as 'refs/heads/xyzzy' is added (or vice
versa).
* mh/ref-directory-file:
reflog_expire(): integrate lock_ref_sha1_basic() errors into ours
ref_transaction_commit(): delete extra "the" from error message
ref_transaction_commit(): provide better error messages
rename_ref(): integrate lock_ref_sha1_basic() errors into ours
lock_ref_sha1_basic(): improve diagnostics for ref D/F conflicts
lock_ref_sha1_basic(): report errors via a "struct strbuf *err"
verify_refname_available(): report errors via a "struct strbuf *err"
verify_refname_available(): rename function
refs: check for D/F conflicts among refs created in a transaction
ref_transaction_commit(): use a string_list for detecting duplicates
is_refname_available(): use dirname in first loop
struct nonmatching_ref_data: store a refname instead of a ref_entry
report_refname_conflict(): inline function
entry_matches(): inline function
is_refname_available(): convert local variable "dirname" to strbuf
is_refname_available(): avoid shadowing "dir" variable
is_refname_available(): revamp the comments
t1404: new tests of ref D/F conflicts within transactions
The multi-ref transaction support we merged a few releases ago
unnecessarily kept many file descriptors open, risking failure due to
resource exhaustion. This is for the 2.4.x track.
* mh/write-refs-sooner-2.4:
ref_transaction_commit(): fix atomicity and avoid fd exhaustion
ref_transaction_commit(): remove the local flags variable
ref_transaction_commit(): inline call to write_ref_sha1()
rename_ref(): inline calls to write_ref_sha1() from this function
commit_ref_update(): new function, extracted from write_ref_sha1()
write_ref_to_lockfile(): new function, extracted from write_ref_sha1()
t7004: rename ULIMIT test prerequisite to ULIMIT_STACK_SIZE
update-ref: test handling large transactions properly
The refs API's ref_lock struct had its own "int fd", even though the
same file descriptor was already available in the lock_file struct it
contains. Clean up the code to lose this redundant field.
* sb/ref-lock-lose-lock-fd:
refs.c: remove lock_fd from struct ref_lock
Currently, there is only one attempt to acquire any lockfile, and if
the lock is held by another process, the locking attempt fails
immediately.
This is not such a limitation for loose reference files. First, they
don't take long to rewrite. Second, most reference updates have a
known "old" value, so if another process is updating a reference at
the same moment that we are trying to lock it, then probably the
expected "old" value will not longer be valid, and the update will
fail anyway.
But these arguments do not hold for packed-refs:
* The packed-refs file can be large and take significant time to
rewrite.
* Many references are stored in a single packed-refs file, so it could
be that the other process was changing a different reference than
the one that we are interested in.
Therefore, it is much more likely for there to be spurious lock
conflicts in connection with the packed-refs file, resulting in
unnecessary command failures.
So, if the first attempt to lock the packed-refs file fails, continue
retrying for a configurable length of time before giving up. The
default timeout is 1 second.
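A hedged sketch of how the retry might be wired up (the config key,
helper names, and lock_packed_refs() details here are assumptions;
only the general shape matters):

    static int lock_packed_refs(int flags)
    {
        static int timeout_configured = 0;
        static int timeout_value = 1000;  /* default: retry for 1 second */

        if (!timeout_configured) {
            git_config_get_int("core.packedrefstimeout", &timeout_value);
            timeout_configured = 1;
        }

        if (hold_lock_file_for_update_timeout(&packlock,
                                              git_path("packed-refs"),
                                              flags, timeout_value) < 0)
            return -1;

        /* ...read and validate the current packed-refs contents... */
        return 0;
    }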
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The old code was roughly

    for update in updates:
        acquire locks and check old_sha

    for update in updates:
        if changing value:
            write_ref_to_lockfile()
            commit_ref_update()

    for update in updates:
        if deleting value:
            unlink()

    rewrite packed-refs file

    for update in updates:
        if reference still locked:
            unlock_ref()
This has two problems.
Non-atomic updates
==================
The atomicity of the reference transaction depends on all pre-checks
being done in the first loop, before any changes have started being
committed in the second loop. The problem is that
write_ref_to_lockfile() (previously part of write_ref_sha1()), which
is called from the second loop, contains two more checks:
* It verifies that new_sha1 is a valid object
* If the reference being updated is a branch, it verifies that
new_sha1 points at a commit object (as opposed to a tag, tree, or
blob).
If either of these checks fails, the "transaction" is aborted during
the second loop. But this might happen after some reference updates
have already been permanently committed. In other words, the
all-or-nothing promise of "git update-ref --stdin" could be violated.
So these checks have to be moved to the first loop.
File descriptor exhaustion
==========================
The old code locked all of the references in the first loop, leaving
all of the lockfiles open until later loops. Since we might be
updating a lot of references, this could result in file descriptor
exhaustion.
The solution
============
After this patch, the code looks like

    for update in updates:
        acquire locks and check old_sha
        if changing value:
            write_ref_to_lockfile()
        else:
            close_ref()

    for update in updates:
        if changing value:
            commit_ref_update()

    for update in updates:
        if deleting value:
            unlink()

    rewrite packed-refs file

    for update in updates:
        if reference still locked:
            unlock_ref()
This fixes both problems:
1. The pre-checks in write_ref_to_lockfile() are now done in the first
loop, before any changes have been committed. If any of the checks
fails, the whole transaction can now be rolled back correctly.
2. All lockfiles are closed in the first loop immediately after they
are created (either by write_ref_to_lockfile() or by close_ref()).
This means that there is never more than one open lockfile at a
time, preventing file descriptor exhaustion.
To simplify the bookkeeping across loops, add a new REF_NEEDS_COMMIT
bit to update->flags, which keeps track of whether the corresponding
lockfile needs to be committed, as opposed to just unlocked. (Since
"struct ref_update" is internal to the refs module, this change is not
visible to external callers.)
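A hedged sketch of that bookkeeping (the flag value and surrounding
details are illustrative, not the exact implementation):

    #define REF_NEEDS_COMMIT 0x20

    /* First loop: lock, pre-check, and write, but do not commit yet. */
    for (i = 0; i < n; i++) {
        struct ref_update *update = updates[i];

        /* ...acquire lock and check old_sha1... */

        if (!is_null_sha1(update->new_sha1)) {
            if (write_ref_to_lockfile(update->lock, update->new_sha1))
                goto cleanup;  /* roll back the whole transaction */
            update->flags |= REF_NEEDS_COMMIT;
        } else {
            /* Nothing to write; just free the file descriptor. */
            close_ref(update->lock);
        }
    }

    /* Second loop: commit only the lockfiles marked above. */
    for (i = 0; i < n; i++) {
        struct ref_update *update = updates[i];

        if ((update->flags & REF_NEEDS_COMMIT) &&
            commit_ref_update(update->lock, update->new_sha1, update->msg))
            goto cleanup;
    }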
This change fixes two tests in t1400.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead, work directly with update->flags. This has the advantage that
the REF_DELETING bit, set in the first loop, can be read in the second
loop instead of having to be recomputed. Plus, it was potentially
confusing having both update->flags and flags, which sometimes had
different values.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
That was the last caller, so delete function write_ref_sha1().
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most of what it does is not needed at these call sites.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is the first step towards separating the checking and writing of
the new reference value from the committing of the change.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
An earlier rewrite to use strbuf_getwholeline() instead of fgets(3)
to read the packed-refs file revealed that the former is unacceptably
inefficient.
* jk/reading-packed-refs:
t1430: add another refs-escape test
read_packed_refs: avoid double-checking sane refs
strbuf_getwholeline: use getdelim if it is available
strbuf_getwholeline: avoid calling strbuf_grow
strbuf_addch: avoid calling strbuf_grow
config: use getc_unlocked when reading from file
strbuf_getwholeline: use getc_unlocked
git-compat-util: add fallbacks for unlocked stdio
strbuf_getwholeline: use getc macro
A replacement for contrib/workdir/git-new-workdir that does not
rely on symbolic links and makes sharing of objects and refs safer
by making the borrowee and borrowers aware of each other.
* nd/multiple-work-trees: (41 commits)
prune --worktrees: fix expire vs worktree existence condition
t1501: fix test with split index
t2026: fix broken &&-chain
t2026 needs procondition SANITY
git-checkout.txt: a note about multiple checkout support for submodules
checkout: add --ignore-other-wortrees
checkout: pass whole struct to parse_branchname_arg instead of individual flags
git-common-dir: make "modules/" per-working-directory directory
checkout: do not fail if target is an empty directory
t2025: add a test to make sure grafts is working from a linked checkout
checkout: don't require a work tree when checking out into a new one
git_path(): keep "info/sparse-checkout" per work-tree
count-objects: report unused files in $GIT_DIR/worktrees/...
gc: support prune --worktrees
gc: factor out gc.pruneexpire parsing code
gc: style change -- no SP before closing parenthesis
checkout: clean up half-prepared directories in --to mode
checkout: reject if the branch is already checked out elsewhere
prune: strategies for linked checkouts
checkout: support checking out into a new working directory
...
Now that lock_ref_sha1_basic() gives us back its error messages via a
strbuf, incorporate its error message into our error message rather
than emitting two separate error messages.
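A hedged sketch of the resulting pattern in reflog_expire() (the
lock_ref_sha1_basic() argument list and message wording are
assumptions):

    struct strbuf err = STRBUF_INIT;

    lock = lock_ref_sha1_basic(refname, sha1, NULL, NULL, 0, &type, &err);
    if (!lock) {
        error("cannot lock ref '%s': %s", refname, err.buf);
        strbuf_release(&err);
        return -1;
    }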
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
While we are in the area, let's remove a superfluous definite article
from the error message that is emitted when the reference cannot be
locked. This improves how it reads and makes it a bit shorter.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Now that lock_ref_sha1_basic() gives us back its error messages via a
strbuf, incorporate its error message into our error message rather
than emitting one error message to stderr immediately and returning a
second to our caller.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Now that lock_ref_sha1_basic() gives us back its error messages via a
strbuf, incorporate its error message into our error message rather
than emitting two separate error messages.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
If there is a failure to lock a reference that is likely caused by a
D/F conflict (e.g., trying to lock "refs/foo/bar" when reference
"refs/foo" already exists), invoke verify_refname_available() to try
to generate a more helpful error message.
That function might not detect an error. For example, some
non-reference file might be blocking the deletion of an
otherwise-empty directory tree, or there might be a race with another
process that just deleted the offending reference. In such cases,
generate the strerror-based error message like before.
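A hedged sketch of the fallback logic described above, inside
lock_ref_sha1_basic() (the errno test and argument details are
illustrative, and real code would need to preserve errno across the
verify_refname_available() call):

    if (errno == ENOTDIR) {
        /*
         * Probably a D/F conflict, e.g. trying to lock
         * "refs/foo/bar" while "refs/foo" exists. Ask
         * verify_refname_available() for a better message.
         */
        if (verify_refname_available(refname, extras, skip,
                                     get_loose_refs(&ref_cache), err))
            goto error_return;  /* *err now explains the conflict */
    }
    /* Otherwise, fall back to a strerror-based message as before: */
    strbuf_addf(err, "unable to resolve reference %s: %s",
                refname, strerror(errno));
    goto error_return;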
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
For now, change the callers to spew the error to stderr like before.
But soon we will change them to incorporate the reason for the failure
into their own error messages.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
It shouldn't be spewing errors directly to stderr.
For now, change its callers to spew the errors to stderr.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Rename is_refname_available() to verify_refname_available() and change
its return value from 1 for success to 0 for success, to be consistent
with our error-handling convention. In a moment it will also get a
"struct strbuf *err" parameter.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
If two references that D/F conflict (e.g., "refs/foo" and
"refs/foo/bar") are created in a single transaction, the old code
discovered the problem only after the "commit" phase of
ref_transaction_commit() had already begun. This could leave some
references updated and others not, which violates the promise of
atomicity.
Instead, check for such conflicts during the "locking" phase:
* Teach is_refname_available() to take an "extras" parameter that can
contain extra reference names with which the specified refname must
not conflict.
* Change lock_ref_sha1_basic() to take an "extras" parameter, which it
passes through to is_refname_available().
* Change ref_transaction_commit() to pass "affected_refnames" to
lock_ref_sha1_basic() as its "extras" argument.
This change fixes a test case in t1404.
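A hedged sketch of the call made while locking one update inside
ref_transaction_commit() (argument details other than the "extras"
parameter are placeholders):

    update->lock = lock_ref_sha1_basic(update->refname, old_sha1,
                                       &affected_refnames, NULL,
                                       lflags, &type, err);
    if (!update->lock)
        goto cleanup;  /* *err already explains the failure */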
This code is a bit stricter than it needs to be. We could conceivably
allow reference "refs/foo/bar" to be created in the same transaction
as "refs/foo" is deleted (or vice versa). But that would be
complicated to implement, because it is not possible to lock
"refs/foo/bar" while "refs/foo" exists as a loose reference, but on
the other hand we don't want to delete some references before adding
others (because that could leave a gap during which required objects
are unreachable). There is also a complication that reflog files'
paths can conflict.
Any less-strict implementation would probably require tricks like the
packing of all references before the start of the real transaction, or
the use of temporary intermediate reference names.
So for now let's accept too-strict checks. Some reference update
transactions will be rejected unnecessarily, but they will be rejected
in their entirety rather than leaving the repository in an
intermediate state, as would happen now.
Please note that there is still one kind of D/F conflict that is *not*
handled correctly. If two processes are running at the same time, and
one tries to create "refs/foo" at the same time that the other tries
to create "refs/foo/bar", then they can race with each other. Both
processes can obtain their respective locks ("refs/foo.lock" and
"refs/foo/bar.lock"), proceed to the "commit" phase of
ref_transaction_commit(), and then the slower process will discover
that it cannot rename its lockfile into place (after possibly having
committed changes to other references). There appears to be no way to
fix this race without changing the locking policy, which in turn would
require a change to *all* Git clients.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Detect duplicates by storing the reference names in a string_list and
sorting that, instead of sorting the ref_updates directly.
* In a moment the string_list will be used for another purpose, too.
* This removes the need for the custom comparison function
ref_update_compare().
* This means that we can carry out the updates in the order that the
user specified them instead of reordering them. This might be handy
someday if we want to permit multiple updates to a single reference
as long as they are compatible with each other.
Note: we can't use string_list_remove_duplicates() to check for
duplicates, because we need to know the name of the reference that
appeared multiple times, to be used in the error message.
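A hedged sketch of the duplicate check (variable names and the error
message are illustrative):

    struct string_list refnames = STRING_LIST_INIT_NODUP;

    for (i = 0; i < n; i++)
        string_list_append(&refnames, updates[i]->refname);
    string_list_sort(&refnames);

    for (i = 1; i < refnames.nr; i++)
        if (!strcmp(refnames.items[i - 1].string,
                    refnames.items[i].string)) {
            strbuf_addf(err, "multiple updates for ref '%s' not allowed.",
                        refnames.items[i].string);
            return TRANSACTION_GENERIC_ERROR;
        }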
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
In the first loop (over prefixes of refname), use dirname to keep
track of the current prefix. This is not an improvement in itself, but
in a moment we will start using dirname for a role where a
NUL-terminated string is needed.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Now that we don't need a ref_entry to pass to
report_refname_conflict(), it is sufficient to store the refname of
the conflicting reference.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
It wasn't pulling its weight. And we are about to need code similar to
this where no ref_entry is available and with more diverse error
messages. Rather than try to generalize the function, just inline it.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
It wasn't pulling its weight. And in a moment we will need similar
tests that take a refname rather than a ref_entry as parameter, which
would make entry_matches() even less useful.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
This change wouldn't be worth it by itself, but in a moment we will
use the strbuf for more string juggling.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
The function had a "dir" parameter that was shadowed by a local "dir"
variable within a code block. Use the former in place of the latter.
(This is consistent with "dir"'s use elsewhere in the function.)
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Change the comments into a running example in which the function is
called with refname set to "refs/foo/bar". Add some more explanation
of the logic.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
The 'lock_fd' field is the same as 'lk->fd'. There is no need to
store it twice, so remove it.
No functional changes intended.
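For illustration, a rough sketch of the resulting struct (the exact
field list is an assumption):

    struct ref_lock {
        char *ref_name;
        char *orig_ref_name;
        struct lock_file *lk;
        unsigned char old_sha1[20];
        /* int lock_fd;  -- removed: it was always identical to lk->fd */
    };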
Signed-off-by: Stefan Beller <sbeller@google.com>
Reviewed-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Prior to d0f810f (refs.c: allow listing and deleting badly
named refs, 2014-09-03), read_packed_refs would barf on any
malformed refnames by virtue of calling create_ref_entry
with the "check" parameter set to 1. That commit loosened
our reading so that we call check_refname_format ourselves
and just set a REF_BAD_NAME flag.
We then call create_ref_entry with the check parameter set
to 0. That function learned to do an extra safety check even
when the check parameter is 0, so that we don't load any
dangerous refnames (like "../../../etc/passwd"). This is
implemented by calling refname_is_safe() in
create_ref_entry().
However, we can observe that refname_is_safe() can only reject a
name if check_refname_format() has also failed. So in the common
case of a sanely named ref, we perform _both_ checks, even
though we know that the latter will never trigger. This has
a noticeable performance impact when the packed-refs file is
large.
Let's drop the refname_is_safe check from create_ref_entry(),
and make it the responsibility of the caller. Of the three
callers that pass a check parameter of "0", two will have
just called check_refname_format(), and can check refname safety
only when it fails. The third case,
pack_if_possible_fn, is copying from an existing ref entry,
which must have previously passed our safety check.
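A hedged sketch of the caller-side check after this patch (flag and
message details are illustrative):

    if (check_refname_format(refname, REFNAME_ALLOW_ONELEVEL)) {
        if (!refname_is_safe(refname))
            die("packed refname is dangerous: %s", refname);
        hashclr(sha1);
        flag |= REF_BAD_NAME | REF_ISBROKEN;
    }
    last = create_ref_entry(refname, sha1, flag, 0);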
With this patch, running "git rev-parse refs/heads/does-not-exist"
on a repo with a large (1.6GB) packed-refs file went from:
real 0m6.768s
user 0m6.340s
sys 0m0.432s
to:
real 0m5.703s
user 0m5.276s
sys 0m0.432s
for a wall-clock speedup of 15%.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git prune" used to largely ignore broken refs when deciding which
objects are still being used, which could spread an existing small
damage and make it a larger one.
* jk/prune-with-corrupt-refs:
refs.c: drop curate_packed_refs
repack: turn on "ref paranoia" when doing a destructive repack
prune: turn on ref_paranoia flag
refs: introduce a "ref paranoia" flag
t5312: test object deletion code paths in a corrupted repository
When we delete a ref, we have to rewrite the entire
packed-refs file. We take this opportunity to "curate" the
packed-refs file and drop any entries that are crufty or
broken.
Dropping broken entries (e.g., with bogus names, or ones
that point to missing objects) is actively a bad idea, as it
means that we lose any notion that the data was there in the
first place. Aside from the general hackiness that we might
lose any information about ref "foo" while deleting an
unrelated ref "bar", this may seriously hamper any attempts
by the user at recovering from the corruption in "foo".
They will lose the sha1 and name of "foo"; the exact pointer
may still be useful even if they recover missing objects
from a different copy of the repository. But worse, once the
ref is gone, there is no trace of the corruption. A
follow-up "git prune" may delete objects, even though it
would otherwise bail when seeing corruption.
We could just drop the "broken" bits from
curate_packed_refs, and continue to drop the "crufty" bits:
refs whose loose counterpart exists in the filesystem. This
is not wrong to do, and it does have the advantage that we
may write out a slightly smaller packed-refs file. But it
has two disadvantages:
1. It is a potential source of races or mistakes with
respect to these refs that are otherwise unrelated to
the operation. To my knowledge, there aren't any active
problems in this area, but it seems like an unnecessary
risk.
2. We have to spend time looking up the matching loose
refs for every item in the packed-refs file. If you
have a large number of packed refs that do not change,
that outweighs the benefit from writing out a smaller
packed-refs file (it doesn't get smaller, and you do a
bunch of directory traversal to find that out).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Most operations that iterate over refs are happy to ignore
broken cruft. However, some operations should be performed
with knowledge of these broken refs, because it is better
for the operation to choke on a missing object than it is to
silently pretend that the ref did not exist (e.g., if we are
computing the set of reachable tips in order to prune
objects).
These processes could just call for_each_rawref, except that
ref iteration is often hidden behind other interfaces. For
instance, for a destructive "repack -ad", we would have to
inform "pack-objects" that we are destructive, and then it
would in turn have to tell the revision code that our
"--all" should include broken refs.
It's much simpler to just set a global for "dangerous"
operations that includes broken refs in all iterations.
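A hedged sketch of that global switch (names and plumbing are
assumptions based on the shortlog above):

    int ref_paranoia = -1;  /* -1 = not yet decided */

    /* In the central ref-iteration helper in refs.c: */
    if (ref_paranoia < 0)
        ref_paranoia = git_env_bool("GIT_REF_PARANOIA", 0);
    if (ref_paranoia)
        flags |= DO_FOR_EACH_INCLUDE_BROKEN;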
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Various issues around "reflog expire", e.g. using --updateref when
expiring a reflog for a symbolic reference, have been corrected
and/or made saner.
* mh/expire-updateref-fixes:
reflog_expire(): never update a reference to null_sha1
reflog_expire(): ignore --updateref for symbolic references
reflog: improve and update documentation
struct ref_lock: delete the force_write member
lock_ref_sha1_basic(): do not set force_write for missing references
write_ref_sha1(): move write elision test to callers
write_ref_sha1(): remove check for lock == NULL