In the early days of the bitmap code, there was a single
static bitmap_index struct that was used behind the scenes,
and any bitmap-related functions could lazily check
bitmap_git.loaded to see if they needed to read the on-disk
data.
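For illustration, the old idiom looked something like this (a
simplified sketch, not the verbatim code; the helper name here is
hypothetical, while open_pack_bitmap() is the real loader in
pack-bitmap.c):

    /* hypothetical helper showing the old lazy-load check */
    static int ensure_bitmap_loaded(void)
    {
            if (bitmap_git.loaded)
                    return 0;
            if (open_pack_bitmap() < 0)
                    return -1;      /* no usable .bitmap on disk */
            bitmap_git.loaded = 1;
            return 0;
    }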
But since 3ae5fa0768 (pack-bitmap: remove bitmap_git global
variable, 2018-06-07), the caller is responsible for the
lifetime of the bitmap_index struct, and we return it from
prepare_bitmap_git() and prepare_bitmap_walk(), both of
which load the on-disk data (or return NULL).
So outside of these functions, it's not possible to have a
bitmap_index for which the loaded flag is not true. Nor is
it possible to accidentally pass an already-loaded
bitmap_index to the loading function (which is static-local
to the file).
We can drop this unnecessary and confusing flag.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since it was introduced in fff42755ef (pack-bitmap: add
support for bitmap indexes, 2013-12-21), this function has
freed the result after traversing it. That is an artifact of
the early days of the bitmap code, when we had a single
static "struct bitmap_index". Back then, it was intended
that you would do:
prepare_bitmap_walk(&revs);
traverse_bitmap_commit_list(&revs);
Since the actual bitmap_index struct was totally behind the
scenes, it was convenient for traverse_bitmap_commit_list()
to clean it up, clearing the way for another traversal.
But since 3ae5fa0768 (pack-bitmap: remove bitmap_git global
variable, 2018-06-07), the caller explicitly manages the
bitmap_index struct itself, like this:
b = prepare_bitmap_walk(&revs);
traverse_bitmap_commit_list(b, &revs);
free_bitmap_index(b);
It no longer makes sense to auto-free the result after the
traversal. If you want to do another traversal, you'd just
create a new bitmap_index. And while nobody tries to call
traverse_bitmap_commit_list() twice, the fact that it throws
away the result might be surprising, and is better avoided.
Note that in the "old" way it was possible for two walks to
amortize the cost of opening the on-disk .bitmap file (since
it was stored in the global bitmap_index), but we lost that
in 3ae5fa0768. However, no code actually does this, so it's
not worth addressing now. The solution might involve a new:
reset_bitmap_walk(b, &revs);
call. Or we might even attach the bitmap data to its
matching packed_git struct, so that multiple
prepare_bitmap_walk() calls could use it. That can wait
until somebody actually has need of the optimization (and
until then, we'll do the correct, unsurprising thing).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 6a1e32d532 (pack-objects: reuse on-disk deltas for
thin "have" objects, 2018-08-21) taught pack-objects a new
optimization trick. Since this wasn't meant to change
user-visible behavior, but only produce smaller packs more
quickly, testing focused on t/perf/p5311.
However, since people don't run perf tests very often, we
should make sure that the feature is exercised in the
regular test suite. This patch does so.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 30cdc33fba (pack-bitmap: save "have" bitmap from
walk, 2018-08-21) introduced a new function for looking at
the "have" side of a bitmap walk. Because it only makes
sense to do so after we've finished the walk, we added an
extra safety assertion, making sure that bitmap_git->result
is non-NULL.
However, this safety is misguided. It was trying to catch
the case where we had called prepare_bitmap_walk() to give
us a "struct bitmap_index", but had not yet called
traverse_bitmap_commit_list() to walk it. But all of the
interesting computation (including setting up the result and
"have" bitmaps) happens in the first function! The latter
function only delivers the result to a callback function.
So the case we were worried about is impossible; if you get
a non-NULL result from prepare_bitmap_walk(), then its
"have" field will be fully formed.
But much worse, traverse_bitmap_commit_list() actually frees
the result field as it finishes. Which means that this
assertion is worse than useless: it's almost guaranteed to
trigger!
Our test suite didn't catch this because the function isn't
actually exercised at all. The only caller comes from
6a1e32d532 (pack-objects: reuse on-disk deltas for thin
"have" objects, 2018-08-21), and that's triggered only when
you fetch or push history that contains an object with a
base that is found deep in history. Our test suite fetches
and pushes either don't use bitmaps, or use too-small
example repositories. But any reasonably-sized real-world
push or fetch (with bitmaps) would trigger this.
This patch drops the harmful assertion and tweaks the
docstring for the function to make the precondition clear.
The tests need to be improved to exercise this new
pack-objects feature, but we'll do that in a separate
commit.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change "fetch" to treat "+" in refspecs (aka --force) to mean we
should clobber a local tag of the same name.
This changes the long-standing behavior of "fetch" added in
853a3697dc ("[PATCH] Multi-head fetch.", 2005-08-20). Before this
change, all tag fetches effectively had --force enabled. See the
git-fetch-script code in fast_forward_local() with the comment:
> Tags need not be pointing at commits so there is no way to
> guarantee "fast-forward" anyway.
That commit and the rest of the history of "fetch" shows that the
"+" (--force) part of refpecs was only conceived for branch updates,
while tags have accepted any changes from upstream unconditionally and
clobbered the local tag object. Changing this behavior has been
discussed as early as 2011[1].
The current behavior doesn't make sense to me; it easily results in
local tags accidentally being clobbered. We could namespace our tags
per-remote and not locally populate refs/tags/*, but as with my
97716d217c ("fetch: add a --prune-tags option and fetch.pruneTags
config", 2018-02-09) it's easier to work around the current
implementation than to fix the root cause.
So this change implements suggestion #1 from Jeff's 2011 E-Mail[1],
"fetch" now only clobbers the tag if either "+" is provided as part of
the refspec, or if "--force" is provided on the command-line.
This also makes it nicely symmetrical with how "tag" itself works when
creating tags. I.e. we refuse to clobber any existing tags unless
"--force" is supplied. Now we can refuse all such clobbering, whether
it would happen by clobbering a local tag with "tag", or by fetching
it from the remote with "fetch".
Ref updates outside refs/{tags,heads}/* are still not symmetrical
with how "git push" works, as discussed in the recently changed
pull-fetch-param.txt documentation. This change brings the two
divergent behaviors more into line with one another. I don't think
there's any reason "fetch" couldn't fully converge with the behavior
used by "push", but that's a topic for another change.
One of the tests added in 31b808a032 ("clone --single: limit the fetch
refspec to fetched branch", 2012-09-20) is being changed to use
--force where a clone would clobber a tag. This changes nothing about
the existing behavior of the test.
1. https://public-inbox.org/git/20111123221658.GA22313@sigill.intra.peff.net/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Refer to the new git-push(1) documentation about when ref updates are
and aren't allowed with and without --force, noting how "git-fetch"
differs from the behavior of "git-push".
Perhaps it would be better to split this all out into a new
gitrefspecs(7) man page, or present this information using tables.
In lieu of that, this is accurate, and fixes a big omission in the
existing refspec docs.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are complex rules governing whether a push is allowed to take
place depending on whether we're pushing to refs/heads/*, refs/tags/*
or refs/not-that/*. See is_branch() in refs.c, and the various
assertions in refs/files-backend.c. (e.g. "trying to write non-commit
object %s to branch '%s'").
This documentation has never been quite correct, but went downhill
after dbfeddb12e ("push: require force for refs under refs/tags/",
2012-11-29) when we started claiming that <dst> couldn't be a tag
object, which is incorrect. After some of the logic in that patch was
changed in 256b9d70a4 ("push: fix "refs/tags/ hierarchy cannot be
updated without --force"", 2013-01-16) the docs weren't updated, and
we've had some version of documentation that confused whether <src>
was a tag or not with whether <dst> would accept either an annotated
tag object or the commit it points to.
This makes the intro somewhat more verbose and complex; perhaps we
should have a shorter description here and split the full complexity
into a dedicated section. Very few users will find themselves needing
to e.g. push blobs or trees to refs/custom-namespace/* (or blobs or
trees at all), and that could be covered separately as an advanced
topic.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This change will be followed up with a subsequent change where I'll
change both sides of this mention of "tag <tag>" to be something
that's best read without interruption.
To make that change smaller, let's move this mention of "tag <tag>" to
the end of the "<refspec>..." section, it's now somewhere in the
middle.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Saying that "git push <remote> <src>:<dst>" won't push a merger of
<src> and <dst> to <dst> is clear from the rest of the context here,
so mentioning it is redundant. Furthermore, the mention of "EXAMPLES
below" isn't specific or useful.
This phrase was originally added in 149f6ddfb3 ("Docs: Expand
explanation of the use of + in git push refspecs.", 2009-02-19). As
can be seen in that change, the point of the example being cited was to
show that force pushing can leave unreferenced commits on the
remote. It's enough that we explain that in its own section, it
doesn't need to be mentioned here.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The test suite only incidentally (and unintentionally) tested for the
current behavior of eager tag clobbering on "fetch". This is a
followup to 380efb65df ("push tests: assert re-pushing annotated
tags", 2018-07-31) which tests for it explicitly.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The quoted -m'msg' option would mean the same as -mmsg when passed
through the test_force_push_tag helper. Let's instead use a string
with spaces in it, to have a working example in case we need to pass
other whitespace-delimited arguments to git-tag.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix up a logic error in 380efb65df ("push tests: assert re-pushing
annotated tags", 2018-07-31), where the $tag_type_description variable
was assigned to but never used, unlike in the subsequently added
companion test for fetches in 2d216a7ef6 ("fetch tests: add a test for
clobbering tag behavior", 2018-04-29).
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The -h output has been referring to --force as forcing the
overwriting of local branches, but since "fetch" more generally
fetches all sorts of references in all refs/ namespaces, let's talk
about forcing the update of a "reference" instead.
This wording was initially introduced in 8320199873 ("Rewrite
builtin-fetch option parsing to use parse_options().", 2007-12-04).
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
OLD_ICONV has long been needed by FreeBSD so config.mak.uname defines
it unconditionally. However, recent versions do not need it, and its
presence results in compilation warnings. Resolve this issue by defining
OLD_ICONV only for older FreeBSD versions.
Specifically, revision r281550[1], which is part of FreeBSD 11, removed
the need for OLD_ICONV, and r282275[2] back-ported that change to 10.2.
Versions prior to 10.2 do need it.
[1] b0813ee288
[2] b709ec868a
[es: commit message; tweak version check to distinguish 10.x versions]
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
doc-diff creates a temporary working tree (git-worktree) and generates a
bunch of temporary files which it does not remove since they act as a
cache to speed up subsequent runs. Although doc-diff's working tree and
generated files are not strictly build products of the Makefile (which,
itself, never runs doc-diff), as a convenience, update "make clean" to
clean up doc-diff's working tree and generated files along with other
development detritus normally removed by "make clean".
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As part of its operation, doc-diff creates a bunch of temporary
working files and holds onto them in order to speed up subsequent
invocations. These files are never deleted. Moreover, it creates a
temporary working tree (via git-worktree) which likewise never gets
removed.
Without knowing the implementation details of the tool, a user may not
know how to clean up manually afterward. Worse, the user may find it
surprising and alarming to discover a working tree which s/he did not
create explicitly.
To address these issues, add a --clean mode which removes the
temporary working tree and deletes all generated files.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
doc-diff invokes 'man' with the -l option to force "local" mode,
however, neither macOS nor FreeBSD recognizes this option. On those
platforms, if the argument to 'man' contains a slash, it is
automatically interpreted as a file specification, so a "local"-like
mode is not needed. And, it turns out, 'man' which does support -l
falls back to enabling -l automatically if it can't otherwise find a
manual entry corresponding to the argument. Since doc-diff always
passes an absolute path of the nroff source file to 'man', the -l
option kicks in anyhow, despite not being specified explicitly.
Therefore, make the invocation portable to the various platforms by
simply dropping -l.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The previous commit switched "-l" to meaning "--list", but a
few vestiges of its prior meaning as "--create-reflog"
remained:
- the synopsis mentioned "-l" when creating a new branch;
we can drop this entirely, as it has been the default
for years
- the --list command mentions the unfortunate "-l"
confusion, but we've now fixed that
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The exact byte count of the delta base file is important.
The test-delta helper will feed it to patch_delta(), which
will barf if it doesn't match the size byte given in the
delta. Using "echo" may end up with unexpected line endings
on some platforms (e.g., "\r\n" instead of just "\n").
This actually wouldn't cause the test to fail (since we
already expect test-delta to complain about these bogus
deltas), but would mean that we're not exercising the code
we think we are.
Let's use printf instead (which we already trust to give us
byte-perfect output when we generate the deltas).
While we're here, let's tighten the 5-byte result size used
in the "truncated copy parameters" test. This just needs to
have enough room to attempt to parse the bogus copy command,
meaning 2 is sufficient. Using 5 was arbitrary and just
copied from the base size; since those no longer match, it's
simply confusing. Let's use a more meaningful number.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 3afc679b "commit: use generations in paint_down_to_common()",
the queue in paint_down_to_common() was changed to use a priority
order based on generation number before commit date. This served
two purposes:
1. When generation numbers are present, the walk guarantees
correct topological relationships, regardless of clock skew in
commit dates.
2. It enables short-circuiting the walk when the min_generation
parameter is added in d7c1ec3e "commit: add short-circuit to
paint_down_to_common()". This short-circuit helps commands
like 'git branch --contains' from needing to walk to a merge
base when we know the result is false.
The commit message for 3afc679b includes the following sentence:
This change does not affect the number of commits that are
walked during the execution of paint_down_to_common(), only
the order that those commits are inspected.
This statement is incorrect. Because it changes the order in which
the commits are inspected, it changes the order they are added to
the queue, and hence can change the number of loops before the
queue_has_nonstale() method returns true.
This change makes a concrete difference depending on the topology
of the commit graph. For instance, computing the merge-base between
consecutive versions of the Linux kernel has no effect for versions
after v4.9, but 'git merge-base v4.8 v4.9' presents a performance
regression:
v2.18.0: 0.122s
v2.19.0-rc1: 0.547s
HEAD: 0.127s
To determine that this was simply an ordering issue, I inserted
a counter within the while loop of paint_down_to_common() and
found that the loop runs 167,468 times in v2.18.0 and 635,579
times in v2.19.0-rc1.
The topology of this case can be described in a simplified way
here:
v4.9
| \
| \
v4.8 \
| \ \
| \ |
... A B
| / /
| / /
|/__/
C
Here, the "..." means "a very long line of commits". By generation
number, A and B have generation one more than C. However, A and B
have commit date higher than most of the commits reachable from
v4.8. When the walk reaches v4.8, we realize that it has PARENT1
and PARENT2 flags, so everything it can reach is marked as STALE,
including A. B has only the PARENT1 flag, so is not STALE.
When paint_down_to_common() is run using
compare_commits_by_commit_date, A and B are removed from the queue
early and C is inserted into the queue. At this point, C and the
rest of the queue entries are marked as STALE. The loop then
terminates.
When paint_down_to_common() is run using
compare_commits_by_gen_then_commit_date, B is removed from the
queue only after the many commits reachable from v4.8 are explored.
This causes the loop to run longer. The reason for this regression
is simple: the queue order is intended to not explore a commit
until everything that _could_ reach that commit is explored. From
the information gathered by the original ordering, we have no
guarantee that there is not a commit D reachable from v4.8 that
can also reach B. We gained absolute correctness in exchange for
a performance regression.
Of the two, the performance regression is probably the worse
problem, since incorrect results from paint_down_to_common() are
rare. The topology required for the performance regression is less
rare, but still requires multiple merge commits where the parents differ
greatly in generation number. In our example above, the commit A
is as important as the commit B to demonstrate the problem, since
otherwise the commit C will sit in the queue as non-stale just as
long in both orders.
The solution provided uses the min_generation parameter to decide
if we should use generation numbers in our ordering. When
min_generation is equal to zero, it means that the caller has no
known cutoff for the walk, so we should rely on our commit-date
heuristic as before; this is the case with merge_bases_many().
When min_generation is non-zero, then the caller knows a valuable
cutoff for the short-circuit mechanism; this is the case with
remove_redundant() and in_merge_bases_many().
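A minimal sketch of the resulting logic (names follow commit.c;
details elided):

    static struct commit_list *paint_down_to_common(
            struct commit *one, int n, struct commit **twos,
            uint32_t min_generation)
    {
            /* generation order when the caller gave us a cutoff... */
            struct prio_queue queue = { compare_commits_by_gen_then_commit_date };

            /* ...but fall back to the commit-date heuristic without one */
            if (!min_generation)
                    queue.compare = compare_commits_by_commit_date;
            ...
    }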
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we see a delta command instructing us to copy bytes
from the base, we have to read the offset and size from the
delta stream. We do this without checking whether we're at
the end of the stream, meaning we may read past the end of
the buffer.
In practice this isn't exploitable in any interesting way
because:
1. Deltas are always in packfiles, so we have at least a
20-byte trailer that we'll end up reading.
2. The worst case is that we try to perform a nonsense
copy from the base object into the result, based on
whatever was in the pack stream next. In most cases
this will simply fail due to our bounds-checks against
the base or the result.
But even if you carefully constructed a pack stream for
which it succeeds, it wouldn't perform any delta
operation that you couldn't have simply included in a
non-broken form.
But obviously it's poor form to read past the end of the
buffer we've been given. Unfortunately there's no easy way
to do a single length check, since the number of bytes we
need depends on the number of bits set in the initial
command byte. So we'll just check each byte as we parse. We
can hide the complexity in a macro; it's ugly, but not as
ugly as writing out each individual conditional.
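The macro ends up with roughly this shape (illustrative; the exact
name and form in patch-delta.c may differ):

    #define PARSE_CP_PARAM(bit, var, shift) do { \
            if (cmd & (bit)) { \
                    if (data >= top) \
                            goto bad_length; \
                    var |= ((unsigned) *data++ << (shift)); \
            } } while (0)

    PARSE_CP_PARAM(0x01, cp_off, 0);
    PARSE_CP_PARAM(0x02, cp_off, 8);
    PARSE_CP_PARAM(0x04, cp_off, 16);
    PARSE_CP_PARAM(0x08, cp_off, 24);
    ...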
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When applying a delta, if we see an opcode that cannot be
fulfilled (e.g., asking to write more bytes than the
destination has left), we break out of our parsing loop but
don't signal an explicit error. We rely on the sanity check
after the loop to see if we have leftover delta bytes or
didn't fill our result buffer.
This can silently ignore corruption when the delta buffer
ends with a bogus command and the destination buffer is
already full. Instead, let's jump into the error handler
directly when we see this case.
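Concretely, the change turns the early loop exit into an explicit
jump (one branch shown for illustration; the copy-command branch in
patch-delta.c gets the same treatment):

    } else if (cmd) {
            if (cmd > size)
                    goto bad_length;   /* was: break */
            memcpy(out, data, cmd);
            ...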
Note that the tests also cover the "bad opcode" case, which
already handles this correctly.
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If `cmd` is in the range [0x01,0x7f] and `cmd > top-data`, the
`memcpy(out, data, cmd)` can copy out-of-bounds data from after `delta_buf`
into `dst_buf`.
This is not an exploitable bug because triggering the bug increments the
`data` pointer beyond `top`, causing the `data != top` sanity check after
the loop to trigger and discard the destination buffer - which means that
the result of the out-of-bounds read is never used for anything.
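A minimal sketch of the missing bounds check (exact form and
placement in patch-delta.c may differ):

    } else if (cmd) {
            if (cmd > top - data)   /* don't read past the end of delta_buf */
                    goto bad_length;
            memcpy(out, data, cmd);
            ...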
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We don't have any tests that specifically check boundary
cases in patch_delta(). It obviously gets exercised by tests
which read from packfiles, but it's hard to create packfiles
with bogus deltas.
So let's cover some obvious boundary cases:
1. commands that overflow the result buffer
a. literal content from the delta
b. copies from a base
2. commands where the source isn't large enough
a. literal content from a truncated delta
b. copies that need more bytes than the base has
3. copy commands whose parameters are truncated
And indeed, we have problems with both 2a and 3. I've marked
these both as expect_failure, though note that because they
involve reading past the end of a buffer, they will
typically only be caught when run under valgrind or ASan.
There's one more test here, too, which just applies a basic
delta. Since all of the other tests expect failure and we
don't otherwise use "test-tool delta" in the test suite,
this gives a sanity check that the tool works at all.
These are based on an earlier patch by Jann Horn
<jannh@google.com>.
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We currently read the input to test-delta by mmap()-ing it.
However, memory-checking tools like valgrind and ASan are
less able to detect reads/writes past the end of an mmap'd
buffer, because the OS is likely to give us extra bytes to
pad out the final page size. So instead, let's read into a
heap buffer.
As a bonus, this also makes it possible to write tests with
empty bases, as mmap() will complain about a zero-length
map.
This is based on a patch by Jann Horn <jannh@google.com>
which actually aligned the data at the end of a page, and
followed it with another page marked with mprotect(). That
would detect problems even without a tool like ASan, but it
was significantly more complex and may have introduced
portability problems. By comparison, this approach pushes
the complexity onto existing memory-checking tools.
Note that this could be done even more simply by using
strbuf_read_file(), but that would defeat the purpose:
strbufs generally overallocate (and at the very least
include a trailing NUL which we do not care about), which
would hide overreads from most memory checkers.
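A sketch of the heap-based read (the function name is illustrative;
the helpers are git's existing wrapper APIs, and error handling is
abbreviated):

    static void *read_whole_file(const char *path, unsigned long *len)
    {
            int fd = open(path, O_RDONLY);
            struct stat st;
            void *buf;

            if (fd < 0 || fstat(fd, &st) < 0)
                    die_errno("unable to open or stat %s", path);
            *len = xsize_t(st.st_size);
            buf = xmalloc(*len); /* exact size: no slack to mask overreads */
            if (read_in_full(fd, buf, *len) < 0)
                    die_errno("unable to read %s", path);
            close(fd);
            return buf;
    }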
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The doc-diff script immediately resolves its two endpoints
to actual object ids, so that we can reuse cached results
even if they appear under a different name. But we still use
the original name the user fed us when running "git
checkout" in our temporary worktree. This can lead to
confusing results:
- the namespace inside the worktree is different than the
one outside. In particular, "./doc-diff origin HEAD"
will resolve HEAD inside the worktree, whose detached
HEAD will be pointing at origin! As a result, such a
diff would always be empty.
- worse, we will store this result under the oid we got by
resolving HEAD in the main worktree, thus polluting our
cache
- we didn't pass --detach, which meant that using a branch
name would cause us to actually check out that branch,
making it unavailable to other worktrees.
We can solve this by feeding the already-resolved object id
to git-checkout. That naturally forces a detached HEAD, but
just to make clear our expectation, let's explicitly pass
--detach.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
For cleanliness, "git worktree prune" deletes the .git/worktrees
directory if it is empty after pruning is complete.
For consistency, make "git worktree remove <path>" likewise delete
.git/worktrees if it is empty after the removal.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
For consistency with "add -f -f" and "move -f -f" which override
the lock on a worktree, allow "remove -f -f" to do so, as well, as a
convenience.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
For consistency with "add -f -f", which allows a missing but locked
worktree path to be re-used, allow "move -f -f" to override a lock,
as well, as a convenience.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
For safety, "git worktree add <path>" will refuse to add a new
worktree at <path> if <path> is already associated with a worktree
entry, even if <path> is missing (for instance, has been deleted or
resides on non-mounted removable media or network share). The typical
way to re-create a worktree at <path> in such a situation is either to
prune all "broken" entries ("git worktree prune") or to selectively
remove the worktree entry manually ("git worktree remove <path>").
However, neither of these approaches ("prune" nor "remove") is
especially convenient, and they may be unsuitable for scripting when a
tool merely wants to re-use a worktree if it exists or create it from
scratch if it doesn't (much as a tool might use "mkdir -p" to re-use
or create a directory).
Therefore, teach 'add' to respect --force as a convenient way to
re-use a path already associated with a worktree entry if the path is
non-existent. For a locked worktree, require --force to be specified
twice.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A given path should only ever be associated with a single registered
worktree. This invariant is enforced by refusing to create a new
worktree at a given path if that path already exists. For example:
$ git worktree add -q --detach foo
$ git worktree add -q --detach foo
fatal: 'foo' already exists
However, the check can be fooled, and the invariant broken, if the
path is missing. Continuing the example:
$ rm -fr foo
$ git worktree add -q --detach foo
$ git worktree list
... eadebfe [master]
.../foo eadebfe (detached HEAD)
.../foo eadebfe (detached HEAD)
This "corruption" leads to the unfortunate situation in which the
worktree cannot be removed:
$ git worktree remove foo
fatal: validation failed, cannot remove working tree: '.../foo'
does not point back to '.git/worktrees/foo'
Nor can the bogus entry be pruned:
$ git worktree prune -v
$ git worktree list
... eadebfe [master]
.../foo eadebfe (detached HEAD)
.../foo eadebfe (detached HEAD)
without first deleting the worktree directory manually:
$ rm -fr foo
$ git worktree prune -v
Removing .../foo: gitdir file points to non-existent location
Removing .../foo1: gitdir file points to non-existent location
$ git worktree list
... eadebfe [master]
or by manually deleting the worktree entry in .git/worktrees.
To address this problem, upgrade "git worktree add" validation to
allow worktree creation only if the given path is not already
associated with an existing worktree (even if the path itself is
non-existent), thus preventing such bogus worktree entries from being
created in the first place.
Reported-by: Jeff King <peff@peff.net>
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Certain conditions must be met for a path to be a valid candidate as the
location of a new worktree; for instance, the path must not exist or
must be an empty directory. Although the number of conditions is small,
new conditions will soon be added, so factor out the existing checks into
a separate function to avoid further bloating add_worktree().
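A sketch of the factored-out helper (the function name is
illustrative; file_exists() and is_empty_dir() are existing git
helpers):

    static void validate_worktree_add(const char *path)
    {
            /* the path must not exist, or must be an empty directory */
            if (file_exists(path) && !is_empty_dir(path))
                    die(_("'%s' already exists"), path);
    }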
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
prune_worktrees() and delete_git_dir() both remove worktree
administrative entries from .git/worktrees, and their implementations
are nearly identical. The only difference is that prune_worktrees() is
also capable of removing a bogus non-worktree-related file from
.git/worktrees.
Simplify by extending delete_git_dir() to handle the little bit of
extra functionality needed by prune_worktrees(), and drop the
effectively duplicate code from the latter.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is a pure code movement to avoid having to forward-declare the
function when new callers are subsequently added.
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Callers don't expect library function find_worktree() to die(); they
expect it to return the named worktree if found, or NULL if not.
Although find_worktree() itself never invokes die(), it calls
real_pathdup() with 'die_on_error' incorrectly set to 'true', and thus will
die() indirectly if the user-provided path is not to real_pathdup()'s
liking. This can be observed, for instance, with any git-worktree
command which searches for an existing worktree:
$ git worktree unlock foo
fatal: 'foo' is not a working tree
$ git worktree unlock foo/bar
fatal: Invalid path '.../foo': No such file or directory
The first error message is the expected one from "git worktree unlock"
not finding the specified worktree; the second is from find_worktree()
invoking real_pathdup() incorrectly and die()ing prematurely.
Aside from the inconsistent error message between the two cases, this
bug hasn't otherwise been a serious problem since existing callers all
die() anyhow when the worktree can't be found. However, that may not be
true of callers added in the future, so fix find_worktree() to avoid
die()ing.
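The fix amounts to calling real_pathdup() in its non-fatal mode and
propagating the failure, roughly:

    path = real_pathdup(arg, 0);
    if (!path)
            return NULL;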
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Let's say you have the following three trees, where Base is one commit
behind both master and branch:
Base : bar_v1, foo/{file1, file2, file3}
branch: bar_v2, foo/{file1, file2}, goo/file3
master: bar_v3, foo/{file1, file2, file3}
Using git-am (or am-based rebase) to apply the changes from branch onto
master results in the following tree:
Result: bar_merged, goo/{file1, file2, file3}
This is not what users want; they did not rename foo/ -> goo/, they only
renamed one file within that directory. The reason this happens is that am
constructs fake trees (via build_fake_ancestor()) of the following form:
Base_bfa : bar_v1, foo/file3
branch_bfa: bar_v2, goo/file3
Combining these two trees with master's tree:
master: bar_v3, foo/{file1, file2, file3},
You can see that merge_recursive_generic() would see branch_bfa as renaming
foo/ -> goo/, and master as just adding both foo/file1 and foo/file2. As
such, it ends up with goo/{file1, file2, file3}
The core problem is that am does not have access to the original trees; it
can only construct trees using the blobs involved in the patch. As such,
it is not safe to perform directory rename detection within am -3.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Similar to commit 16346883ab ("t3401: add directory rename testcases for
rebase and am", 2018-06-27), add another testcase for directory rename
detection. This new testcase differs in that it showcases a situation
where no directory rename was performed, but which some backends
incorrectly detect.
As with the other testcase, run this in conjunction with each of the
types of rebases:
git-rebase--interactive
git-rebase--am
git-rebase--merge
and also use the same testcase for
git am --3way
Reported-by: Nikolay Kasyanov <corrmage@gmail.com>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add best-effort support for patches sent using format=flowed (RFC 3676).
Remove leading spaces ("unstuff"), remove soft line breaks (indicated
by space + newline), but leave the signature separator (dash dash space
newline) alone.
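A sketch of the per-line handling (the helper name is illustrative;
the actual code in mailinfo.c may be structured differently):

    static void handle_flowed_line(struct strbuf *line)
    {
            /* unstuff: drop the single leading space added by the sender */
            if (line->len && line->buf[0] == ' ')
                    strbuf_remove(line, 0, 1);

            /* leave the signature separator "-- \n" alone */
            if (!strcmp(line->buf, "-- \n"))
                    return;

            /* soft break: a space before the newline means this line
             * continues on the next one; drop only the newline */
            if (line->len >= 2 && line->buf[line->len - 1] == '\n' &&
                line->buf[line->len - 2] == ' ')
                    strbuf_setlen(line, line->len - 1);
    }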
Warn in git am when encountering a format=flowed patch, because any
trailing spaces would most probably be lost, as the sending MUA is
encouraged to remove them when preparing the email.
Provide a test patch formatted by Mozilla Thunderbird 60 using its
default configuration. It reuses the contents of the file mailinfo.c
before and after this patch.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The exact sed command to generate manpage-base-url.xsl appears in
the output, unlike the rules for other files, which by default only
show a summary.
Make the output for this rule similar to all the other rules by
printing a short status message instead of the whole command.
Signed-off-by: Tim Schumacher <timschumi@gmx.de>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We use two nested conditionals to store a content_changed
variable, but only bother to look at the result once,
directly after we set it. We can drop the variable entirely
and just use a single "if".
This needless complexity is the result of 2ff3a80334 (Teach
--dirstat not to completely ignore rearranged lines within a
file, 2011-04-11). Before that, we held onto the
content_changed variable much longer.
While we're touching the condition, we can swap out oidcmp()
for !oideq(). Our coccinelle patches didn't previously find
this case because of the intermediate variable, but now it's
a simple boolean in a conditional.
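Illustratively (not the literal diff.c code), the shape of the
simplification is:

    /* before: a variable set in one conditional, tested in another */
    int content_changed = 1;
    if (p->one->oid_valid && p->two->oid_valid &&
        !oidcmp(&p->one->oid, &p->two->oid))
            content_changed = 0;
    if (!content_changed)
            continue;       /* nothing changed for this file */

    /* after: a single "if", using oideq() */
    if (p->one->oid_valid && p->two->oid_valid &&
        oideq(&p->one->oid, &p->two->oid))
            continue;       /* nothing changed for this file */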
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These functions return the full oidcmp() value, but the
callers really only care whether it is non-zero. We can use
the more strict !oideq(), which a compiler may be able to
optimize further.
This does change the meaning of the return value subtly, but
it's unlikely that anybody would try to use them for
ordering. They're static-local in this file, and they
already return other error values that would confuse an
ordering (e.g., open() failure gives -1).
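The conversion itself is mechanical (variable names illustrative):

    -	return oidcmp(&oid, &expected);
    +	return !oideq(&oid, &expected);

Callers that only test the result for non-zero see no behavioral
change.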
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The comparison functions used for hashmaps don't care about
strict ordering; they only want to compare entries for
equality. Let's use the oideq() function instead, which can
potentially be better optimized. Note that unlike the
previous patches mass-converting calls like "!oidcmp()",
this patch could actually provide an improvement even with
the current implementation. Those comparison functions are
passed around as function pointers, so at compile-time the
compiler cannot realize that the caller (which is in another
file completely) will treat the return value as a boolean.
Note that this does change the return values in quite a
subtle way (it's still an int, but now the sign bit is
irrelevant for ordering). Because of their funny
hashmap-specific signature, it's unlikely that any of these
static functions would be reused for more generic ordering.
But to be double-sure, let's stop using "cmp" in their
names.
Calling them "eq" doesn't quite work either, because the
hashmap convention is actually _inverted_. "0" means "same",
and non-zero means "different". So I've called them "neq" by
convention here.
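For example, a converted hashmap callback ends up looking roughly
like this (field and function names illustrative):

    static int commit_name_neq(const void *unused_cmp_data,
                               const void *entry,
                               const void *entry_or_key,
                               const void *unused_keydata)
    {
            const struct commit_name *a = entry;
            const struct commit_name *b = entry_or_key;

            /* hashmap convention: 0 means "same", non-zero "different" */
            return !oideq(&a->peeled, &b->peeled);
    }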
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This rounds out the previous three patches, covering the
inequality logic for the "hash" variant of the functions.
As with the previous three, the accompanying code changes
are the mechanical result of applying the coccinelle patch;
see those patches for more discussion.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is the flip side of the previous two patches: checking
for a non-zero oidcmp() can be more strictly expressed as
inequality. Like those patches, we write "!= 0" in the
coccinelle transformation, which covers by isomorphism the
more common:
if (oidcmp(E1, E2))
As with the previous two patches, this patch can be achieved
almost entirely by running "make coccicheck"; the only
differences are manual line-wrap fixes to match the original
code.
There is one thing to note for anybody replicating this,
though: coccinelle 1.0.4 seems to miss the case in
builtin/tag.c, even though it's basically the same as all
the others. Running with 1.0.7 does catch this, so
presumably it's just a coccinelle bug that was fixed in the
interim.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is the partner patch to the previous one, but covering
the "hash" variants instead of "oid". Note that our
coccinelle rule is slightly more complex to avoid triggering
the call in hasheq().
I didn't bother to add a new rule to convert:
- hasheq(E1->hash, E2->hash)
+ oideq(E1, E2)
Since these are new functions, there won't be any such
existing callers. And since most of the code is already
using oideq, we're not likely to introduce new ones.
We might still see "!hashcmp(E1->hash, E2->hash)" from topics
in flight. But because our new rule comes after the existing
ones, that should first get converted to "!oidcmp(E1, E2)"
and then to "oideq(E1, E2)".
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Using the more restrictive oideq() should, in the long run,
give the compiler more opportunities to optimize these
callsites. For now, this conversion should be a complete
noop with respect to the generated code.
The result is also perhaps a little more readable, as it
avoids the "zero is equal" idiom. Since it's so prevalent in
C, I think seasoned programmers tend not to even notice it
anymore, but it can sometimes make for awkward double
negations (e.g., we can drop a few !!oidcmp() instances
here).
This patch was generated almost entirely by the included
coccinelle patch. This mechanical conversion should be
completely safe, because we check explicitly for cases where
oidcmp() is compared to 0, which is what oideq() is doing
under the hood. Note that we don't have to catch "!oidcmp()"
separately; coccinelle's standard isomorphisms make sure the
two are treated equivalently.
I say "almost" because I did hand-edit the coccinelle output
to fix up a few style violations (it mostly keeps the
original formatting, but sometimes unwraps long lines).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The main comparison functions we provide for comparing
object ids are hashcmp() and oidcmp(). These are more
flexible than a strict equality check, since they also
express ordering. That makes them useful for sorting and
binary searching. However, it also makes them potentially
slower than a strict equality check. Consider this C code,
which is traditionally what our hashcmp has looked like:
#include <string.h>
int hashcmp(const unsigned char *a, const unsigned char *b)
{
return memcmp(a, b, 20);
}
Compiling with "gcc -O2 -S -fverbose-asm", the generated
assembly shows that we actually call memcmp(). But if we
change this to a strict equality check:
return !memcmp(a, b, 20);
we get a faster inline version:
movq (%rdi), %rax # MEM[(void *)a_4(D)], MEM[(void *)a_4(D)]
movq 8(%rdi), %rdx # MEM[(void *)a_4(D)], tmp101
xorq (%rsi), %rax # MEM[(void *)b_5(D)], tmp94
xorq 8(%rsi), %rdx # MEM[(void *)b_5(D)], tmp93
orq %rax, %rdx # tmp94, tmp93
jne .L2 #,
movl 16(%rsi), %eax # MEM[(void *)b_5(D)], tmp104
cmpl %eax, 16(%rdi) # tmp104, MEM[(void *)a_4(D)]
je .L5 #,
Obviously our hashcmp() doesn't include the "!". But because
it's an inline function, optimizing compilers are able to
see "!hashcmp(a,b)" in calling code and take advantage of
this case. So there has been no value thus far in
introducing a more restricted interface for doing strict
equality checks.
But as Git learns about more values for the_hash_algo, our
hashcmp() will grow more complicated and may even delegate
at runtime to functions optimized specifically for that hash
size. That breaks the inline connection we have, and the
compiler will have to assume that the caller really cares
about the sign and magnitude of the memcmp() result, even
though the vast majority don't.
We can solve that by introducing a hasheq() function (and
matching oideq() wrapper), which callers can use to make it
clear that they only care about equality. For now, the
implementation will literally be "!hashcmp()", but it frees
us up later to introduce code optimized specifically for the
equality check.
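For now those are just the trivial wrappers described above (sketch;
the real definitions are inline functions in the hash headers):

    static inline int hasheq(const unsigned char *sha1,
                             const unsigned char *sha2)
    {
            return !hashcmp(sha1, sha2);
    }

    static inline int oideq(const struct object_id *oid1,
                            const struct object_id *oid2)
    {
            return hasheq(oid1->hash, oid2->hash);
    }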
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>