"git format-patch" learned a new "--interdiff" option to explain
the difference between this version and the previous atttempt in
the cover letter (or after the tree-dashes as a comment).
* es/format-patch-interdiff:
format-patch: allow --interdiff to apply to a lone-patch
log-tree: show_log: make commentary block delimiting reusable
interdiff: teach show_interdiff() to indent interdiff
format-patch: teach --interdiff to respect -v/--reroll-count
format-patch: add --interdiff option to embed diff in cover letter
format-patch: allow additional generated content in make_cover_letter()
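A hypothetical invocation (the branch and tag names are made up):
with the previous round tagged "topic-v1", something like
$ git format-patch --cover-letter --interdiff=topic-v1 -v2 master..topic
would embed the interdiff against v1 in the v2 cover letter.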
"git interpret-trailers" and its underlying machinery had a buggy
code that attempted to ignore patch text after commit log message,
which triggered in various codepaths that will always get the log
message alone and never get such an input.
* jk/trailer-fixes:
append_signoff: use size_t for string offsets
sequencer: ignore "---" divider when parsing trailers
pretty, ref-filter: format %(trailers) with no_divider option
interpret-trailers: allow suppressing "---" divider
interpret-trailers: tighten check for "---" patch boundary
trailer: pass process_trailer_opts to trailer_info_get()
trailer: use size_t for iterating trailer list
trailer: use size_t for string offsets
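A sketch of the new escape hatch (the input file name is made up);
"--no-divider" tells the command not to treat "---" in the input as
a patch boundary:
$ git interpret-trailers --no-divider --trailer 'Acked-by: A U Thor <author@example.com>' <msg.txt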
The color output support for the recently introduced "range-diff"
command got tweaked a bit (an invocation sketch follows the list
below).
* sb/range-diff-colors:
range-diff: indent special lines as context
range-diff: make use of different output indicators
diff.c: add --output-indicator-{new, old, context}
diff.c: rewrite emit_line_0 more understandably
diff.c: omit check for line prefix in emit_line_0
diff: use emit_line_0 once per line
diff.c: add set_sign to emit_line_0
diff.c: reorder arguments for emit_line_ws_markup
diff.c: simplify caller of emit_line_0
t3206: add color test for range-diff --dual-color
test_decode_color: understand FAINT and ITALIC
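A hypothetical invocation that exercises the tweaked coloring (the
branch names are made up):
$ git range-diff --dual-color master topic-v1 topic-v2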
When creating a thin pack, which allows objects to be made into a
delta against another object that is not in the resulting pack but
is known to be present on the receiving end, the code learned to
take advantage of the reachability bitmap; this allows the server
to send a delta against a base beyond the "boundary" commit.
* jk/pack-delta-reuse-with-bitmap:
pack-objects: reuse on-disk deltas for thin "have" objects
pack-bitmap: save "have" bitmap from walk
t/perf: add perf tests for fetches from a bitmapped server
t/perf: add infrastructure for measuring sizes
t/perf: factor out percent calculations
t/perf: factor boilerplate out of test_perf
The unpack_trees() API used in checking out a branch and merging
walks one or more trees along with the index. When the cache-tree
in the index tells us that we are walking a tree whose flattened
contents are known (i.e. match a span in the index), the walk can
be optimized, as linearly scanning a span in the index is much more
efficient than opening tree objects recursively and listing their
entries; this topic implements that optimization (a tracing sketch
follows the list below).
* nd/unpack-trees-with-cache-tree:
Document update for nd/unpack-trees-with-cache-tree
cache-tree: verify valid cache-tree in the test suite
unpack-trees: add missing cache invalidation
unpack-trees: reuse (still valid) cache-tree from src_index
unpack-trees: reduce malloc in cache-tree walk
unpack-trees: optimize walking same trees with cache-tree
unpack-trees: add performance tracing
trace.h: support nested performance tracing
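The performance tracing added at the bottom of this series can be
used to observe the effect of the optimization; a sketch (the
branch name is made up):
$ GIT_TRACE_PERFORMANCE=1 git checkout topic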
The code for computing history reachability has been shuffled
around, gained a bunch of new tests to cover it, and then been
improved.
* ds/reachable:
commit-reach: correct accidental #include of C file
commit-reach: use can_all_from_reach
commit-reach: make can_all_from_reach... linear
commit-reach: replace ref_newer logic
test-reach: test commit_contains
test-reach: test can_all_from_reach_with_flags
test-reach: test reduce_heads
test-reach: test get_merge_bases_many
test-reach: test is_descendant_of
test-reach: test in_merge_bases
test-reach: create new test tool for ref_newer
commit-reach: move can_all_from_reach_with_flags
upload-pack: generalize commit date cutoff
upload-pack: refactor ok_to_give_up()
upload-pack: make reachable() more generic
commit-reach: move commit_contains from ref-filter
commit-reach: move ref_newer from remote.c
commit.h: remove method declarations
commit-reach: move walk methods from commit.c
"git submodule update" is getting rewritten piece-by-piece into C.
* sb/submodule-update-in-c:
submodule--helper: introduce new update-module-mode helper
submodule--helper: replace connect-gitdir-workingtree by ensure-core-worktree
builtin/submodule--helper: factor out method to update a single submodule
builtin/submodule--helper: store update_clone information in a struct
builtin/submodule--helper: factor out submodule updating
git-submodule.sh: rename unused variables
git-submodule.sh: align error reporting for update mode to use path
Fixes to "git rerere" corner cases, especially when conflict
markers cannot be parsed in the file.
* tg/rerere:
rerere: recalculate conflict ID when unresolved conflict is committed
rerere: teach rerere to handle nested conflicts
rerere: return strbuf from handle path
rerere: factor out handle_conflict function
rerere: only return whether a path has conflicts or not
rerere: fix crash with files rerere can't handle
rerere: add documentation for conflict normalization
rerere: mark strings for translation
rerere: wrap paths in output in sq
rerere: lowercase error messages
rerere: unify error messages when read_cache fails
When there are too many packfiles in a repository (which is not
recommended), looking up an object in these would require
consulting many pack .idx files; a new mechanism to have a single
file that consolidates all of these .idx files is introduced (a
usage sketch follows the list below).
* ds/multi-pack-index: (32 commits)
pack-objects: consider packs in multi-pack-index
midx: test a few commands that use get_all_packs
treewide: use get_all_packs
packfile: add all_packs list
midx: fix bug that skips midx with alternates
midx: stop reporting garbage
midx: mark bad packed objects
multi-pack-index: store local property
multi-pack-index: provide more helpful usage info
midx: clear midx on repack
packfile: skip loading index if in multi-pack-index
midx: prevent duplicate packfile loads
midx: use midx in approximate_object_count
midx: use existing midx when writing new one
midx: use midx in abbreviation calculations
midx: read objects from multi-pack-index
config: create core.multiPackIndex setting
midx: write object offsets
midx: write object id fanout chunk
midx: write object ids in a chunk
...
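A minimal usage sketch, run inside a repository:
$ git config core.multiPackIndex true
$ git multi-pack-index write
This writes .git/objects/pack/multi-pack-index, consolidating the
.idx files of all packs in the object directory.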
Updated the plan to repurpose the "-l" option to "git branch" (a
brief example follows the list below).
* jk/branch-l-1-repurpose:
doc/git-branch: remove obsolete "-l" references
branch: make "-l" a synonym for "--list"
"git rev-list --stdin </dev/null" used to be an error; it now shows
no output without an error. "git rev-list --stdin --default HEAD"
still falls back to the given default when nothing is given on the
standard input.
* jk/rev-list-stdin-noop-is-ok:
rev-list: make empty --stdin not an error
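The new behavior can be demonstrated directly (the second command
lists commits reachable from HEAD):
$ git rev-list --stdin </dev/null; echo $?
0
$ git rev-list --stdin --default HEAD </dev/null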
"git checkout -b newbranch [HEAD]" should not have to do as much as
checking out a commit different from HEAD. An attempt is made to
optimize this special case.
* bp/checkout-new-branch-optim:
checkout: optimize "git checkout -b <new_branch>"
Running "git clone" against a project that contain two files with
pathnames that differ only in cases on a case insensitive
filesystem would result in one of the files lost because the
underlying filesystem is incapable of holding both at the same
time. An attempt is made to detect such a case and warn.
* nd/clone-case-smashing-warning:
clone: report duplicate entries on case-insensitive filesystems
When core.multiPackIndex is true, we may have a multi-pack-index
in our object directory. Add calls to 'git multi-pack-index verify'
at the end of 'git fsck' if so.
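For example, in a repository with a multi-pack-index:
$ git -c core.multiPackIndex=true fsck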
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When verifying a multi-pack-index, the only action that takes
significant time is checking the object offsets. For example,
to verify a multi-pack-index containing 6.2 million objects in
the Linux kernel repository takes 1.3 seconds on my machine.
99% of that time is spent looking up object offsets in each of
the packfiles and comparing them to the multi-pack-index offset.
Add a progress indicator for that section of the 'verify' verb.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The 'git multi-pack-index verify' command must verify that the
object offsets stored in the multi-pack-index are correct. There
are two values in the offset chunk that can be incorrect: the
pack-int-id and the object offset.
Replace the BUG() statement with a die() statement, now that we
may hit a bad pack-int-id during a 'verify' command on a corrupt
multi-pack-index, and it is covered by a test.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When loading a 64-bit offset, we intend to check that off_t can store
the resulting offset. However, the condition accidentally checks the
32-bit offset to see if it is smaller than a 64-bit value. Fix it;
the fix will be covered by a test of the 'git multi-pack-index
verify' command in a later commit.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The final check we make while loading a multi-pack-index is that
the packfile names are in lexicographical order. Make this error a
die() instead of an error().
In order to test this condition, we need multiple packfiles.
Earlier in t5319-multi-pack-index.sh, we tested the interaction with
'git repack' but this limits us to one packfile in our object dir.
Move these repack tests until after the 'verify' tests.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When verifying whether a multi-pack-index file is valid, we want
the command to fail in order to signal an invalid file. Previously,
we wrote an error to stderr and continued as if we had no
multi-pack-index. Now, die() instead of error().
Add tests that check corrupted headers in a few ways:
* Bad signature
* Bad file version
* Bad hash version
* Truncated hash count
* Extended hash count
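One way to exercise such corruption by hand (an illustrative
sketch, not the tests' exact helper; byte 0, the start of the
signature, is clobbered):
$ printf '\377' | dd of=.git/objects/pack/multi-pack-index bs=1 seek=0 conv=notrunc
$ git multi-pack-index verify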
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The multi-pack-index builtin writes multi-pack-index files, and
uses a 'write' verb to do so. Add a 'verify' verb that checks this
file matches the contents of the pack-indexes it replaces.
The current implementation is a no-op, but will be extended in
small increments in later commits.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This reverts commit 6f924265a0, which
started to require that we have an executable git available in order
to say "make clean", which gives us a chicken-and-egg problem.
Having to have Git installed, or be in a repository, in order to be
able to run an optional "doc-diff" tool is fine. Requiring either
in order to run "make clean" is a different story.
Reported by Jonathan Nieder <jrnieder@gmail.com>.
For the reasons explained in the "commit-graph write: add progress
output" commit leading up to this one, emit progress on "commit-graph
verify". Since e0fd51e1d7 ("fsck: verify commit-graph", 2018-06-27)
"git fsck" has called this command if core.commitGraph=true, but
there's been no progress output to indicate that anything was
different. Now there is (on my tiny dotfiles.git repository):
$ git -c core.commitGraph=true -C ~/ fsck
Checking object directories: 100% (256/256), done.
Checking objects: 100% (2821/2821), done.
dangling blob 5b8bbdb9b788ed90459f505b0934619c17cc605b
Verifying commits in commit graph: 100% (867/867), done.
And on a larger repository, such as the 2015-04-03-1M-git.git test
repository:
$ time git -c core.commitGraph=true -C ~/g/2015-04-03-1M-git/ commit-graph verify
Verifying commits in commit graph: 100% (1000447/1000447), done.
real 0m7.813s
[...]
Since the "commit-graph verify" subcommand is never called from "git
gc", we don't have to worry about passing some some "report_progress"
progress variable around for this codepath.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before this change the "commit-graph write" command didn't report any
progress. On my machine this command takes more than 10 seconds to
write the graph for linux.git, and around 1m30s on the
2015-04-03-1M-git.git[1] test repository (a test case for a large
monorepository).
Furthermore, since the gc.writeCommitGraph setting was added in
d5d5d7b641 ("gc: automatically write commit-graph files", 2018-06-27),
there was no indication at all from a "git gc" run that anything was
different. This is why one of the progress bars being added here uses
start_progress() instead of start_delayed_progress(), so that it's
guaranteed to be seen. E.g. on my tiny 867 commit dotfiles.git
repository:
$ git -c gc.writeCommitGraph=true gc
Enumerating objects: 2821, done.
[...]
Computing commit graph generation numbers: 100% (867/867), done.
On larger repositories, such as linux.git, the delayed progress
bar(s) will kick in, and we'll show what's going on instead of, as
was previously happening, printing nothing while we write the graph:
$ git -c gc.writeCommitGraph=true gc
[...]
Annotating commits in commit graph: 1565573, done.
Computing commit graph generation numbers: 100% (782484/782484), done.
Note that here we don't show "Finding commits for commit graph";
this is because under "git gc" we seed the search with the commit
references in the repository, and that set is too small to show any
progress. We would show it, e.g. on a smaller repo such as git.git
with --stdin-commits:
$ git rev-list --all | git -c gc.writeCommitGraph=true commit-graph write --stdin-commits
Finding commits for commit graph: 100% (162576/162576), done.
Computing commit graph generation numbers: 100% (162576/162576), done.
With --stdin-packs we don't show any estimation of how much is left to
do. This is because we might be processing more than one pack. We
could be less lazy here and show progress, either by detecting that
we're only processing one pack, or by first looping over the packs to
discover how many commits they have. I don't see the point in doing
that work. So instead we get (on 2015-04-03-1M-git.git):
$ echo pack-<HASH>.idx | git -c gc.writeCommitGraph=true --exec-path=$PWD commit-graph write --stdin-packs
Finding commits for commit graph: 13064614, done.
Annotating commits in commit graph: 3001341, done.
Computing commit graph generation numbers: 100% (1000447/1000447), done.
No GC mode uses --stdin-packs. It's what they use at Microsoft to
manually compute the generation numbers for their collection of large
packs which are never coalesced.
The reason we need a "report_progress" variable passed down from "git
gc" is so that we don't report this output when we're running in the
process that "git gc --auto" detaches from the terminal.
Since we write the commit graph from the "git gc" process itself (as
opposed to what we do with, say, the "git repack" phase), we'd end up
writing the output to .git/gc.log and reporting it to the user next
time as part of the "The last gc run reported the following[...]"
error, see 329e6e8794 ("gc: save log from daemonized gc --auto and
print it next time", 2015-09-19).
So we must keep track of whether or not we're running in that
daemonized mode, and if so print no progress.
See [2] and subsequent replies for a discussion of an approach not
taken in compute_generation_numbers(). I.e. we're saying "Computing
commit graph generation numbers", even though on an established
history we're mostly skipping over all the work we did in the
past. This is similar to the white lie we tell in the "Writing
objects" phase (not all of those objects are actually being written).
Always showing progress is considered more important than
accuracy. I.e. on a repository like 2015-04-03-1M-git.git, had we
taken the approach in [2], we'd hang for 6 seconds with no output
on the second "git gc" if no changes were made to any objects in
the interim.
1. https://github.com/avar/2015-04-03-1M-git
2. <c6960252-c095-fb2b-e0bc-b1e6bb261614@gmail.com>
(https://public-inbox.org/git/c6960252-c095-fb2b-e0bc-b1e6bb261614@gmail.com/)
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Introduce a testing suite that is dedicated to aliases.
For now, check only if nested aliases work and if looping
aliases are detected successfully.
The looping-alias check for mixed execution is present but
disabled, because it blocks the test suite for a full minute. As
soon as there is a solution for loops involving external commands,
it should be enabled.
Signed-off-by: Tim Schumacher <timschumi@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Just printing the command that the user entered is not particularly
helpful when trying to find the alias that causes the loop.
Print the history of substituted commands to help the user find the
offending alias. Mark the entrypoint of the loop with "<==" and the
last command (which looped back to the entrypoint) with "==>".
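For example, with a pair of mutually looping aliases (the names are
made up), the new output points at the offending substitutions:
$ git config alias.foo bar
$ git config alias.bar foo
$ git foo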
Signed-off-by: Tim Schumacher <timschumi@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Aliases can only contain non-alias git commands and their arguments,
not other user-defined aliases. Resolving further (nested) aliases
is prevented by breaking the loop after the first alias was
processed. Git then fails with a command-not-found error.
Allow resolving nested aliases by not breaking the loop in
run_argv() after the first alias was processed. Instead, continue
the loop until `handle_alias()` fails, which means that there are no
further aliases that can be processed. Prevent looping aliases by
storing substituted commands in `cmd_list` and checking if a command
has been substituted previously.
While we're at it, fix a styling issue just below the added code.
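A sketch of the nesting this enables (the alias names are made up):
$ git config alias.ll 'log --oneline'
$ git config alias.lg 'll --graph'
$ git lg
Before this change the last command failed with a command-not-found
error; now it expands to "log --oneline --graph".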
Signed-off-by: Tim Schumacher <timschumi@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of hard-coding a 40-based constant, split the output of
for-each-ref and for-each-reflog by field.
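For example, rather than something like "cut -c 1-40", take the
first whitespace-delimited field, which works for any hash length
(a sketch, not the tests' exact code):
$ git for-each-ref --format='%(objectname) %(refname)' | cut -d' ' -f1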
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of hard-coding a 40-based constant, split the output of
for-each-ref and for-each-reflog by field.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of hard-coding a 40-based constant, split the output of
for-each-ref and for-each-reflog by field.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Switch a hard-coded all-zeros object ID to use a variable instead.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Compute the size of the tree and commit objects we're creating by
checking the size of an object ID and computing the resulting sizes
accordingly.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Compute test values of the appropriate size instead of hard-coding
40-character values. Rename the echo20 function to echoid, since the
values may be of varying sizes.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When adding a new remote with an empty string for a name, git will
print the following error message:
fatal: '' is not a valid remote name\n
But when removing a remote with an empty string as input, git shows
the empty string without quotes:
fatal: No such remote: \n
To make these error messages consistent, quote the name of the remote
that we tried and failed to find.
Signed-off-by: Shulhan <m.shulhan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Currently the 'compute_assignment()' function may read memory out
of bounds, even if used correctly. Namely this happens when we only
have one column. In that case we try to calculate the initial
minimum cost using '!j1' as the column in the reduction transfer
code. That in turn causes us to try to get the cost from column 1
in the cost matrix, which does not exist, and thus results in an
out-of-bounds memory read.
In the original paper [1], the example code initializes that minimum
cost to "infinite". We could emulate something similar by setting the
minimum cost to INT_MAX, which would result in the same minimum cost
as the current algorithm, as we'd always go into the if condition at
least once, except when we only have one column, and column_count thus
equals 1.
If column_count does equal 1, the condition in the loop would always
be false, and we'd end up with a minimum of INT_MAX, which may lead to
integer overflows later in the algorithm.
For a column count of 1, however, we do not even need to go through
the whole algorithm. A column count of 1 means that there are no
possible assignments; we can just zero out the column2row and
row2column arrays and return early from the function, while keeping
the reduction transfer part of the function the same as it is
currently.
Another solution would be to just not call 'compute_assignment()'
from the range-diff code in this case; however, it's better to make
compute_assignment() more robust, so future callers don't run into
this potential problem.
Note that the test only fails under valgrind on Linux, but the same
command has been reported to segfault on Mac OS.
[1]: Jonker, R., & Volgenant, A. (1987). A shortest augmenting path
algorithm for dense and sparse linear assignment
problems. Computing, 38(4), 325–340.
Reported-by: ryenus <ryenus@gmail.com>
Helped-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Adjust the test so that it computes variables for object IDs instead of
using hard-coded hashes.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Test t0000 tests the "basics of the basics" and as such, checks that we
have various fixed hard-coded object IDs. The tests relying on these
assertions have been marked with the SHA1 prerequisite, as they will
obviously not function in their current form with SHA-256.
Use the test_oid helper to update these assertions and provide values
for both SHA-1 and SHA-256.
These object IDs were synthesized using a set of scripts that created
the objects for both SHA-1 and SHA-256 using the same method to ensure
that they are indeed the correct values.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the hash we're using is 32 bytes in size, attempting to insert a
20-byte object name won't work. Since these are synthesized objects
that are almost all zeros, look them up in a translation table.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add several test functions to make working with various hash-related
values easier.
Add test_oid_init, which loads common hash-related constants and
placeholder object IDs from the newly added files in t/oid-info.
Provide values for these constants for both SHA-1 and SHA-256.
Add test_oid_cache, which accepts data on standard input in the form of
hash-specific key-value pairs that can be looked up later, using the
same format as the files in t/oid-info. Document this format in a
t/oid-info/README directory so that it's easier to use in the future.
Add test_oid, which is used to look up a per-hash value
(produced on standard output) based on the key specified as its
argument. Usually the data to be looked up will be a hash-related
constant (such as the size of the hash in binary or hexadecimal), a
well-known or placeholder object ID (such as the all-zeros object ID or
one consisting of "deadbeef" repeated), or something similar. For these
reasons, test_oid will usually be used within a command substitution.
Consequently, redirect the error output to standard error, since
otherwise it will not be displayed.
Add test_detect_hash, which currently only detects SHA-1, and
test_set_hash, which can be used to set a different hash algorithm for
test purposes. In the future, test_detect_hash will learn to actually
detect the hash depending on how the testsuite is to be run.
Use the local keyword within these functions to avoid overwriting other
shell variables. We have had a test balloon in place for a couple of
releases to catch shells that don't have this keyword and have not
received any reports of failure. Note that the varying usages of local
used here are supported by all common open-source shells supporting the
local keyword.
Test these new functions as part of t0000, which also serves to
demonstrate basic usage of them. In addition, add documentation on how
to format the lookup data and how to use the test functions.
Implement two basic lookup charts, one for common invalid or synthesized
object IDs, and one for various facts about the hash function in use.
Provide versions of the data for both SHA-1 and SHA-256.
Since we use shell variables for storage, names used for lookup can
currently consist only of shell identifier characters. If this is a
problem in the future, we can hash the names before use.
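A sketch of typical usage inside a test, assuming the "hexsz" and
"zero" keys provided by the t/oid-info files:
test_oid_init &&
hexsz=$(test_oid hexsz) &&
zero=$(test_oid zero)
Here $hexsz would be 40 for SHA-1 and 64 for SHA-256, and $zero an
all-zeros object ID of matching length.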
Improved-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
fetch_objects() currently does not set exact_oid in struct ref when
invoking transport_fetch_refs(). If the server supports ref-in-want,
fetch_pack() uses this field to determine whether a wanted ref should be
requested as a "want-ref" line or a "want" line; without the setting of
exact_oid, the wrong line will be sent.
Set exact_oid, so that the correct line is sent.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are fetch_object() and fetch_objects() helpers in
fetch-object.h; as the latter takes "struct oid_array",
the former cannot be made into a thin wrapper around the
latter without an extra allocation and set-up cost.
Update fetch_objects() to take an array of "struct object_id"
and number of elements in it as separate parameters, remove
fetch_object(), and adjust all existing callers of these
functions to use the new fetch_objects().
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>