Merge branch 'master' into git-po

Jiang Xin 2012-02-28 12:23:26 +08:00
Parents 0ad9e96d2e 25a7850a10
Commit 508d1244dc
201 changed files with 4949 additions and 1990 deletions

View file

@ -35,10 +35,22 @@ For shell scripts specifically (not exhaustive):
- Case arms are indented at the same depth as case and esac lines.
- Redirection operators should be written with space before, but no
space after them. In other words, write 'echo test >"$file"'
instead of 'echo test> $file' or 'echo test > $file'. Note that
even though it is not required by POSIX to double-quote the
redirection target in a variable (as shown above), our code does so
because some versions of bash issue a warning without the quotes.
- We prefer $( ... ) for command substitution; unlike ``, it
properly nests. It should have been the way Bourne spelled
it from day one, but unfortunately isn't.
- If you want to find out if a command is available on the user's
$PATH, you should use 'type <command>', instead of 'which <command>'.
The output of 'which' is not machine parseable and its exit code
is not reliable across platforms.
- We use POSIX compliant parameter substitutions and avoid bashisms;
namely:

View file

@ -8,9 +8,20 @@ UI, Workflows & Features
* Improved handling of views, labels and branches in git-p4 (in contrib).
* "git-p4" (in contrib) suffered from unnecessary merge conflicts when
p4 expanded the embedded $RCS$-like keywords; it can now be told to
unexpand them.
* Some "git-svn" updates.
* "vcs-svn"/"svn-fe" learned to read dumps with svn-deltas and
support incremental imports.
* The configuration mechanism learned an "include" facility; an
assignment to the include.path pseudo-variable causes the named
file to be included in-place when Git looks up configuration
variables.
* "git am" learned to pass "-b" option to underlying "git mailinfo", so
that bracketed string other than "PATCH" at the beginning can be kept.
@ -24,31 +35,52 @@ UI, Workflows & Features
lines are taken from the postimage, in order to make it easier to
view the output.
* "diff-highlight" filter (in contrib/) was updated to produce more
aesthetically pleasing output.
* "git merge" in an interactive session learned to spawn the editor
by default to let the user edit the auto-generated merge message,
to encourage people to explain their merges better. Legacy scripts
can export MERGE_AUTOEDIT=no to retain the historical behaviour.
can export GIT_MERGE_AUTOEDIT=no to retain the historical behavior.
Both "git merge" and "git pull" can be given --no-edit from the
command line to accept the auto-generated merge message.
* "git push" learned the "--prune" option, similar to "git fetch".
* "git tag --list" can be given "--points-at <object>" to limit its
output to those that point at the given object.
* "gitweb" allows intermediate entries in the directory hierarchy
that leads to a project to be clicked, which in turn shows the
list of projects inside that directory.
* "gitweb" learned to read various pieces of information for the
repositories lazily, instead of reading everything that could be
needed (including the ones that are not necessary for a specific
task).
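A few illustrative invocations of the features above (the branch name,
remote name, and include path are made up for the example):

    $ git config include.path ../shared.gitconfig   # resolved relative to the file that contains it
    $ git merge --no-edit side-topic                # accept the auto-generated merge message
    $ git push --prune origin                       # remove remote branches with no local counterpart
    $ git tag --list --points-at HEAD               # show only tags that point at the given object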
Performance
* During "git upload-pack" in respose to "git fetch", unnecessary calls
* During "git upload-pack" in response to "git fetch", unnecessary calls
to parse_object() have been eliminated, to help performance in
repositories with excessive number of refs.
Internal Implementation
Internal Implementation (please report possible regressions)
* Recursive call chains in "git index-pack" to deal with long delta
chains have been flattened, to reduce the stack footprint.
* Use of add_extra_ref() API is slowly getting removed, to make it
possible to cleanly restructure the overall refs API.
* Use of add_extra_ref() API is now gone, to make it possible to
cleanly restructure the overall refs API.
* The command line parser of "git pack-objects" now uses parse-options
API.
* The test suite supports the new "test_pause" helper function.
* Parallel to the test suite, there is the beginning of a performance
  benchmarking framework (a usage sketch follows this list).
* t/Makefile is adjusted to prevent newer versions of GNU make from
running tests in seemingly random order.
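A minimal sketch of driving these from the top level, assuming the
"perf" target added to the Makefile later in this commit:

    $ make test      # run the regression suite, now in a stable order under newer GNU make
    $ make perf      # build git and run the t/perf benchmarking scripts
    # while debugging a single test, the new "test_pause" helper can be
    # dropped into the script to stop it for manual inspection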
@ -62,30 +94,35 @@ Unless otherwise noted, all the fixes since v1.7.9 in the maintenance
releases are contained in this release (see release notes to them for
details).
* "add -e" learned not to show a diff for an otherwise unmodified
submodule that only has uncommitted local changes in the patch
prepared for the user to edit.
(merge 701825d js/add-e-submodule-fix later to maint).
* The bulk check-in codepath streamed contents that need
smudge/clean filters without running them, instead of punting and
delegating to the codepath to run filters after slurping everything
to core.
(merge 4f22b10 jk/maint-avoid-streaming-filtered-contents later to maint).
* "rebase" and "commit --amend" failed to work on commits with ancient
timestamps near year 1970.
(merge 2c733fb jc/parse-date-raw later to maint).
* When the filter driver exited before reading the content, while the
  main git process was still writing the contents to be filtered to
  the pipe, the latter could be killed with SIGPIPE instead of
  ignoring such an event as a non-error.
(merge 6424c2a jb/filter-ignore-sigpipe later to maint).
* "git merge --ff-only $tag" failed because it cannot record the
required mergetag without creating a merge, but this is so common
operation for branch that is used _only_ to follow the upstream, so
it is allowed to fast-forward without recording the mergetag.
(merge b5c9f1c jc/merge-ff-only-stronger-than-signed-merge later to maint).
* When a remote helper exits before reading the blank line from the
main git process to signal the end of commands, the latter could be
killed with SIGPIPE. Instead we should ignore such an event as a
non-error.
(merge c34fe63 sp/smart-http-failure-to-push later to maint).
* Typo in "git branch --edit-description my-tpoic" was not diagnosed.
(merge c2d17ba jc/branch-desc-typoavoidance later to maint).
* "git bundle create" produced a corrupt bundle file upon seeing
commits with excessively long subject line.
(merge 8a557bb tr/maint-bundle-long-subject later to maint).
* rpmbuild noticed an unpackaged but installed *.mo file and failed.
(merge 3a9f58c jn/rpm-spec later to maint).
* "gitweb" used to drop warnings in the log file when "heads" view is
accessed in a repository whose HEAD does not point at a valid
branch.
---
exec >/var/tmp/1
O=v1.7.9-208-gee8d52f
O=v1.7.9.2-301-g507fba2
echo O=$(git describe)
git log --first-parent --oneline ^maint $O..
echo

View file

@ -0,0 +1,19 @@
Git v1.7.8.5 Release Notes
==========================
Fixes since v1.7.8.4
--------------------
* Dependency on our thread-utils.h header file was missing for
objects that depend on it in the Makefile.
* "git am" when fed an empty file did not correctly finish reading it
when it attempted to guess the input format.
* "git grep -P" (when PCRE is enabled in the build) did not match the
beginning and the end of the line correctly with ^ and $.
* "git rebase -m" tried to run "git notes copy" needlessly when
nothing was rewritten.
Also contains minor fixes and documentation updates.

View file

@ -4,9 +4,24 @@ Git v1.7.9.1 Release Notes
Fixes since v1.7.9
------------------
* The makefile allowed the environment variable X to seep into it,
  resulting in command names suffixed with unnecessary strings.
* The set of included header files in compat/inet-{ntop,pton}
wrappers was updated for Windows some time ago, but in a way that
broke Solaris build.
* rpmbuild noticed an unpackaged but installed *.mo file and failed.
* Subprocesses spawned from various git programs were often left running
to completion even when the top-level process was killed.
* "git add -e" learned not to show a diff for an otherwise unmodified
submodule that only has uncommitted local changes in the patch
prepared for the user to edit.
* Typo in "git branch --edit-description my-tpoic" was not diagnosed.
* Using "git grep -l/-L" together with options -W or --break may not
make much sense as the output is to only count the number of hits
and there is no place for file breaks, but the latter options made
@ -16,14 +31,24 @@ Fixes since v1.7.9
chain and veered into side branch from which the whole change to the
specified paths came.
* "git merge --no-edit $tag" failed to honor the --no-edit option.
* "git merge --ff-only $tag" failed because it cannot record the
required mergetag without creating a merge, but this is so common
operation for branch that is used _only_ to follow the upstream, so
it was changed to allow fast-forwarding without recording the mergetag.
* "git mergetool" now gives an empty file as the common base version
to the backend when dealing with the "both sides added, differently"
case.
* "git push -q" was not sufficiently quiet.
* When "git push" fails to update any refs, the client side did not
report an error correctly to the end user.
* "git mergetool" now gives an empty file as the common base version
to the backend when dealing with the "both sides added, differently"
case.
* "rebase" and "commit --amend" failed to work on commits with ancient
timestamps near year 1970.
* When asking for a tag to be pulled, "request-pull" did not show the
name of the tag prefixed with "tags/", which would have helped older
@ -33,4 +58,6 @@ Fixes since v1.7.9
in .gitmodules when the submodule at $path was once added to the
superproject and already initialized.
* Many small corner-case bugs in "git tag -n" were corrected.
Also contains minor fixes and documentation updates.

View file

@ -0,0 +1,69 @@
Git v1.7.9.2 Release Notes
==========================
Fixes since v1.7.9.1
--------------------
* Bash completion script (in contrib/) did not like a pattern that
begins with a dash to be passed to __git_ps1 helper function.
* Adaptation of the bash completion script (in contrib/) for zsh
incorrectly listed all subcommands when "git <TAB><TAB>" was given
to ask for list of porcelain subcommands.
* The build procedure for profile-directed optimized binary was not
working very well.
* Some systems need to explicitly link -lcharset to get locale_charset().
* t5541 ignored user-supplied port number used for HTTP server testing.
* The error message emitted when we see an empty loose object was
not phrased correctly.
* The code to ask for password did not fall back to the terminal
input when GIT_ASKPASS is set but does not work (e.g. lack of X
with GUI askpass helper).
* We failed to give the true terminal width to any subcommand when
they are invoked with the pager, i.e. "git -p cmd".
* map_user() was not rewriting its output correctly, which resulted
in the user visible symptom that "git blame -e" sometimes showed
excess '>' at the end of email addresses.
* "git checkout -b" did not allow switching out of an unborn branch.
* When you have both .../foo and .../foo.git, "git clone .../foo" did not
favor the former but the latter.
* "git commit" refused to create a commit when entries added with
"add -N" remained in the index, without telling Git what their content
in the next commit should be. We should have created the commit without
these paths.
* "git diff --stat" said "files", "insertions", and "deletions" even
when it is showing one "file", one "insertion" or one "deletion".
* The output from "git diff --stat" for two paths that have the same
amount of changes showed graph bars of different length due to the
way we handled rounding errors.
* "git grep" did not pay attention to -diff (hence -binary) attribute.
* The transport programs (fetch, push, clone) ignored --no-progress
and showed progress when sending their output to a terminal.
* Sometimes error status detected by a check in an earlier phase of
"git receive-pack" (the other end of "git push") was lost by later
checks, resulting in false indication of success.
* "git rev-list --verify" sometimes skipped verification depending on
the phase of the moon, which dates back to 1.7.8.x series.
* Search box in "gitweb" did not accept non-ASCII characters correctly.
* Search interface of "gitweb" did not show multiple matches in the same file
correctly.
Also contains minor fixes and documentation updates.

View file

@ -0,0 +1,24 @@
Git v1.7.9.3 Release Notes
==========================
Fixes since v1.7.9.2
--------------------
* "git p4" (in contrib/) submit the changes to a wrong place when the
"--use-client-spec" option is set.
* The config.mak.autogen generated by optional autoconf support tried
to link the binary with -lintl even when libintl.h is missing from
the system.
* "git add --refresh <pathspec>" used to warn about unmerged paths
outside the given pathspec.
* The commit log template given with "git merge --edit" did not have
a short instructive text like what "git commit" gives.
* "gitweb" used to drop warnings in the log file when "heads" view is
accessed in a repository whose HEAD does not point at a valid
branch.
Also contains minor fixes and documentation updates.

View file

@ -84,6 +84,17 @@ customary UNIX fashion.
Some variables may require a special value format.
Includes
~~~~~~~~
You can include one config file from another by setting the special
`include.path` variable to the name of the file to be included. The
included file is expanded immediately, as if its contents had been
found at the location of the include directive. If the value of the
`include.path` variable is a relative path, the path is considered to be
relative to the configuration file in which the include directive was
found. See below for examples.
Example
~~~~~~~
@ -106,6 +117,10 @@ Example
gitProxy="ssh" for "kernel.org"
gitProxy=default-proxy ; for the rest
[include]
path = /path/to/foo.inc ; include by absolute path
path = foo ; expand "foo" relative to the current file
Variables
~~~~~~~~~

View file

@ -178,6 +178,11 @@ See also <<FILES>>.
Opens an editor to modify the specified config file; either
'--system', '--global', or repository (default).
--includes::
--no-includes::
Respect `include.*` directives in config files when looking up
values. Defaults to on.
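For example (the file and variable names are made up), a lookup against
an explicitly named file follows `include.path` only when the option is
given, because the default is derived from whether a config file was
named on the command line (see the `respect_includes` handling in
builtin/config.c further down in this commit):

    $ git config --file ./project.config --get core.editor             # [include] is not followed
    $ git config --file ./project.config --includes --get core.editor  # include.path is followed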
[[FILES]]
FILES
-----

View file

@ -53,6 +53,11 @@ OPTIONS
CONFIGURATION
-------------
merge.branchdesc::
In addition to branch names, populate the log message with
the branch description text associated with them. Defaults
to false.
merge.log::
In addition to branch names, populate the log message with at
most the specified number of one-line descriptions from the

View file

@ -303,9 +303,13 @@ CLIENT SPEC
-----------
The p4 client specification is maintained with the 'p4 client' command
and contains among other fields, a View that specifies how the depot
is mapped into the client repository. Git-p4 can consult the client
spec when given the '--use-client-spec' option or useClientSpec
variable.
is mapped into the client repository. The 'clone' and 'sync' commands
can consult the client spec when given the '--use-client-spec' option or
when the useClientSpec variable is true. After 'git p4 clone', the
useClientSpec variable is automatically set in the repository
configuration file. This allows future 'git p4 submit' commands to
work properly; the submit command looks only at the variable and does
not have a command-line option.
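A sketch of that workflow (the depot path and resulting directory name
are hypothetical):

    $ git p4 clone --use-client-spec //depot/project@all
    $ cd project
    $ git config git-p4.useClientSpec    # set automatically by the clone
    true
    $ git p4 submit                      # consults the variable; there is no command-line option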
The full syntax for a p4 view is documented in 'p4 help views'. Git-p4
knows only a subset of the view syntax. It understands multi-line
@ -483,6 +487,11 @@ git-p4.skipUserNameCheck::
user map, 'git p4' exits. This option can be used to force
submission regardless.
git-p4.attemptRCSCleanup::
If enabled, 'git p4 submit' will attempt to clean up RCS keywords
($Header$, etc). These would otherwise cause merge conflicts and prevent
the submit going ahead. This option should be considered experimental at
present.
IMPLEMENTATION DETAILS
----------------------

View file

@ -10,7 +10,7 @@ SYNOPSIS
--------
[verse]
'git push' [--all | --mirror | --tags] [-n | --dry-run] [--receive-pack=<git-receive-pack>]
[--repo=<repository>] [-f | --force] [-v | --verbose] [-u | --set-upstream]
[--repo=<repository>] [-f | --force] [--prune] [-v | --verbose] [-u | --set-upstream]
[<repository> [<refspec>...]]
DESCRIPTION
@ -71,6 +71,14 @@ nor in any Push line of the corresponding remotes file---see below).
Instead of naming each ref to push, specifies that all
refs under `refs/heads/` be pushed.
--prune::
Remove remote branches that don't have a local counterpart. For example
a remote branch `tmp` will be removed if a local branch with the same
name doesn't exist any more. This also respects refspecs, e.g.
`git push --prune remote refs/heads/{asterisk}:refs/tmp/{asterisk}` would
make sure that remote `refs/tmp/foo` will be removed if `refs/heads/foo`
doesn't exist.
--mirror::
Instead of naming each ref to push, specifies that all
refs under `refs/` (which includes but is not

View file

@ -14,7 +14,7 @@ SYNOPSIS
'git remote rename' <old> <new>
'git remote rm' <name>
'git remote set-head' <name> (-a | -d | <branch>)
'git remote set-branches' <name> [--add] <branch>...
'git remote set-branches' [--add] <name> <branch>...
'git remote set-url' [--push] <name> <newurl> [<oldurl>]
'git remote set-url --add' [--push] <name> <newurl>
'git remote set-url --delete' [--push] <name> <url>
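With the corrected ordering above, `--add` precedes the remote name;
for example (remote and branch names are illustrative):

    $ git remote set-branches origin maint master   # track only these branches of "origin"
    $ git remote set-branches --add origin next     # later, add one more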

View file

@ -198,6 +198,10 @@ must be used for each option.
if a username is not specified (with '--smtp-user' or 'sendemail.smtpuser'),
then authentication is not attempted.
--smtp-debug=0|1::
Enable (1) or disable (0) debug output. If enabled, SMTP
commands and replies will be printed. Useful to debug TLS
connection and authentication problems.
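For example (the address and patch file are placeholders), to see the
full SMTP dialogue while sending a patch:

    $ git send-email --smtp-debug=1 --to=maintainer@example.org 0001-some-fix.patch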
Automating
~~~~~~~~~~

View file

@ -12,7 +12,8 @@ SYNOPSIS
'git tag' [-a | -s | -u <key-id>] [-f] [-m <msg> | -F <file>]
<tagname> [<commit> | <object>]
'git tag' -d <tagname>...
'git tag' [-n[<num>]] -l [--contains <commit>] [<pattern>...]
'git tag' [-n[<num>]] -l [--contains <commit>] [--points-at <object>]
[<pattern>...]
'git tag' -v <tagname>...
DESCRIPTION
@ -86,6 +87,9 @@ OPTIONS
--contains <commit>::
Only list tags which contain the specified commit.
--points-at <object>::
Only list tags of the given object.
-m <msg>::
--message=<msg>::
Use the given tag message (instead of prompting).

View file

@ -9,11 +9,11 @@ git - the stupid content tracker
SYNOPSIS
--------
[verse]
'git' [--version] [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
'git' [--version] [--help] [-c <name>=<value>]
[--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
[-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
[--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
[-c <name>=<value>]
[--help] <command> [<args>]
<command> [<args>]
DESCRIPTION
-----------
@ -44,9 +44,11 @@ unreleased) version of git, that is available from 'master'
branch of the `git.git` repository.
Documentation for older releases is available here:
* link:v1.7.9/git.html[documentation for release 1.7.9]
* link:v1.7.9.2/git.html[documentation for release 1.7.9.2]
* release notes for
link:RelNotes/1.7.9.2.txt[1.7.9.2],
link:RelNotes/1.7.9.1.txt[1.7.9.1],
link:RelNotes/1.7.9.txt[1.7.9].
* link:v1.7.8.4/git.html[documentation for release 1.7.8.4]

View file

@ -1004,7 +1004,7 @@ Updating from ae3a2da... to a80b4aa....
Fast-forward (no commit created; -m option ignored)
example | 1 +
hello | 1 +
2 files changed, 2 insertions(+), 0 deletions(-)
2 files changed, 2 insertions(+)
----------------
Because your branch did not contain anything more than what had

View file

@ -34,12 +34,12 @@ $ echo 'hello world' > file.txt
$ git add .
$ git commit -a -m "initial commit"
[master (root-commit) 54196cc] initial commit
1 files changed, 1 insertions(+), 0 deletions(-)
1 file changed, 1 insertion(+)
create mode 100644 file.txt
$ echo 'hello world!' >file.txt
$ git commit -a -m "add emphasis"
[master c4d59f3] add emphasis
1 files changed, 1 insertions(+), 1 deletions(-)
1 file changed, 1 insertion(+), 1 deletion(-)
------------------------------------------------
What are the 7 digits of hex that git responded to the commit with?

View file

@ -24,13 +24,18 @@ updated behaviour, the environment variable `GIT_MERGE_AUTOEDIT` can be
set to `no` at the beginning of them.
--ff::
When the merge resolves as a fast-forward, only update the branch
pointer, without creating a merge commit. This is the default
behavior.
--no-ff::
Do not generate a merge commit if the merge resolved as
a fast-forward, only update the branch pointer. This is
the default behavior of git-merge.
+
With --no-ff Generate a merge commit even if the merge
resolved as a fast-forward.
Create a merge commit even when the merge resolves as a
fast-forward.
--ff-only::
Refuse to merge and exit with a non-zero status unless the
current `HEAD` is already up-to-date or the merge can be
resolved as a fast-forward.
--log[=<n>]::
--no-log::
@ -65,11 +70,6 @@ merge.
With --no-squash perform the merge and commit the result. This
option can be used to override --squash.
--ff-only::
Refuse to merge and exit with a non-zero status unless the
current `HEAD` is already up-to-date or the merge can be
resolved as a fast-forward.
-s <strategy>::
--strategy=<strategy>::
Use the given merge strategy; can be supplied more than

View file

@ -0,0 +1,140 @@
config API
==========
The config API gives callers a way to access git configuration files
(and files which have the same syntax). See linkgit:git-config[1] for a
discussion of the config file syntax.
General Usage
-------------
Config files are parsed linearly, and each variable found is passed to a
caller-provided callback function. The callback function is responsible
for any actions to be taken on the config option, and is free to ignore
some options. It is not uncommon for the configuration to be parsed
several times during the run of a git program, with different callbacks
picking out different variables useful to themselves.
A config callback function takes three parameters:
- the name of the parsed variable. This is in canonical "flat" form: the
section, subsection, and variable segments will be separated by dots,
and the section and variable segments will be all lowercase. E.g.,
`core.ignorecase`, `diff.SomeType.textconv`.
- the value of the found variable, as a string. If the variable had no
value specified, the value will be NULL (typically this means it
should be interpreted as boolean true).
- a void pointer passed in by the caller of the config API; this can
contain callback-specific data
A config callback should return 0 for success, or -1 if the variable
could not be parsed properly.
Basic Config Querying
---------------------
Most programs will simply want to look up variables in all config files
that git knows about, using the normal precedence rules. To do this,
call `git_config` with a callback function and void data pointer.
`git_config` will read all config sources in order of increasing
priority. Thus a callback should typically overwrite previously-seen
entries with new ones (e.g., if both the user-wide `~/.gitconfig` and
repo-specific `.git/config` contain `color.ui`, the config machinery
will first feed the user-wide one to the callback, and then the
repo-specific one; by overwriting, the higher-priority repo-specific
value is left at the end).
The `git_config_with_options` function lets the caller examine config
while adjusting some of the default behavior of `git_config`. It should
almost never be used by "regular" git code that is looking up
configuration variables. It is intended for advanced callers like
`git-config`, which are intentionally tweaking the normal config-lookup
process. It takes two extra parameters:
`filename`::
If this parameter is non-NULL, it specifies the name of a file to
parse for configuration, rather than looking in the usual files. Regular
`git_config` defaults to `NULL`.
`respect_includes`::
Specify whether include directives should be followed in parsed files.
Regular `git_config` defaults to `1`.
There is a special version of `git_config` called `git_config_early`.
This version takes an additional parameter to specify the repository
config, instead of having it looked up via `git_path`. This is useful
early in a git program before the repository has been found. Unless
you're working with early setup code, you probably don't want to use
this.
Reading Specific Files
----------------------
To read a specific file in git-config format, use
`git_config_from_file`. This takes the same callback and data parameters
as `git_config`.
Value Parsing Helpers
---------------------
To aid in parsing string values, the config API provides callbacks with
a number of helper functions, including:
`git_config_int`::
Parse the string to an integer, including unit factors. Dies on error;
otherwise, returns the parsed result.
`git_config_ulong`::
Identical to `git_config_int`, but for unsigned longs.
`git_config_bool`::
Parse a string into a boolean value, respecting keywords like "true" and
"false". Integer values are converted into true/false values (when they
are non-zero or zero, respectively). Other values cause a die(). If
parsing is successful, the return value is the result.
`git_config_bool_or_int`::
Same as `git_config_bool`, except that integers are returned as-is, and
an `is_bool` flag is unset.
`git_config_maybe_bool`::
Same as `git_config_bool`, except that it returns -1 on error rather
than dying.
`git_config_string`::
Allocates and copies the value string into the `dest` parameter; if no
string is given, prints an error message and returns -1.
`git_config_pathname`::
Similar to `git_config_string`, but expands `~` or `~user` into the
user's home directory when found at the beginning of the path.
Include Directives
------------------
By default, the config parser does not respect include directives.
However, a caller can use the special `git_config_include` wrapper
callback to support them. To do so, you simply wrap your "real" callback
function and data pointer in a `struct config_include_data`, and pass
the wrapper to the regular config-reading functions. For example:
-------------------------------------------
int read_file_with_include(const char *file, config_fn_t fn, void *data)
{
struct config_include_data inc = CONFIG_INCLUDE_INIT;
inc.fn = fn;
inc.data = data;
return git_config_from_file(git_config_include, file, &inc);
}
-------------------------------------------
`git_config` respects includes automatically. The lower-level
`git_config_from_file` does not.
Writing Config Files
--------------------
TODO

View file

@ -255,8 +255,24 @@ same behaviour as well.
`strbuf_getline`::
Read a line from a FILE* pointer. The second argument specifies the line
Read a line from a FILE *, overwriting the existing contents
of the strbuf. The second argument specifies the line
terminator character, typically `'\n'`.
Reading stops after the terminator or at EOF. The terminator
is removed from the buffer before returning. Returns 0 unless
there was nothing left before EOF, in which case it returns `EOF`.
`strbuf_getwholeline`::
Like `strbuf_getline`, but keeps the trailing terminator (if
any) in the buffer.
`strbuf_getwholeline_fd`::
Like `strbuf_getwholeline`, but operates on a file descriptor.
It reads one character at a time, so it is very slow. Do not
use it unless you need the correct position in the file
descriptor.
`stripspace`::

17
INSTALL
View file

@ -28,16 +28,25 @@ set up install paths (via config.mak.autogen), so you can write instead
If you're willing to trade off (much) longer build time for a later
faster git you can also do a profile feedback build with
$ make profile-all
# make prefix=... install
$ make prefix=/usr PROFILE=BUILD all
# make prefix=/usr PROFILE=BUILD install
This will run the complete test suite as training workload and then
rebuild git with the generated profile feedback. This results in a git
which is a few percent faster on CPU intensive workloads. This
may be a good tradeoff for distribution packagers.
Note that the profile feedback build stage currently generates
a lot of additional compiler warnings.
Or if you just want to install a profile-optimized version of git into
your home directory, you could run:
$ make PROFILE=BUILD install
As a caveat: a profile-optimized build takes a *lot* longer since the
git tree must be built twice, and in order for the profiling
measurements to work properly, ccache must be disabled and the test
suite has to be run using only a single CPU. In addition, the profile
feedback build stage currently generates a lot of additional compiler
warnings.
Issues of note:

View file

@ -56,6 +56,10 @@ all::
# FreeBSD can use either, but MinGW and some others need to use
# libcharset.h's locale_charset() instead.
#
# Define CHARSET_LIB if you need to link with a library other than -liconv to
# use the locale_charset() function. On some platforms this needs to be set to
# -lcharset
#
# Define LIBC_CONTAINS_LIBINTL if your gettext implementation doesn't
# need -lintl when linking.
#
@ -342,7 +346,7 @@ pathsep = :
export prefix bindir sharedir sysconfdir gitwebdir localedir
CC = gcc
CC = cc
AR = ar
RM = rm -f
DIFF = diff
@ -460,6 +464,9 @@ PROGRAM_OBJS += http-backend.o
PROGRAM_OBJS += sh-i18n--envsubst.o
PROGRAM_OBJS += credential-store.o
# Binary suffix, set to .exe for Windows builds
X =
PROGRAMS += $(patsubst %.o,git-%$X,$(PROGRAM_OBJS))
TEST_PROGRAMS_NEED_X += test-chmtime
@ -613,6 +620,7 @@ LIB_H += streaming.h
LIB_H += string-list.h
LIB_H += submodule.h
LIB_H += tag.h
LIB_H += thread-utils.h
LIB_H += transport.h
LIB_H += tree.h
LIB_H += tree-walk.h
@ -1698,6 +1706,7 @@ endif
ifdef HAVE_LIBCHARSET_H
BASIC_CFLAGS += -DHAVE_LIBCHARSET_H
EXTLIBS += $(CHARSET_LIB)
endif
ifdef HAVE_DEV_TTY
@ -1774,6 +1783,26 @@ ifdef ASCIIDOC7
export ASCIIDOC7
endif
### profile feedback build
#
# Can adjust this to be a global directory if you want to do extended
# data gathering
PROFILE_DIR := $(CURDIR)
ifeq ("$(PROFILE)","GEN")
CFLAGS += -fprofile-generate=$(PROFILE_DIR) -DNO_NORETURN=1
EXTLIBS += -lgcov
export CCACHE_DISABLE=t
V=1
else
ifneq ("$(PROFILE)","")
CFLAGS += -fprofile-use=$(PROFILE_DIR) -fprofile-correction -DNO_NORETURN=1
export CCACHE_DISABLE=t
V=1
endif
endif
# Shell quote (do not use $(call) to accommodate ancient setups);
SHA1_HEADER_SQ = $(subst ','\'',$(SHA1_HEADER))
@ -1830,7 +1859,17 @@ export DIFF TAR INSTALL DESTDIR SHELL_PATH
SHELL = $(SHELL_PATH)
all:: shell_compatibility_test $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS) GIT-BUILD-OPTIONS
all:: shell_compatibility_test
ifeq "$(PROFILE)" "BUILD"
ifeq ($(filter all,$(MAKECMDGOALS)),all)
all:: profile-clean
$(MAKE) PROFILE=GEN all
$(MAKE) PROFILE=GEN -j1 test
endif
endif
all:: $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS) GIT-BUILD-OPTIONS
ifneq (,$X)
$(QUIET_BUILT_IN)$(foreach p,$(patsubst %$X,%,$(filter %$X,$(ALL_PROGRAMS) $(BUILT_INS) git$X)), test -d '$p' -o '$p' -ef '$p$X' || $(RM) '$p';)
endif
@ -2323,6 +2362,10 @@ GIT-BUILD-OPTIONS: FORCE
@echo USE_LIBPCRE=\''$(subst ','\'',$(subst ','\'',$(USE_LIBPCRE)))'\' >>$@
@echo NO_PERL=\''$(subst ','\'',$(subst ','\'',$(NO_PERL)))'\' >>$@
@echo NO_PYTHON=\''$(subst ','\'',$(subst ','\'',$(NO_PYTHON)))'\' >>$@
@echo NO_UNIX_SOCKETS=\''$(subst ','\'',$(subst ','\'',$(NO_UNIX_SOCKETS)))'\' >>$@
ifdef GIT_TEST_OPTS
@echo GIT_TEST_OPTS=\''$(subst ','\'',$(subst ','\'',$(GIT_TEST_OPTS)))'\' >>$@
endif
ifdef GIT_TEST_CMP
@echo GIT_TEST_CMP=\''$(subst ','\'',$(subst ','\'',$(GIT_TEST_CMP)))'\' >>$@
endif
@ -2331,7 +2374,18 @@ ifdef GIT_TEST_CMP_USE_COPIED_CONTEXT
endif
@echo NO_GETTEXT=\''$(subst ','\'',$(subst ','\'',$(NO_GETTEXT)))'\' >>$@
@echo GETTEXT_POISON=\''$(subst ','\'',$(subst ','\'',$(GETTEXT_POISON)))'\' >>$@
@echo NO_UNIX_SOCKETS=\''$(subst ','\'',$(subst ','\'',$(NO_UNIX_SOCKETS)))'\' >>$@
ifdef GIT_PERF_REPEAT_COUNT
@echo GIT_PERF_REPEAT_COUNT=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_REPEAT_COUNT)))'\' >>$@
endif
ifdef GIT_PERF_REPO
@echo GIT_PERF_REPO=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_REPO)))'\' >>$@
endif
ifdef GIT_PERF_LARGE_REPO
@echo GIT_PERF_LARGE_REPO=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_LARGE_REPO)))'\' >>$@
endif
ifdef GIT_PERF_MAKE_OPTS
@echo GIT_PERF_MAKE_OPTS=\''$(subst ','\'',$(subst ','\'',$(GIT_PERF_MAKE_OPTS)))'\' >>$@
endif
### Detect Tcl/Tk interpreter path changes
ifndef NO_TCLTK
@ -2367,6 +2421,11 @@ export NO_SVN_TESTS
test: all
$(MAKE) -C t/ all
perf: all
$(MAKE) -C t/perf/ all
.PHONY: test perf
test-ctype$X: ctype.o
test-date$X: date.o ctype.o
@ -2577,7 +2636,11 @@ dist-doc:
distclean: clean
$(RM) configure
clean:
profile-clean:
$(RM) $(addsuffix *.gcda,$(addprefix $(PROFILE_DIR)/, $(object_dirs)))
$(RM) $(addsuffix *.gcno,$(addprefix $(PROFILE_DIR)/, $(object_dirs)))
clean: profile-clean
$(RM) *.o block-sha1/*.o ppc/*.o compat/*.o compat/*/*.o xdiff/*.o vcs-svn/*.o \
builtin/*.o $(LIB_FILE) $(XDIFF_LIB) $(VCSSVN_LIB)
$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) git$X
@ -2607,7 +2670,7 @@ ifndef NO_TCLTK
endif
$(RM) GIT-VERSION-FILE GIT-CFLAGS GIT-LDFLAGS GIT-GUI-VARS GIT-BUILD-OPTIONS
.PHONY: all install clean strip
.PHONY: all install profile-clean clean strip
.PHONY: shell_compatibility_test please_set_SHELL_PATH_to_a_more_modern_shell
.PHONY: FORCE cscope
@ -2717,18 +2780,3 @@ cover_db: coverage-report
cover_db_html: cover_db
cover -report html -outputdir cover_db_html cover_db
### profile feedback build
#
.PHONY: profile-all profile-clean
PROFILE_GEN_CFLAGS := $(CFLAGS) -fprofile-generate -DNO_NORETURN=1
PROFILE_USE_CFLAGS := $(CFLAGS) -fprofile-use -fprofile-correction -DNO_NORETURN=1
profile-clean:
$(RM) $(addsuffix *.gcda,$(object_dirs))
$(RM) $(addsuffix *.gcno,$(object_dirs))
profile-all: profile-clean
$(MAKE) CFLAGS="$(PROFILE_GEN_CFLAGS)" all
$(MAKE) CFLAGS="$(PROFILE_GEN_CFLAGS)" -j1 test
$(MAKE) CFLAGS="$(PROFILE_USE_CFLAGS)" all

10
README
View file

@ -42,10 +42,12 @@ including full documentation and Git related tools.
The user discussion and development of Git take place on the Git
mailing list -- everyone is welcome to post bug reports, feature
requests, comments and patches to git@vger.kernel.org. To subscribe
to the list, send an email with just "subscribe git" in the body to
majordomo@vger.kernel.org. The mailing list archives are available at
http://marc.theaimsgroup.com/?l=git and other archival sites.
requests, comments and patches to git@vger.kernel.org (read
Documentation/SubmittingPatches for instructions on patch submission).
To subscribe to the list, send an email with just "subscribe git" in
the body to majordomo@vger.kernel.org. The mailing list archives are
available at http://marc.theaimsgroup.com/?l=git and other archival
sites.
The messages titled "A note from the maintainer", "What's in
git.git (stable)" and "What's cooking in git.git (topics)" and

View file

@ -14,6 +14,7 @@
#include "builtin.h"
#include "string-list.h"
#include "dir.h"
#include "diff.h"
#include "parse-options.h"
/*
@ -3241,7 +3242,7 @@ static void stat_patch_list(struct patch *patch)
show_stats(patch);
}
printf(" %d files changed, %d insertions(+), %d deletions(-)\n", files, adds, dels);
print_stat_summary(stdout, files, adds, dels);
}
static void numstat_patch_list(struct patch *patch)

View file

@ -1828,18 +1828,6 @@ static int read_ancestry(const char *graft_file)
return 0;
}
/*
* How many columns do we need to show line numbers in decimal?
*/
static int lineno_width(int lines)
{
int i, width;
for (width = 1, i = 10; i <= lines; width++)
i *= 10;
return width;
}
/*
* How many columns do we need to show line numbers, authors,
* and filenames?
@ -1880,9 +1868,9 @@ static void find_alignment(struct scoreboard *sb, int *option)
if (largest_score < ent_score(sb, e))
largest_score = ent_score(sb, e);
}
max_orig_digits = lineno_width(longest_src_lines);
max_digits = lineno_width(longest_dst_lines);
max_score_digits = lineno_width(largest_score);
max_orig_digits = decimal_width(longest_src_lines);
max_digits = decimal_width(longest_dst_lines);
max_score_digits = decimal_width(largest_score);
}
/*
@ -2050,14 +2038,8 @@ static int git_blame_config(const char *var, const char *value, void *cb)
return 0;
}
switch (userdiff_config(var, value)) {
case 0:
break;
case -1:
if (userdiff_config(var, value) < 0)
return -1;
default:
return 0;
}
return git_default_config(var, value, cb);
}

View file

@ -226,14 +226,8 @@ static const char * const cat_file_usage[] = {
static int git_cat_file_config(const char *var, const char *value, void *cb)
{
switch (userdiff_config(var, value)) {
case 0:
break;
case -1:
if (userdiff_config(var, value) < 0)
return -1;
default:
return 0;
}
return git_default_config(var, value, cb);
}

View file

@ -908,6 +908,17 @@ static int parse_branchname_arg(int argc, const char **argv,
return argcount;
}
static int switch_unborn_to_new_branch(struct checkout_opts *opts)
{
int status;
struct strbuf branch_ref = STRBUF_INIT;
strbuf_addf(&branch_ref, "refs/heads/%s", opts->new_branch);
status = create_symref("HEAD", branch_ref.buf, "checkout -b");
strbuf_release(&branch_ref);
return status;
}
int cmd_checkout(int argc, const char **argv, const char *prefix)
{
struct checkout_opts opts;
@ -1079,5 +1090,13 @@ int cmd_checkout(int argc, const char **argv, const char *prefix)
if (opts.writeout_stage)
die(_("--ours/--theirs is incompatible with switching branches."));
if (!new.commit) {
unsigned char rev[20];
int flag;
if (!read_ref_full("HEAD", rev, 0, &flag) &&
(flag & REF_ISSYMREF) && is_null_sha1(rev))
return switch_unborn_to_new_branch(&opts);
}
return switch_branches(&opts, &new);
}

View file

@ -45,7 +45,7 @@ static char *option_branch = NULL;
static const char *real_git_dir;
static char *option_upload_pack = "git-upload-pack";
static int option_verbosity;
static int option_progress;
static int option_progress = -1;
static struct string_list option_config;
static struct string_list option_reference;
@ -60,8 +60,8 @@ static int opt_parse_reference(const struct option *opt, const char *arg, int un
static struct option builtin_clone_options[] = {
OPT__VERBOSITY(&option_verbosity),
OPT_BOOLEAN(0, "progress", &option_progress,
"force progress reporting"),
OPT_BOOL(0, "progress", &option_progress,
"force progress reporting"),
OPT_BOOLEAN('n', "no-checkout", &option_no_checkout,
"don't create a checkout"),
OPT_BOOLEAN(0, "bare", &option_bare, "create a bare repository"),
@ -107,7 +107,7 @@ static const char *argv_submodule[] = {
static char *get_repo_path(const char *repo, int *is_bundle)
{
static char *suffix[] = { "/.git", ".git", "" };
static char *suffix[] = { "/.git", "", ".git/.git", ".git" };
static char *bundle_suffix[] = { ".bundle", "" };
struct stat st;
int i;
@ -117,7 +117,7 @@ static char *get_repo_path(const char *repo, int *is_bundle)
path = mkpath("%s%s", repo, suffix[i]);
if (stat(path, &st))
continue;
if (S_ISDIR(st.st_mode)) {
if (S_ISDIR(st.st_mode) && is_git_directory(path)) {
*is_bundle = 0;
return xstrdup(absolute_path(path));
} else if (S_ISREG(st.st_mode) && st.st_size > 8) {
@ -232,9 +232,6 @@ static int add_one_reference(struct string_list_item *item, void *cb_data)
{
char *ref_git;
struct strbuf alternate = STRBUF_INIT;
struct remote *remote;
struct transport *transport;
const struct ref *extra;
/* Beware: real_path() and mkpath() return static buffer */
ref_git = xstrdup(real_path(item->string));
@ -249,14 +246,6 @@ static int add_one_reference(struct string_list_item *item, void *cb_data)
strbuf_addf(&alternate, "%s/objects", ref_git);
add_to_alternates_file(alternate.buf);
strbuf_release(&alternate);
remote = remote_get(ref_git);
transport = transport_get(remote, ref_git);
for (extra = transport_get_remote_refs(transport); extra;
extra = extra->next)
add_extra_ref(extra->name, extra->old_sha1, 0);
transport_disconnect(transport);
free(ref_git);
return 0;
}
@ -500,7 +489,6 @@ static void update_remote_refs(const struct ref *refs,
const char *msg)
{
if (refs) {
clear_extra_refs();
write_remote_refs(mapped_refs);
if (option_single_branch)
write_followtags(refs, msg);
@ -813,28 +801,28 @@ int cmd_clone(int argc, const char **argv, const char *prefix)
}
refs = transport_get_remote_refs(transport);
mapped_refs = refs ? wanted_peer_refs(refs, refspec) : NULL;
/*
* transport_get_remote_refs() may return refs with null sha-1
* in mapped_refs (see struct transport->get_refs_list
* comment). In that case we need fetch it early because
* remote_head code below relies on it.
*
* for normal clones, transport_get_remote_refs() should
* return reliable ref set, we can delay cloning until after
* remote HEAD check.
*/
for (ref = refs; ref; ref = ref->next)
if (is_null_sha1(ref->old_sha1)) {
complete_refs_before_fetch = 0;
break;
}
if (!is_local && !complete_refs_before_fetch && refs)
transport_fetch_refs(transport, mapped_refs);
if (refs) {
mapped_refs = wanted_peer_refs(refs, refspec);
/*
* transport_get_remote_refs() may return refs with null sha-1
* in mapped_refs (see struct transport->get_refs_list
* comment). In that case we need fetch it early because
* remote_head code below relies on it.
*
* for normal clones, transport_get_remote_refs() should
* return reliable ref set, we can delay cloning until after
* remote HEAD check.
*/
for (ref = refs; ref; ref = ref->next)
if (is_null_sha1(ref->old_sha1)) {
complete_refs_before_fetch = 0;
break;
}
if (!is_local && !complete_refs_before_fetch)
transport_fetch_refs(transport, mapped_refs);
remote_head = find_ref_by_name(refs, "HEAD");
remote_head_points_at =
guess_remote_head(remote_head, mapped_refs, 0);
@ -852,6 +840,7 @@ int cmd_clone(int argc, const char **argv, const char *prefix)
}
else {
warning(_("You appear to have cloned an empty repository."));
mapped_refs = NULL;
our_head_points_at = NULL;
remote_head_points_at = NULL;
remote_head = NULL;

View file

@ -400,7 +400,7 @@ static char *prepare_index(int argc, const char **argv, const char *prefix,
fd = hold_locked_index(&index_lock, 1);
add_files_to_cache(also ? prefix : NULL, pathspec, 0);
refresh_cache_or_die(refresh_flags);
update_main_cache_tree(1);
update_main_cache_tree(WRITE_TREE_SILENT);
if (write_cache(fd, active_cache, active_nr) ||
close_lock_file(&index_lock))
die(_("unable to write new_index file"));
@ -421,7 +421,7 @@ static char *prepare_index(int argc, const char **argv, const char *prefix,
fd = hold_locked_index(&index_lock, 1);
refresh_cache_or_die(refresh_flags);
if (active_cache_changed) {
update_main_cache_tree(1);
update_main_cache_tree(WRITE_TREE_SILENT);
if (write_cache(fd, active_cache, active_nr) ||
commit_locked_index(&index_lock))
die(_("unable to write new_index file"));

View file

@ -25,6 +25,7 @@ static const char *given_config_file;
static int actions, types;
static const char *get_color_slot, *get_colorbool_slot;
static int end_null;
static int respect_includes = -1;
#define ACTION_GET (1<<0)
#define ACTION_GET_ALL (1<<1)
@ -74,6 +75,7 @@ static struct option builtin_config_options[] = {
OPT_BIT(0, "path", &types, "value is a path (file or directory name)", TYPE_PATH),
OPT_GROUP("Other"),
OPT_BOOLEAN('z', "null", &end_null, "terminate values with NUL byte"),
OPT_BOOL(0, "includes", &respect_includes, "respect include directives on lookup"),
OPT_END(),
};
@ -161,8 +163,11 @@ static int get_value(const char *key_, const char *regex_)
int ret = -1;
char *global = NULL, *repo_config = NULL;
const char *system_wide = NULL, *local;
struct config_include_data inc = CONFIG_INCLUDE_INIT;
config_fn_t fn;
void *data;
local = config_exclusive_filename;
local = given_config_file;
if (!local) {
const char *home = getenv("HOME");
local = repo_config = git_pathdup("config");
@ -213,19 +218,28 @@ static int get_value(const char *key_, const char *regex_)
}
}
fn = show_config;
data = NULL;
if (respect_includes) {
inc.fn = fn;
inc.data = data;
fn = git_config_include;
data = &inc;
}
if (do_all && system_wide)
git_config_from_file(show_config, system_wide, NULL);
git_config_from_file(fn, system_wide, data);
if (do_all && global)
git_config_from_file(show_config, global, NULL);
git_config_from_file(fn, global, data);
if (do_all)
git_config_from_file(show_config, local, NULL);
git_config_from_parameters(show_config, NULL);
git_config_from_file(fn, local, data);
git_config_from_parameters(fn, data);
if (!do_all && !seen)
git_config_from_file(show_config, local, NULL);
git_config_from_file(fn, local, data);
if (!do_all && !seen && global)
git_config_from_file(show_config, global, NULL);
git_config_from_file(fn, global, data);
if (!do_all && !seen && system_wide)
git_config_from_file(show_config, system_wide, NULL);
git_config_from_file(fn, system_wide, data);
free(key);
if (regexp) {
@ -301,7 +315,8 @@ static void get_color(const char *def_color)
{
get_color_found = 0;
parsed_color[0] = '\0';
git_config(git_get_color_config, NULL);
git_config_with_options(git_get_color_config, NULL,
given_config_file, respect_includes);
if (!get_color_found && def_color)
color_parse(def_color, "command line", parsed_color);
@ -328,7 +343,8 @@ static int get_colorbool(int print)
{
get_colorbool_found = -1;
get_diff_color_found = -1;
git_config(git_get_colorbool_config, NULL);
git_config_with_options(git_get_colorbool_config, NULL,
given_config_file, respect_includes);
if (get_colorbool_found < 0) {
if (!strcmp(get_colorbool_slot, "color.diff"))
@ -351,7 +367,7 @@ int cmd_config(int argc, const char **argv, const char *prefix)
int nongit = !startup_info->have_repository;
char *value;
config_exclusive_filename = getenv(CONFIG_ENVIRONMENT);
given_config_file = getenv(CONFIG_ENVIRONMENT);
argc = parse_options(argc, argv, prefix, builtin_config_options,
builtin_config_usage,
@ -366,24 +382,28 @@ int cmd_config(int argc, const char **argv, const char *prefix)
char *home = getenv("HOME");
if (home) {
char *user_config = xstrdup(mkpath("%s/.gitconfig", home));
config_exclusive_filename = user_config;
given_config_file = user_config;
} else {
die("$HOME not set");
}
}
else if (use_system_config)
config_exclusive_filename = git_etc_gitconfig();
given_config_file = git_etc_gitconfig();
else if (use_local_config)
config_exclusive_filename = git_pathdup("config");
given_config_file = git_pathdup("config");
else if (given_config_file) {
if (!is_absolute_path(given_config_file) && prefix)
config_exclusive_filename = prefix_filename(prefix,
strlen(prefix),
given_config_file);
given_config_file =
xstrdup(prefix_filename(prefix,
strlen(prefix),
given_config_file));
else
config_exclusive_filename = given_config_file;
given_config_file = given_config_file;
}
if (respect_includes == -1)
respect_includes = !given_config_file;
if (end_null) {
term = '\0';
delim = '\n';
@ -420,28 +440,30 @@ int cmd_config(int argc, const char **argv, const char *prefix)
if (actions == ACTION_LIST) {
check_argc(argc, 0, 0);
if (git_config(show_all_config, NULL) < 0) {
if (config_exclusive_filename)
if (git_config_with_options(show_all_config, NULL,
given_config_file,
respect_includes) < 0) {
if (given_config_file)
die_errno("unable to read config file '%s'",
config_exclusive_filename);
given_config_file);
else
die("error processing config file(s)");
}
}
else if (actions == ACTION_EDIT) {
check_argc(argc, 0, 0);
if (!config_exclusive_filename && nongit)
if (!given_config_file && nongit)
die("not in a git directory");
git_config(git_default_config, NULL);
launch_editor(config_exclusive_filename ?
config_exclusive_filename : git_path("config"),
launch_editor(given_config_file ?
given_config_file : git_path("config"),
NULL, NULL);
}
else if (actions == ACTION_SET) {
int ret;
check_argc(argc, 2, 2);
value = normalize_value(argv[0], argv[1]);
ret = git_config_set(argv[0], value);
ret = git_config_set_in_file(given_config_file, argv[0], value);
if (ret == CONFIG_NOTHING_SET)
error("cannot overwrite multiple values with a single value\n"
" Use a regexp, --add or --replace-all to change %s.", argv[0]);
@ -450,17 +472,20 @@ int cmd_config(int argc, const char **argv, const char *prefix)
else if (actions == ACTION_SET_ALL) {
check_argc(argc, 2, 3);
value = normalize_value(argv[0], argv[1]);
return git_config_set_multivar(argv[0], value, argv[2], 0);
return git_config_set_multivar_in_file(given_config_file,
argv[0], value, argv[2], 0);
}
else if (actions == ACTION_ADD) {
check_argc(argc, 2, 2);
value = normalize_value(argv[0], argv[1]);
return git_config_set_multivar(argv[0], value, "^$", 0);
return git_config_set_multivar_in_file(given_config_file,
argv[0], value, "^$", 0);
}
else if (actions == ACTION_REPLACE_ALL) {
check_argc(argc, 2, 3);
value = normalize_value(argv[0], argv[1]);
return git_config_set_multivar(argv[0], value, argv[2], 1);
return git_config_set_multivar_in_file(given_config_file,
argv[0], value, argv[2], 1);
}
else if (actions == ACTION_GET) {
check_argc(argc, 1, 2);
@ -481,18 +506,22 @@ int cmd_config(int argc, const char **argv, const char *prefix)
else if (actions == ACTION_UNSET) {
check_argc(argc, 1, 2);
if (argc == 2)
return git_config_set_multivar(argv[0], NULL, argv[1], 0);
return git_config_set_multivar_in_file(given_config_file,
argv[0], NULL, argv[1], 0);
else
return git_config_set(argv[0], NULL);
return git_config_set_in_file(given_config_file,
argv[0], NULL);
}
else if (actions == ACTION_UNSET_ALL) {
check_argc(argc, 1, 2);
return git_config_set_multivar(argv[0], NULL, argv[1], 1);
return git_config_set_multivar_in_file(given_config_file,
argv[0], NULL, argv[1], 1);
}
else if (actions == ACTION_RENAME_SECTION) {
int ret;
check_argc(argc, 2, 2);
ret = git_config_rename_section(argv[0], argv[1]);
ret = git_config_rename_section_in_file(given_config_file,
argv[0], argv[1]);
if (ret < 0)
return ret;
if (ret == 0)
@ -501,7 +530,8 @@ int cmd_config(int argc, const char **argv, const char *prefix)
else if (actions == ACTION_REMOVE_SECTION) {
int ret;
check_argc(argc, 1, 1);
ret = git_config_rename_section(argv[0], NULL);
ret = git_config_rename_section_in_file(given_config_file,
argv[0], NULL);
if (ret < 0)
return ret;
if (ret == 0)

View file

@ -58,9 +58,9 @@ static void rev_list_push(struct commit *commit, int mark)
}
}
static int rev_list_insert_ref(const char *path, const unsigned char *sha1, int flag, void *cb_data)
static int rev_list_insert_ref(const char *refname, const unsigned char *sha1, int flag, void *cb_data)
{
struct object *o = deref_tag(parse_object(sha1), path, 0);
struct object *o = deref_tag(parse_object(sha1), refname, 0);
if (o && o->type == OBJ_COMMIT)
rev_list_push((struct commit *)o, SEEN);
@ -68,9 +68,9 @@ static int rev_list_insert_ref(const char *path, const unsigned char *sha1, int
return 0;
}
static int clear_marks(const char *path, const unsigned char *sha1, int flag, void *cb_data)
static int clear_marks(const char *refname, const unsigned char *sha1, int flag, void *cb_data)
{
struct object *o = deref_tag(parse_object(sha1), path, 0);
struct object *o = deref_tag(parse_object(sha1), refname, 0);
if (o && o->type == OBJ_COMMIT)
clear_commit_marks((struct commit *)o,
@ -256,11 +256,6 @@ static void insert_one_alternate_ref(const struct ref *ref, void *unused)
rev_list_insert_ref(NULL, ref->old_sha1, 0, NULL);
}
static void insert_alternate_refs(void)
{
for_each_alternate_ref(insert_one_alternate_ref, NULL);
}
#define INITIAL_FLUSH 16
#define PIPESAFE_FLUSH 32
#define LARGE_FLUSH 1024
@ -295,7 +290,7 @@ static int find_common(int fd[2], unsigned char *result_sha1,
marked = 1;
for_each_ref(rev_list_insert_ref, NULL);
insert_alternate_refs();
for_each_alternate_ref(insert_one_alternate_ref, NULL);
fetching = 0;
for ( ; refs ; refs = refs->next) {
@ -493,7 +488,7 @@ done:
static struct commit_list *complete;
static int mark_complete(const char *path, const unsigned char *sha1, int flag, void *cb_data)
static int mark_complete(const char *refname, const unsigned char *sha1, int flag, void *cb_data)
{
struct object *o = parse_object(sha1);
@ -586,6 +581,11 @@ static void filter_refs(struct ref **refs, int nr_match, char **match)
*refs = newlist;
}
static void mark_alternate_complete(const struct ref *ref, void *unused)
{
mark_complete(NULL, ref->old_sha1, 0, NULL);
}
static int everything_local(struct ref **refs, int nr_match, char **match)
{
struct ref *ref;
@ -614,6 +614,7 @@ static int everything_local(struct ref **refs, int nr_match, char **match)
if (!args.depth) {
for_each_ref(mark_complete, NULL);
for_each_alternate_ref(mark_alternate_complete, NULL);
if (cutoff)
mark_recent_complete_commits(cutoff);
}
@ -736,7 +737,7 @@ static int get_pack(int xd[2], char **pack_lockfile)
}
else {
*av++ = "unpack-objects";
if (args.quiet)
if (args.quiet || args.no_progress)
*av++ = "-q";
}
if (*hdr_arg)

View file

@ -30,7 +30,7 @@ enum {
};
static int all, append, dry_run, force, keep, multiple, prune, update_head_ok, verbosity;
static int progress, recurse_submodules = RECURSE_SUBMODULES_DEFAULT;
static int progress = -1, recurse_submodules = RECURSE_SUBMODULES_DEFAULT;
static int tags = TAGS_DEFAULT;
static const char *depth;
static const char *upload_pack;
@ -78,7 +78,7 @@ static struct option builtin_fetch_options[] = {
OPT_BOOLEAN('k', "keep", &keep, "keep downloaded pack"),
OPT_BOOLEAN('u', "update-head-ok", &update_head_ok,
"allow updating of HEAD ref"),
OPT_BOOLEAN(0, "progress", &progress, "force progress reporting"),
OPT_BOOL(0, "progress", &progress, "force progress reporting"),
OPT_STRING(0, "depth", &depth, "depth",
"deepen history of shallow clone"),
{ OPTION_STRING, 0, "submodule-prefix", &submodule_prefix, "dir",

View file

@ -29,25 +29,12 @@ static int use_threads = 1;
#define THREADS 8
static pthread_t threads[THREADS];
static void *load_sha1(const unsigned char *sha1, unsigned long *size,
const char *name);
static void *load_file(const char *filename, size_t *sz);
enum work_type {WORK_SHA1, WORK_FILE};
/* We use one producer thread and THREADS consumer
* threads. The producer adds struct work_items to 'todo' and the
* consumers pick work items from the same array.
*/
struct work_item {
enum work_type type;
char *name;
/* if type == WORK_SHA1, then 'identifier' is a SHA1,
* otherwise type == WORK_FILE, and 'identifier' is a NUL
* terminated filename.
*/
void *identifier;
struct grep_source source;
char done;
struct strbuf out;
};
@ -85,21 +72,6 @@ static inline void grep_unlock(void)
pthread_mutex_unlock(&grep_mutex);
}
/* Used to serialize calls to read_sha1_file. */
static pthread_mutex_t read_sha1_mutex;
static inline void read_sha1_lock(void)
{
if (use_threads)
pthread_mutex_lock(&read_sha1_mutex);
}
static inline void read_sha1_unlock(void)
{
if (use_threads)
pthread_mutex_unlock(&read_sha1_mutex);
}
/* Signalled when a new work_item is added to todo. */
static pthread_cond_t cond_add;
@ -113,7 +85,8 @@ static pthread_cond_t cond_result;
static int skip_first_line;
static void add_work(enum work_type type, char *name, void *id)
static void add_work(struct grep_opt *opt, enum grep_source_type type,
const char *name, const void *id)
{
grep_lock();
@ -121,9 +94,9 @@ static void add_work(enum work_type type, char *name, void *id)
pthread_cond_wait(&cond_write, &grep_mutex);
}
todo[todo_end].type = type;
todo[todo_end].name = name;
todo[todo_end].identifier = id;
grep_source_init(&todo[todo_end].source, type, name, id);
if (opt->binary != GREP_BINARY_TEXT)
grep_source_load_driver(&todo[todo_end].source);
todo[todo_end].done = 0;
strbuf_reset(&todo[todo_end].out);
todo_end = (todo_end + 1) % ARRAY_SIZE(todo);
@ -151,21 +124,6 @@ static struct work_item *get_work(void)
return ret;
}
static void grep_sha1_async(struct grep_opt *opt, char *name,
const unsigned char *sha1)
{
unsigned char *s;
s = xmalloc(20);
memcpy(s, sha1, 20);
add_work(WORK_SHA1, name, s);
}
static void grep_file_async(struct grep_opt *opt, char *name,
const char *filename)
{
add_work(WORK_FILE, name, xstrdup(filename));
}
static void work_done(struct work_item *w)
{
int old_done;
@ -192,8 +150,7 @@ static void work_done(struct work_item *w)
write_or_die(1, p, len);
}
free(w->name);
free(w->identifier);
grep_source_clear(&w->source);
}
if (old_done != todo_done)
@ -216,25 +173,8 @@ static void *run(void *arg)
break;
opt->output_priv = w;
if (w->type == WORK_SHA1) {
unsigned long sz;
void* data = load_sha1(w->identifier, &sz, w->name);
if (data) {
hit |= grep_buffer(opt, w->name, data, sz);
free(data);
}
} else if (w->type == WORK_FILE) {
size_t sz;
void* data = load_file(w->identifier, &sz);
if (data) {
hit |= grep_buffer(opt, w->name, data, sz);
free(data);
}
} else {
assert(0);
}
hit |= grep_source(opt, &w->source);
grep_source_clear_data(&w->source);
work_done(w);
}
free_grep_patterns(arg);
@ -254,11 +194,12 @@ static void start_threads(struct grep_opt *opt)
int i;
pthread_mutex_init(&grep_mutex, NULL);
pthread_mutex_init(&read_sha1_mutex, NULL);
pthread_mutex_init(&grep_read_mutex, NULL);
pthread_mutex_init(&grep_attr_mutex, NULL);
pthread_cond_init(&cond_add, NULL);
pthread_cond_init(&cond_write, NULL);
pthread_cond_init(&cond_result, NULL);
grep_use_locks = 1;
for (i = 0; i < ARRAY_SIZE(todo); i++) {
strbuf_init(&todo[i].out, 0);
@ -302,17 +243,16 @@ static int wait_all(void)
}
pthread_mutex_destroy(&grep_mutex);
pthread_mutex_destroy(&read_sha1_mutex);
pthread_mutex_destroy(&grep_read_mutex);
pthread_mutex_destroy(&grep_attr_mutex);
pthread_cond_destroy(&cond_add);
pthread_cond_destroy(&cond_write);
pthread_cond_destroy(&cond_result);
grep_use_locks = 0;
return hit;
}
#else /* !NO_PTHREADS */
#define read_sha1_lock()
#define read_sha1_unlock()
static int wait_all(void)
{
@ -325,11 +265,8 @@ static int grep_config(const char *var, const char *value, void *cb)
struct grep_opt *opt = cb;
char *color = NULL;
switch (userdiff_config(var, value)) {
case 0: break;
case -1: return -1;
default: return 0;
}
if (userdiff_config(var, value) < 0)
return -1;
if (!strcmp(var, "grep.extendedregexp")) {
if (git_config_bool(var, value))
@ -374,21 +311,9 @@ static void *lock_and_read_sha1_file(const unsigned char *sha1, enum object_type
{
void *data;
read_sha1_lock();
grep_read_lock();
data = read_sha1_file(sha1, type, size);
read_sha1_unlock();
return data;
}
static void *load_sha1(const unsigned char *sha1, unsigned long *size,
const char *name)
{
enum object_type type;
void *data = lock_and_read_sha1_file(sha1, &type, size);
if (!data)
error(_("'%s': unable to read %s"), name, sha1_to_hex(sha1));
grep_read_unlock();
return data;
}
@ -396,7 +321,6 @@ static int grep_sha1(struct grep_opt *opt, const unsigned char *sha1,
const char *filename, int tree_name_len)
{
struct strbuf pathbuf = STRBUF_INIT;
char *name;
if (opt->relative && opt->prefix_length) {
quote_path_relative(filename + tree_name_len, -1, &pathbuf,
@ -406,87 +330,51 @@ static int grep_sha1(struct grep_opt *opt, const unsigned char *sha1,
strbuf_addstr(&pathbuf, filename);
}
name = strbuf_detach(&pathbuf, NULL);
#ifndef NO_PTHREADS
if (use_threads) {
grep_sha1_async(opt, name, sha1);
add_work(opt, GREP_SOURCE_SHA1, pathbuf.buf, sha1);
strbuf_release(&pathbuf);
return 0;
} else
#endif
{
struct grep_source gs;
int hit;
unsigned long sz;
void *data = load_sha1(sha1, &sz, name);
if (!data)
hit = 0;
else
hit = grep_buffer(opt, name, data, sz);
free(data);
free(name);
grep_source_init(&gs, GREP_SOURCE_SHA1, pathbuf.buf, sha1);
strbuf_release(&pathbuf);
hit = grep_source(opt, &gs);
grep_source_clear(&gs);
return hit;
}
}
static void *load_file(const char *filename, size_t *sz)
{
struct stat st;
char *data;
int i;
if (lstat(filename, &st) < 0) {
err_ret:
if (errno != ENOENT)
error(_("'%s': %s"), filename, strerror(errno));
return NULL;
}
if (!S_ISREG(st.st_mode))
return NULL;
*sz = xsize_t(st.st_size);
i = open(filename, O_RDONLY);
if (i < 0)
goto err_ret;
data = xmalloc(*sz + 1);
if (st.st_size != read_in_full(i, data, *sz)) {
error(_("'%s': short read %s"), filename, strerror(errno));
close(i);
free(data);
return NULL;
}
close(i);
data[*sz] = 0;
return data;
}
static int grep_file(struct grep_opt *opt, const char *filename)
{
struct strbuf buf = STRBUF_INIT;
char *name;
if (opt->relative && opt->prefix_length)
quote_path_relative(filename, -1, &buf, opt->prefix);
else
strbuf_addstr(&buf, filename);
name = strbuf_detach(&buf, NULL);
#ifndef NO_PTHREADS
if (use_threads) {
grep_file_async(opt, name, filename);
add_work(opt, GREP_SOURCE_FILE, buf.buf, filename);
strbuf_release(&buf);
return 0;
} else
#endif
{
struct grep_source gs;
int hit;
size_t sz;
void *data = load_file(filename, &sz);
if (!data)
hit = 0;
else
hit = grep_buffer(opt, name, data, sz);
free(data);
free(name);
grep_source_init(&gs, GREP_SOURCE_FILE, buf.buf, filename);
strbuf_release(&buf);
hit = grep_source(opt, &gs);
grep_source_clear(&gs);
return hit;
}
}
@ -615,10 +503,10 @@ static int grep_object(struct grep_opt *opt, const struct pathspec *pathspec,
struct strbuf base;
int hit, len;
read_sha1_lock();
grep_read_lock();
data = read_object_with_reference(obj->sha1, tree_type,
&size, NULL);
read_sha1_unlock();
grep_read_unlock();
if (!data)
die(_("unable to read tree (%s)"), sha1_to_hex(obj->sha1));
@ -1030,8 +918,6 @@ int cmd_grep(int argc, const char **argv, const char *prefix)
use_threads = 0;
#endif
opt.use_threads = use_threads;
#ifndef NO_PTHREADS
if (use_threads) {
if (!(opt.name_only || opt.unmatch_name_only || opt.count)


@ -1129,7 +1129,7 @@ static int default_edit_option(void)
/* Use editor if stdin and stdout are the same and is a tty */
return (!fstat(0, &st_stdin) &&
!fstat(1, &st_stdout) &&
isatty(0) &&
isatty(0) && isatty(1) &&
st_stdin.st_dev == st_stdout.st_dev &&
st_stdin.st_ino == st_stdout.st_ino &&
st_stdin.st_mode == st_stdout.st_mode);
@ -1324,7 +1324,8 @@ int cmd_merge(int argc, const char **argv, const char *prefix)
merge_remote_util(commit) &&
merge_remote_util(commit)->obj &&
merge_remote_util(commit)->obj->type == OBJ_TAG) {
option_edit = 1;
if (option_edit < 0)
option_edit = 1;
allow_fast_forward = 0;
}
}
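
The builtin/merge.c hunks above make the "use the editor by default" check require both stdin and stdout to be the same terminal, and let a merge of an annotated tag turn the editor on only when the user has not already chosen. A rough shell illustration of the resulting behaviour (the branch name "topic" is hypothetical):

    # interactive session on a terminal: the auto-generated merge message
    # is opened in the editor before the merge commit is made
    git merge topic

    # non-interactive use: stdin is not the terminal, so the editor is not
    # spawned and the auto-generated message is used as-is
    git merge topic </dev/null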


@ -18,16 +18,11 @@
#include "refs.h"
#include "thread-utils.h"
static const char pack_usage[] =
"git pack-objects [ -q | --progress | --all-progress ]\n"
" [--all-progress-implied]\n"
" [--max-pack-size=<n>] [--local] [--incremental]\n"
" [--window=<n>] [--window-memory=<n>] [--depth=<n>]\n"
" [--no-reuse-delta] [--no-reuse-object] [--delta-base-offset]\n"
" [--threads=<n>] [--non-empty] [--revs [--unpacked | --all]]\n"
" [--reflog] [--stdout | base-name] [--include-tag]\n"
" [--keep-unreachable | --unpack-unreachable]\n"
" [< ref-list | < object-list]";
static const char *pack_usage[] = {
"git pack-objects --stdout [options...] [< ref-list | < object-list]",
"git pack-objects [options...] base-name [< ref-list | < object-list]",
NULL
};
struct object_entry {
struct pack_idx_entry idx;
@ -2305,204 +2300,159 @@ static void get_object_list(int ac, const char **av)
loosen_unused_packed_objects(&revs);
}
static int option_parse_index_version(const struct option *opt,
const char *arg, int unset)
{
char *c;
const char *val = arg;
pack_idx_opts.version = strtoul(val, &c, 10);
if (pack_idx_opts.version > 2)
die(_("unsupported index version %s"), val);
if (*c == ',' && c[1])
pack_idx_opts.off32_limit = strtoul(c+1, &c, 0);
if (*c || pack_idx_opts.off32_limit & 0x80000000)
die(_("bad index version '%s'"), val);
return 0;
}
static int option_parse_ulong(const struct option *opt,
const char *arg, int unset)
{
if (unset)
die(_("option %s does not accept negative form"),
opt->long_name);
if (!git_parse_ulong(arg, opt->value))
die(_("unable to parse value '%s' for option %s"),
arg, opt->long_name);
return 0;
}
#define OPT_ULONG(s, l, v, h) \
{ OPTION_CALLBACK, (s), (l), (v), "n", (h), \
PARSE_OPT_NONEG, option_parse_ulong }
int cmd_pack_objects(int argc, const char **argv, const char *prefix)
{
int use_internal_rev_list = 0;
int thin = 0;
int all_progress_implied = 0;
uint32_t i;
const char **rp_av;
int rp_ac_alloc = 64;
int rp_ac;
const char *rp_av[6];
int rp_ac = 0;
int rev_list_unpacked = 0, rev_list_all = 0, rev_list_reflog = 0;
struct option pack_objects_options[] = {
OPT_SET_INT('q', "quiet", &progress,
"do not show progress meter", 0),
OPT_SET_INT(0, "progress", &progress,
"show progress meter", 1),
OPT_SET_INT(0, "all-progress", &progress,
"show progress meter during object writing phase", 2),
OPT_BOOL(0, "all-progress-implied",
&all_progress_implied,
"similar to --all-progress when progress meter is shown"),
{ OPTION_CALLBACK, 0, "index-version", NULL, "version[,offset]",
"write the pack index file in the specified idx format version",
0, option_parse_index_version },
OPT_ULONG(0, "max-pack-size", &pack_size_limit,
"maximum size of each output pack file"),
OPT_BOOL(0, "local", &local,
"ignore borrowed objects from alternate object store"),
OPT_BOOL(0, "incremental", &incremental,
"ignore packed objects"),
OPT_INTEGER(0, "window", &window,
"limit pack window by objects"),
OPT_ULONG(0, "window-memory", &window_memory_limit,
"limit pack window by memory in addition to object limit"),
OPT_INTEGER(0, "depth", &depth,
"maximum length of delta chain allowed in the resulting pack"),
OPT_BOOL(0, "reuse-delta", &reuse_delta,
"reuse existing deltas"),
OPT_BOOL(0, "reuse-object", &reuse_object,
"reuse existing objects"),
OPT_BOOL(0, "delta-base-offset", &allow_ofs_delta,
"use OFS_DELTA objects"),
OPT_INTEGER(0, "threads", &delta_search_threads,
"use threads when searching for best delta matches"),
OPT_BOOL(0, "non-empty", &non_empty,
"do not create an empty pack output"),
OPT_BOOL(0, "revs", &use_internal_rev_list,
"read revision arguments from standard input"),
{ OPTION_SET_INT, 0, "unpacked", &rev_list_unpacked, NULL,
"limit the objects to those that are not yet packed",
PARSE_OPT_NOARG | PARSE_OPT_NONEG, NULL, 1 },
{ OPTION_SET_INT, 0, "all", &rev_list_all, NULL,
"include objects reachable from any reference",
PARSE_OPT_NOARG | PARSE_OPT_NONEG, NULL, 1 },
{ OPTION_SET_INT, 0, "reflog", &rev_list_reflog, NULL,
"include objects referred by reflog entries",
PARSE_OPT_NOARG | PARSE_OPT_NONEG, NULL, 1 },
OPT_BOOL(0, "stdout", &pack_to_stdout,
"output pack to stdout"),
OPT_BOOL(0, "include-tag", &include_tag,
"include tag objects that refer to objects to be packed"),
OPT_BOOL(0, "keep-unreachable", &keep_unreachable,
"keep unreachable objects"),
OPT_BOOL(0, "unpack-unreachable", &unpack_unreachable,
"unpack unreachable objects"),
OPT_BOOL(0, "thin", &thin,
"create thin packs"),
OPT_BOOL(0, "honor-pack-keep", &ignore_packed_keep,
"ignore packs that have companion .keep file"),
OPT_INTEGER(0, "compression", &pack_compression_level,
"pack compression level"),
OPT_SET_INT(0, "keep-true-parents", &grafts_replace_parents,
"do not hide commits by grafts", 0),
OPT_END(),
};
read_replace_refs = 0;
rp_av = xcalloc(rp_ac_alloc, sizeof(*rp_av));
rp_av[0] = "pack-objects";
rp_av[1] = "--objects"; /* --thin will make it --objects-edge */
rp_ac = 2;
reset_pack_idx_option(&pack_idx_opts);
git_config(git_pack_config, NULL);
if (!pack_compression_seen && core_compression_seen)
pack_compression_level = core_compression_level;
progress = isatty(2);
for (i = 1; i < argc; i++) {
const char *arg = argv[i];
argc = parse_options(argc, argv, prefix, pack_objects_options,
pack_usage, 0);
if (*arg != '-')
break;
if (argc) {
base_name = argv[0];
argc--;
}
if (pack_to_stdout != !base_name || argc)
usage_with_options(pack_usage, pack_objects_options);
if (!strcmp("--non-empty", arg)) {
non_empty = 1;
continue;
}
if (!strcmp("--local", arg)) {
local = 1;
continue;
}
if (!strcmp("--incremental", arg)) {
incremental = 1;
continue;
}
if (!strcmp("--honor-pack-keep", arg)) {
ignore_packed_keep = 1;
continue;
}
if (!prefixcmp(arg, "--compression=")) {
char *end;
int level = strtoul(arg+14, &end, 0);
if (!arg[14] || *end)
usage(pack_usage);
if (level == -1)
level = Z_DEFAULT_COMPRESSION;
else if (level < 0 || level > Z_BEST_COMPRESSION)
die("bad pack compression level %d", level);
pack_compression_level = level;
continue;
}
if (!prefixcmp(arg, "--max-pack-size=")) {
pack_size_limit_cfg = 0;
if (!git_parse_ulong(arg+16, &pack_size_limit))
usage(pack_usage);
continue;
}
if (!prefixcmp(arg, "--window=")) {
char *end;
window = strtoul(arg+9, &end, 0);
if (!arg[9] || *end)
usage(pack_usage);
continue;
}
if (!prefixcmp(arg, "--window-memory=")) {
if (!git_parse_ulong(arg+16, &window_memory_limit))
usage(pack_usage);
continue;
}
if (!prefixcmp(arg, "--threads=")) {
char *end;
delta_search_threads = strtoul(arg+10, &end, 0);
if (!arg[10] || *end || delta_search_threads < 0)
usage(pack_usage);
#ifdef NO_PTHREADS
if (delta_search_threads != 1)
warning("no threads support, "
"ignoring %s", arg);
#endif
continue;
}
if (!prefixcmp(arg, "--depth=")) {
char *end;
depth = strtoul(arg+8, &end, 0);
if (!arg[8] || *end)
usage(pack_usage);
continue;
}
if (!strcmp("--progress", arg)) {
progress = 1;
continue;
}
if (!strcmp("--all-progress", arg)) {
progress = 2;
continue;
}
if (!strcmp("--all-progress-implied", arg)) {
all_progress_implied = 1;
continue;
}
if (!strcmp("-q", arg)) {
progress = 0;
continue;
}
if (!strcmp("--no-reuse-delta", arg)) {
reuse_delta = 0;
continue;
}
if (!strcmp("--no-reuse-object", arg)) {
reuse_object = reuse_delta = 0;
continue;
}
if (!strcmp("--delta-base-offset", arg)) {
allow_ofs_delta = 1;
continue;
}
if (!strcmp("--stdout", arg)) {
pack_to_stdout = 1;
continue;
}
if (!strcmp("--revs", arg)) {
use_internal_rev_list = 1;
continue;
}
if (!strcmp("--keep-unreachable", arg)) {
keep_unreachable = 1;
continue;
}
if (!strcmp("--unpack-unreachable", arg)) {
unpack_unreachable = 1;
continue;
}
if (!strcmp("--include-tag", arg)) {
include_tag = 1;
continue;
}
if (!strcmp("--unpacked", arg) ||
!strcmp("--reflog", arg) ||
!strcmp("--all", arg)) {
use_internal_rev_list = 1;
if (rp_ac >= rp_ac_alloc - 1) {
rp_ac_alloc = alloc_nr(rp_ac_alloc);
rp_av = xrealloc(rp_av,
rp_ac_alloc * sizeof(*rp_av));
}
rp_av[rp_ac++] = arg;
continue;
}
if (!strcmp("--thin", arg)) {
use_internal_rev_list = 1;
thin = 1;
rp_av[1] = "--objects-edge";
continue;
}
if (!prefixcmp(arg, "--index-version=")) {
char *c;
pack_idx_opts.version = strtoul(arg + 16, &c, 10);
if (pack_idx_opts.version > 2)
die("bad %s", arg);
if (*c == ',')
pack_idx_opts.off32_limit = strtoul(c+1, &c, 0);
if (*c || pack_idx_opts.off32_limit & 0x80000000)
die("bad %s", arg);
continue;
}
if (!strcmp(arg, "--keep-true-parents")) {
grafts_replace_parents = 0;
continue;
}
usage(pack_usage);
rp_av[rp_ac++] = "pack-objects";
if (thin) {
use_internal_rev_list = 1;
rp_av[rp_ac++] = "--objects-edge";
} else
rp_av[rp_ac++] = "--objects";
if (rev_list_all) {
use_internal_rev_list = 1;
rp_av[rp_ac++] = "--all";
}
if (rev_list_reflog) {
use_internal_rev_list = 1;
rp_av[rp_ac++] = "--reflog";
}
if (rev_list_unpacked) {
use_internal_rev_list = 1;
rp_av[rp_ac++] = "--unpacked";
}
/* Traditionally "pack-objects [options] base extra" failed;
* we would however want to take refs parameter that would
* have been given to upstream rev-list ourselves, which means
* we somehow want to say what the base name is. So the
* syntax would be:
*
* pack-objects [options] base <refs...>
*
* in other words, we would treat the first non-option as the
* base_name and send everything else to the internal revision
* walker.
*/
if (!pack_to_stdout)
base_name = argv[i++];
if (pack_to_stdout != !base_name)
usage(pack_usage);
if (!reuse_object)
reuse_delta = 0;
if (pack_compression_level == -1)
pack_compression_level = Z_DEFAULT_COMPRESSION;
else if (pack_compression_level < 0 || pack_compression_level > Z_BEST_COMPRESSION)
die("bad pack compression level %d", pack_compression_level);
#ifdef NO_PTHREADS
if (delta_search_threads != 1)
warning("no threads support, ignoring --threads");
#endif
if (!pack_to_stdout && !pack_size_limit)
pack_size_limit = pack_size_limit_cfg;
if (pack_to_stdout && pack_size_limit)
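
The pack-objects conversion above replaces the hand-rolled argument loop with parse_options() and collapses the old multi-line usage string into the two forms shown at the top of the hunk. A usage sketch matching those forms (output file names are illustrative):

    # stream a pack of everything reachable from HEAD to standard output
    git rev-list --objects HEAD | git pack-objects --stdout >"all.pack"

    # same object list, but write backup-<hash>.pack/.idx files instead
    git rev-list --objects HEAD | git pack-objects backup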


@ -19,7 +19,7 @@ static int thin;
static int deleterefs;
static const char *receivepack;
static int verbosity;
static int progress;
static int progress = -1;
static const char **refspec;
static int refspec_nr;
@ -260,7 +260,9 @@ int cmd_push(int argc, const char **argv, const char *prefix)
OPT_STRING( 0 , "exec", &receivepack, "receive-pack", "receive pack program"),
OPT_BIT('u', "set-upstream", &flags, "set upstream for git pull/status",
TRANSPORT_PUSH_SET_UPSTREAM),
OPT_BOOLEAN(0, "progress", &progress, "force progress reporting"),
OPT_BOOL(0, "progress", &progress, "force progress reporting"),
OPT_BIT(0, "prune", &flags, "prune locally removed refs",
TRANSPORT_PUSH_PRUNE),
OPT_END()
};
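
The push.c hunk turns --progress into a real tri-state (initialized to -1 so the transport can auto-detect a terminal later) and adds the --prune bit. A usage sketch (the remote name "origin" is assumed):

    # prune remote branches whose local counterpart has been removed
    git push --prune origin

    # force progress reporting even though stderr is being redirected
    git push --progress origin 2>"push.log"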


@ -642,8 +642,10 @@ static void check_aliased_updates(struct command *commands)
}
sort_string_list(&ref_list);
for (cmd = commands; cmd; cmd = cmd->next)
check_aliased_update(cmd, &ref_list);
for (cmd = commands; cmd; cmd = cmd->next) {
if (!cmd->error_string)
check_aliased_update(cmd, &ref_list);
}
string_list_clear(&ref_list, 0);
}
@ -707,8 +709,10 @@ static void execute_commands(struct command *commands, const char *unpacker_erro
set_connectivity_errors(commands);
if (run_receive_hook(commands, pre_receive_hook, 0)) {
for (cmd = commands; cmd; cmd = cmd->next)
cmd->error_string = "pre-receive hook declined";
for (cmd = commands; cmd; cmd = cmd->next) {
if (!cmd->error_string)
cmd->error_string = "pre-receive hook declined";
}
return;
}
@ -717,9 +721,15 @@ static void execute_commands(struct command *commands, const char *unpacker_erro
free(head_name_to_free);
head_name = head_name_to_free = resolve_refdup("HEAD", sha1, 0, NULL);
for (cmd = commands; cmd; cmd = cmd->next)
if (!cmd->skip_update)
cmd->error_string = update(cmd);
for (cmd = commands; cmd; cmd = cmd->next) {
if (cmd->error_string)
continue;
if (cmd->skip_update)
continue;
cmd->error_string = update(cmd);
}
}
static struct command *read_head_info(void)


@ -16,7 +16,7 @@ static const char * const builtin_remote_usage[] = {
"git remote [-v | --verbose] show [-n] <name>",
"git remote prune [-n | --dry-run] <name>",
"git remote [-v | --verbose] update [-p | --prune] [(<group> | <remote>)...]",
"git remote set-branches <name> [--add] <branch>...",
"git remote set-branches [--add] <name> <branch>...",
"git remote set-url <name> <newurl> [<oldurl>]",
"git remote set-url --add <name> <newurl>",
"git remote set-url --delete <name> <url>",


@ -180,10 +180,10 @@ static void show_object(struct object *obj,
const struct name_path *path, const char *component,
void *cb_data)
{
struct rev_info *info = cb_data;
struct rev_list_info *info = cb_data;
finish_object(obj, path, component, cb_data);
if (info->verify_objects && !obj->parsed && obj->type != OBJ_COMMIT)
if (info->revs->verify_objects && !obj->parsed && obj->type != OBJ_COMMIT)
parse_object(obj->sha1);
show_object_with_name(stdout, obj, path, component);
}


@ -58,7 +58,7 @@ static int pack_objects(int fd, struct ref *refs, struct extra_have_objects *ext
argv[i++] = "--thin";
if (args->use_ofs_delta)
argv[i++] = "--delta-base-offset";
if (args->quiet)
if (args->quiet || !args->progress)
argv[i++] = "-q";
if (args->progress)
argv[i++] = "--progress";
@ -250,6 +250,7 @@ int send_pack(struct send_pack_args *args,
int allow_deleting_refs = 0;
int status_report = 0;
int use_sideband = 0;
int quiet_supported = 0;
unsigned cmds_sent = 0;
int ret;
struct async demux;
@ -263,8 +264,8 @@ int send_pack(struct send_pack_args *args,
args->use_ofs_delta = 1;
if (server_supports("side-band-64k"))
use_sideband = 1;
if (!server_supports("quiet"))
args->quiet = 0;
if (server_supports("quiet"))
quiet_supported = 1;
if (!remote_refs) {
fprintf(stderr, "No refs in common and none specified; doing nothing.\n"
@ -302,17 +303,18 @@ int send_pack(struct send_pack_args *args,
} else {
char *old_hex = sha1_to_hex(ref->old_sha1);
char *new_hex = sha1_to_hex(ref->new_sha1);
int quiet = quiet_supported && (args->quiet || !args->progress);
if (!cmds_sent && (status_report || use_sideband || args->quiet)) {
packet_buf_write(&req_buf, "%s %s %s%c%s%s%s",
old_hex, new_hex, ref->name, 0,
status_report ? " report-status" : "",
use_sideband ? " side-band-64k" : "",
args->quiet ? " quiet" : "");
old_hex, new_hex, ref->name, 0,
status_report ? " report-status" : "",
use_sideband ? " side-band-64k" : "",
quiet ? " quiet" : "");
}
else
packet_buf_write(&req_buf, "%s %s %s",
old_hex, new_hex, ref->name);
old_hex, new_hex, ref->name);
ref->status = status_report ?
REF_STATUS_EXPECTING_REPORT :
REF_STATUS_OK;


@ -15,11 +15,13 @@
#include "diff.h"
#include "revision.h"
#include "gpg-interface.h"
#include "sha1-array.h"
static const char * const git_tag_usage[] = {
"git tag [-a|-s|-u <key-id>] [-f] [-m <msg>|-F <file>] <tagname> [<head>]",
"git tag -d <tagname>...",
"git tag -l [-n[<num>]] [<pattern>...]",
"git tag -l [-n[<num>]] [--contains <commit>] [--points-at <object>] "
"\n\t\t[<pattern>...]",
"git tag -v <tagname>...",
NULL
};
@ -30,6 +32,8 @@ struct tag_filter {
struct commit_list *with_commit;
};
static struct sha1_array points_at;
static int match_pattern(const char **patterns, const char *ref)
{
/* no pattern means match everything */
@ -41,6 +45,24 @@ static int match_pattern(const char **patterns, const char *ref)
return 0;
}
static const unsigned char *match_points_at(const char *refname,
const unsigned char *sha1)
{
const unsigned char *tagged_sha1 = NULL;
struct object *obj;
if (sha1_array_lookup(&points_at, sha1) >= 0)
return sha1;
obj = parse_object(sha1);
if (!obj)
die(_("malformed object at '%s'"), refname);
if (obj->type == OBJ_TAG)
tagged_sha1 = ((struct tag *)obj)->tagged->sha1;
if (tagged_sha1 && sha1_array_lookup(&points_at, tagged_sha1) >= 0)
return tagged_sha1;
return NULL;
}
static int in_commit_list(const struct commit_list *want, struct commit *c)
{
for (; want; want = want->next)
@ -83,18 +105,51 @@ static int contains(struct commit *candidate, const struct commit_list *want)
return contains_recurse(candidate, want);
}
static void show_tag_lines(const unsigned char *sha1, int lines)
{
int i;
unsigned long size;
enum object_type type;
char *buf, *sp, *eol;
size_t len;
buf = read_sha1_file(sha1, &type, &size);
if (!buf)
die_errno("unable to read object %s", sha1_to_hex(sha1));
if (type != OBJ_COMMIT && type != OBJ_TAG)
goto free_return;
if (!size)
die("an empty %s object %s?",
typename(type), sha1_to_hex(sha1));
/* skip header */
sp = strstr(buf, "\n\n");
if (!sp)
goto free_return;
/* only take up to "lines" lines, and strip the signature from a tag */
if (type == OBJ_TAG)
size = parse_signature(buf, size);
for (i = 0, sp += 2; i < lines && sp < buf + size; i++) {
if (i)
printf("\n ");
eol = memchr(sp, '\n', size - (sp - buf));
len = eol ? eol - sp : size - (sp - buf);
fwrite(sp, len, 1, stdout);
if (!eol)
break;
sp = eol + 1;
}
free_return:
free(buf);
}
static int show_reference(const char *refname, const unsigned char *sha1,
int flag, void *cb_data)
{
struct tag_filter *filter = cb_data;
if (match_pattern(filter->patterns, refname)) {
int i;
unsigned long size;
enum object_type type;
char *buf, *sp, *eol;
size_t len;
if (filter->with_commit) {
struct commit *commit;
@ -105,38 +160,16 @@ static int show_reference(const char *refname, const unsigned char *sha1,
return 0;
}
if (points_at.nr && !match_points_at(refname, sha1))
return 0;
if (!filter->lines) {
printf("%s\n", refname);
return 0;
}
printf("%-15s ", refname);
buf = read_sha1_file(sha1, &type, &size);
if (!buf || !size)
return 0;
/* skip header */
sp = strstr(buf, "\n\n");
if (!sp) {
free(buf);
return 0;
}
/* only take up to "lines" lines, and strip the signature */
size = parse_signature(buf, size);
for (i = 0, sp += 2;
i < filter->lines && sp < buf + size;
i++) {
if (i)
printf("\n ");
eol = memchr(sp, '\n', size - (sp - buf));
len = eol ? eol - sp : size - (sp - buf);
fwrite(sp, len, 1, stdout);
if (!eol)
break;
sp = eol + 1;
}
show_tag_lines(sha1, filter->lines);
putchar('\n');
free(buf);
}
return 0;
@ -375,6 +408,23 @@ static int strbuf_check_tag_ref(struct strbuf *sb, const char *name)
return check_refname_format(sb->buf, 0);
}
static int parse_opt_points_at(const struct option *opt __attribute__((unused)),
const char *arg, int unset)
{
unsigned char sha1[20];
if (unset) {
sha1_array_clear(&points_at);
return 0;
}
if (!arg)
return error(_("switch 'points-at' requires an object"));
if (get_sha1(arg, sha1))
return error(_("malformed object name '%s'"), arg);
sha1_array_append(&points_at, sha1);
return 0;
}
int cmd_tag(int argc, const char **argv, const char *prefix)
{
struct strbuf buf = STRBUF_INIT;
@ -417,6 +467,10 @@ int cmd_tag(int argc, const char **argv, const char *prefix)
PARSE_OPT_LASTARG_DEFAULT,
parse_opt_with_commit, (intptr_t)"HEAD",
},
{
OPTION_CALLBACK, 0, "points-at", NULL, "object",
"print only tags of the object", 0, parse_opt_points_at
},
OPT_END()
};
@ -448,6 +502,8 @@ int cmd_tag(int argc, const char **argv, const char *prefix)
die(_("-n option is only allowed with -l."));
if (with_commit)
die(_("--contains option is only allowed with -l."));
if (points_at.nr)
die(_("--points-at option is only allowed with -l."));
if (delete)
return for_each_tag_name(argv, delete_tag);
if (verify)
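
The tag.c changes implement --points-at on top of a sha1_array: a ref is shown when the ref's object, or the object an annotated tag points to, is one of the requested objects; the n-line display is factored out into show_tag_lines(); and the new option is rejected outside list mode. A usage sketch (the revision names are illustrative):

    # tags, lightweight or annotated, whose target is the current commit
    git tag -l --points-at HEAD

    # the same filter combined with an ordinary name pattern
    git tag -l --points-at maint "v1.7.*"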


@ -23,23 +23,6 @@ static void add_to_ref_list(const unsigned char *sha1, const char *name,
list->nr++;
}
/* Eventually this should go to strbuf.[ch] */
static int strbuf_readline_fd(struct strbuf *sb, int fd)
{
strbuf_reset(sb);
while (1) {
char ch;
ssize_t len = xread(fd, &ch, 1);
if (len <= 0)
return len;
strbuf_addch(sb, ch);
if (ch == '\n')
break;
}
return 0;
}
static int parse_bundle_header(int fd, struct bundle_header *header,
const char *report_path)
{
@ -47,7 +30,7 @@ static int parse_bundle_header(int fd, struct bundle_header *header,
int status = 0;
/* The bundle header begins with the signature */
if (strbuf_readline_fd(&buf, fd) ||
if (strbuf_getwholeline_fd(&buf, fd, '\n') ||
strcmp(buf.buf, bundle_signature)) {
if (report_path)
error("'%s' does not look like a v2 bundle file",
@ -57,7 +40,7 @@ static int parse_bundle_header(int fd, struct bundle_header *header,
}
/* The bundle header ends with an empty line */
while (!strbuf_readline_fd(&buf, fd) &&
while (!strbuf_getwholeline_fd(&buf, fd, '\n') &&
buf.len && buf.buf[0] != '\n') {
unsigned char sha1[20];
int is_prereq = 0;
@ -251,7 +234,7 @@ int create_bundle(struct bundle_header *header, const char *path,
const char **argv_boundary = xmalloc((argc + 4) * sizeof(const char *));
const char **argv_pack = xmalloc(6 * sizeof(const char *));
int i, ref_count = 0;
char buffer[1024];
struct strbuf buf = STRBUF_INIT;
struct rev_info revs;
struct child_process rls;
FILE *rls_fout;
@ -283,20 +266,21 @@ int create_bundle(struct bundle_header *header, const char *path,
if (start_command(&rls))
return -1;
rls_fout = xfdopen(rls.out, "r");
while (fgets(buffer, sizeof(buffer), rls_fout)) {
while (strbuf_getwholeline(&buf, rls_fout, '\n') != EOF) {
unsigned char sha1[20];
if (buffer[0] == '-') {
write_or_die(bundle_fd, buffer, strlen(buffer));
if (!get_sha1_hex(buffer + 1, sha1)) {
if (buf.len > 0 && buf.buf[0] == '-') {
write_or_die(bundle_fd, buf.buf, buf.len);
if (!get_sha1_hex(buf.buf + 1, sha1)) {
struct object *object = parse_object(sha1);
object->flags |= UNINTERESTING;
add_pending_object(&revs, object, buffer);
add_pending_object(&revs, object, buf.buf);
}
} else if (!get_sha1_hex(buffer, sha1)) {
} else if (!get_sha1_hex(buf.buf, sha1)) {
struct object *object = parse_object(sha1);
object->flags |= SHOWN;
}
}
strbuf_release(&buf);
fclose(rls_fout);
if (finish_command(&rls))
return error("rev-list died");


@ -150,15 +150,16 @@ void cache_tree_invalidate_path(struct cache_tree *it, const char *path)
}
static int verify_cache(struct cache_entry **cache,
int entries, int silent)
int entries, int flags)
{
int i, funny;
int silent = flags & WRITE_TREE_SILENT;
/* Verify that the tree is merged */
funny = 0;
for (i = 0; i < entries; i++) {
struct cache_entry *ce = cache[i];
if (ce_stage(ce) || (ce->ce_flags & CE_INTENT_TO_ADD)) {
if (ce_stage(ce)) {
if (silent)
return -1;
if (10 < ++funny) {
@ -241,10 +242,11 @@ static int update_one(struct cache_tree *it,
int entries,
const char *base,
int baselen,
int missing_ok,
int dryrun)
int flags)
{
struct strbuf buffer;
int missing_ok = flags & WRITE_TREE_MISSING_OK;
int dryrun = flags & WRITE_TREE_DRY_RUN;
int i;
if (0 <= it->entry_count && has_sha1_file(it->sha1))
@ -288,8 +290,7 @@ static int update_one(struct cache_tree *it,
cache + i, entries - i,
path,
baselen + sublen + 1,
missing_ok,
dryrun);
flags);
if (subcnt < 0)
return subcnt;
i += subcnt - 1;
@ -338,8 +339,8 @@ static int update_one(struct cache_tree *it,
mode, sha1_to_hex(sha1), entlen+baselen, path);
}
if (ce->ce_flags & CE_REMOVE)
continue; /* entry being removed */
if (ce->ce_flags & (CE_REMOVE | CE_INTENT_TO_ADD))
continue; /* entry being removed or placeholder */
strbuf_grow(&buffer, entlen + 100);
strbuf_addf(&buffer, "%o %.*s%c", mode, entlen, path + baselen, '\0');
@ -371,15 +372,13 @@ static int update_one(struct cache_tree *it,
int cache_tree_update(struct cache_tree *it,
struct cache_entry **cache,
int entries,
int missing_ok,
int dryrun,
int silent)
int flags)
{
int i;
i = verify_cache(cache, entries, silent);
i = verify_cache(cache, entries, flags);
if (i)
return i;
i = update_one(it, cache, entries, "", 0, missing_ok, dryrun);
i = update_one(it, cache, entries, "", 0, flags);
if (i < 0)
return i;
return 0;
@ -572,11 +571,9 @@ int write_cache_as_tree(unsigned char *sha1, int flags, const char *prefix)
was_valid = cache_tree_fully_valid(active_cache_tree);
if (!was_valid) {
int missing_ok = flags & WRITE_TREE_MISSING_OK;
if (cache_tree_update(active_cache_tree,
active_cache, active_nr,
missing_ok, 0, 0) < 0)
flags) < 0)
return WRITE_TREE_UNMERGED_INDEX;
if (0 <= newfd) {
if (!write_cache(newfd, active_cache, active_nr) &&
@ -672,10 +669,10 @@ int cache_tree_matches_traversal(struct cache_tree *root,
return 0;
}
int update_main_cache_tree (int silent)
int update_main_cache_tree(int flags)
{
if (!the_index.cache_tree)
the_index.cache_tree = cache_tree();
return cache_tree_update(the_index.cache_tree,
the_index.cache, the_index.cache_nr, 0, 0, silent);
the_index.cache, the_index.cache_nr, flags);
}


@ -29,13 +29,15 @@ void cache_tree_write(struct strbuf *, struct cache_tree *root);
struct cache_tree *cache_tree_read(const char *buffer, unsigned long size);
int cache_tree_fully_valid(struct cache_tree *);
int cache_tree_update(struct cache_tree *, struct cache_entry **, int, int, int, int);
int cache_tree_update(struct cache_tree *, struct cache_entry **, int, int);
int update_main_cache_tree(int);
/* bitmasks to write_cache_as_tree flags */
#define WRITE_TREE_MISSING_OK 1
#define WRITE_TREE_IGNORE_CACHE_TREE 2
#define WRITE_TREE_DRY_RUN 4
#define WRITE_TREE_SILENT 8
/* error return codes */
#define WRITE_TREE_UNREADABLE_INDEX (-1)

cache.h

@ -432,6 +432,7 @@ extern char *git_work_tree_cfg;
extern int is_inside_work_tree(void);
extern int have_git_dir(void);
extern const char *get_git_dir(void);
extern int is_git_directory(const char *path);
extern char *get_object_directory(void);
extern char *get_index_file(void);
extern char *get_graft_file(void);
@ -1114,6 +1115,8 @@ extern int git_config_from_file(config_fn_t fn, const char *, void *);
extern void git_config_push_parameter(const char *text);
extern int git_config_from_parameters(config_fn_t fn, void *data);
extern int git_config(config_fn_t fn, void *);
extern int git_config_with_options(config_fn_t fn, void *,
const char *filename, int respect_includes);
extern int git_config_early(config_fn_t fn, void *, const char *repo_config);
extern int git_parse_ulong(const char *, unsigned long *);
extern int git_config_int(const char *, const char *);
@ -1129,6 +1132,7 @@ extern int git_config_parse_key(const char *, char **, int *);
extern int git_config_set_multivar(const char *, const char *, const char *, int);
extern int git_config_set_multivar_in_file(const char *, const char *, const char *, const char *, int);
extern int git_config_rename_section(const char *, const char *);
extern int git_config_rename_section_in_file(const char *, const char *, const char *);
extern const char *git_etc_gitconfig(void);
extern int check_repository_format_version(const char *var, const char *value, void *cb);
extern int git_env_bool(const char *, int);
@ -1139,7 +1143,13 @@ extern const char *get_commit_output_encoding(void);
extern int git_config_parse_parameter(const char *, config_fn_t fn, void *data);
extern const char *config_exclusive_filename;
struct config_include_data {
int depth;
config_fn_t fn;
void *data;
};
#define CONFIG_INCLUDE_INIT { 0 }
extern int git_config_include(const char *name, const char *value, void *data);
#define MAX_GITNAME (1000)
extern char git_default_email[MAX_GITNAME];
@ -1176,6 +1186,8 @@ extern void setup_pager(void);
extern const char *pager_program;
extern int pager_in_use(void);
extern int pager_use_color;
extern int term_columns(void);
extern int decimal_width(int);
extern const char *editor_program;
extern const char *askpass_program;


@ -15,14 +15,8 @@
* SOFTWARE.
*/
#include <errno.h>
#include <sys/types.h>
#include "../git-compat-util.h"
#include <stdio.h>
#include <string.h>
#ifndef NS_INADDRSZ
#define NS_INADDRSZ 4
#endif


@ -15,14 +15,8 @@
* WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include <errno.h>
#include <sys/types.h>
#include "../git-compat-util.h"
#include <stdio.h>
#include <string.h>
#ifndef NS_INT16SZ
#define NS_INT16SZ 2
#endif

config.c

@ -26,7 +26,68 @@ static config_file *cf;
static int zlib_compression_seen;
const char *config_exclusive_filename = NULL;
#define MAX_INCLUDE_DEPTH 10
static const char include_depth_advice[] =
"exceeded maximum include depth (%d) while including\n"
" %s\n"
"from\n"
" %s\n"
"Do you have circular includes?";
static int handle_path_include(const char *path, struct config_include_data *inc)
{
int ret = 0;
struct strbuf buf = STRBUF_INIT;
/*
* Use an absolute path as-is, but interpret relative paths
* based on the including config file.
*/
if (!is_absolute_path(path)) {
char *slash;
if (!cf || !cf->name)
return error("relative config includes must come from files");
slash = find_last_dir_sep(cf->name);
if (slash)
strbuf_add(&buf, cf->name, slash - cf->name + 1);
strbuf_addstr(&buf, path);
path = buf.buf;
}
if (!access(path, R_OK)) {
if (++inc->depth > MAX_INCLUDE_DEPTH)
die(include_depth_advice, MAX_INCLUDE_DEPTH, path,
cf && cf->name ? cf->name : "the command line");
ret = git_config_from_file(git_config_include, path, inc);
inc->depth--;
}
strbuf_release(&buf);
return ret;
}
int git_config_include(const char *var, const char *value, void *data)
{
struct config_include_data *inc = data;
const char *type;
int ret;
/*
* Pass along all values, including "include" directives; this makes it
* possible to query information on the includes themselves.
*/
ret = inc->fn(var, value, inc->data);
if (ret < 0)
return ret;
type = skip_prefix(var, "include.");
if (!type)
return ret;
if (!strcmp(type, "path"))
ret = handle_path_include(value, inc);
return ret;
}
static void lowercase(char *p)
{
@ -879,9 +940,6 @@ int git_config_early(config_fn_t fn, void *data, const char *repo_config)
int ret = 0, found = 0;
const char *home = NULL;
/* Setting $GIT_CONFIG makes git read _only_ the given config file. */
if (config_exclusive_filename)
return git_config_from_file(fn, config_exclusive_filename, data);
if (git_config_system() && !access(git_etc_gitconfig(), R_OK)) {
ret += git_config_from_file(fn, git_etc_gitconfig(),
data);
@ -917,10 +975,26 @@ int git_config_early(config_fn_t fn, void *data, const char *repo_config)
return ret == 0 ? found : ret;
}
int git_config(config_fn_t fn, void *data)
int git_config_with_options(config_fn_t fn, void *data,
const char *filename, int respect_includes)
{
char *repo_config = NULL;
int ret;
struct config_include_data inc = CONFIG_INCLUDE_INIT;
if (respect_includes) {
inc.fn = fn;
inc.data = data;
fn = git_config_include;
data = &inc;
}
/*
* If we have a specific filename, use it. Otherwise, follow the
* regular lookup sequence.
*/
if (filename)
return git_config_from_file(fn, filename, data);
repo_config = git_pathdup("config");
ret = git_config_early(fn, data, repo_config);
@ -929,6 +1003,11 @@ int git_config(config_fn_t fn, void *data)
return ret;
}
int git_config(config_fn_t fn, void *data)
{
return git_config_with_options(fn, data, NULL, 1);
}
/*
* Find all the stuff for git_config_set() below.
*/
@ -1233,6 +1312,7 @@ int git_config_set_multivar_in_file(const char *config_filename,
int fd = -1, in_fd;
int ret;
struct lock_file *lock = NULL;
char *filename_buf = NULL;
/* parse-key returns negative; flip the sign to feed exit(3) */
ret = 0 - git_config_parse_key(key, &store.key, &store.baselen);
@ -1241,6 +1321,8 @@ int git_config_set_multivar_in_file(const char *config_filename,
store.multi_replace = multi_replace;
if (!config_filename)
config_filename = filename_buf = git_pathdup("config");
/*
* The lock serves a purpose in addition to locking: the new
@ -1410,6 +1492,7 @@ int git_config_set_multivar_in_file(const char *config_filename,
out_free:
if (lock)
rollback_lock_file(lock);
free(filename_buf);
return ret;
write_err_out:
@ -1421,19 +1504,8 @@ write_err_out:
int git_config_set_multivar(const char *key, const char *value,
const char *value_regex, int multi_replace)
{
const char *config_filename;
char *buf = NULL;
int ret;
if (config_exclusive_filename)
config_filename = config_exclusive_filename;
else
config_filename = buf = git_pathdup("config");
ret = git_config_set_multivar_in_file(config_filename, key, value,
value_regex, multi_replace);
free(buf);
return ret;
return git_config_set_multivar_in_file(NULL, key, value, value_regex,
multi_replace);
}
static int section_name_match (const char *buf, const char *name)
@ -1476,19 +1548,19 @@ static int section_name_match (const char *buf, const char *name)
}
/* if new_name == NULL, the section is removed instead */
int git_config_rename_section(const char *old_name, const char *new_name)
int git_config_rename_section_in_file(const char *config_filename,
const char *old_name, const char *new_name)
{
int ret = 0, remove = 0;
char *config_filename;
char *filename_buf = NULL;
struct lock_file *lock = xcalloc(sizeof(struct lock_file), 1);
int out_fd;
char buf[1024];
FILE *config_file;
if (config_exclusive_filename)
config_filename = xstrdup(config_exclusive_filename);
else
config_filename = git_pathdup("config");
if (!config_filename)
config_filename = filename_buf = git_pathdup("config");
out_fd = hold_lock_file_for_update(lock, config_filename, 0);
if (out_fd < 0) {
ret = error("could not lock config file %s", config_filename);
@ -1552,10 +1624,15 @@ unlock_and_out:
if (commit_lock_file(lock) < 0)
ret = error("could not commit config file %s", config_filename);
out:
free(config_filename);
free(filename_buf);
return ret;
}
int git_config_rename_section(const char *old_name, const char *new_name)
{
return git_config_rename_section_in_file(NULL, old_name, new_name);
}
/*
* Call this to report error for your variable that should not
* get a boolean value (i.e. "[my] var" means "true").
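
The config.c additions above implement the include facility: git_config_include() forwards every key to the real callback and, when it sees include.path, reads that file in place, resolving a relative path against the including file and refusing more than MAX_INCLUDE_DEPTH nested includes. A minimal shell sketch (the included path is illustrative):

    # record an include directive in the user-level configuration file
    git config --global include.path "gitconfig.d/common"

    # relative paths are resolved against the including file, so lookups
    # now also read ~/gitconfig.d/common; absolute paths are used as-is
    git config user.name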


@ -74,3 +74,4 @@ SNPRINTF_RETURNS_BOGUS=@SNPRINTF_RETURNS_BOGUS@
NO_PTHREADS=@NO_PTHREADS@
PTHREAD_CFLAGS=@PTHREAD_CFLAGS@
PTHREAD_LIBS=@PTHREAD_LIBS@
CHARSET_LIB=@CHARSET_LIB@


@ -640,7 +640,18 @@ AC_CHECK_LIB([c], [gettext],
[LIBC_CONTAINS_LIBINTL=YesPlease],
[LIBC_CONTAINS_LIBINTL=])
AC_SUBST(LIBC_CONTAINS_LIBINTL)
test -n "$LIBC_CONTAINS_LIBINTL" || LIBS="$LIBS -lintl"
#
# Define NO_GETTEXT if you don't want Git output to be translated.
# A translated Git requires GNU libintl or another gettext implementation
AC_CHECK_HEADER([libintl.h],
[NO_GETTEXT=],
[NO_GETTEXT=YesPlease])
AC_SUBST(NO_GETTEXT)
if test -z "$NO_GETTEXT"; then
test -n "$LIBC_CONTAINS_LIBINTL" || LIBS="$LIBS -lintl"
fi
## Checks for header files.
AC_MSG_NOTICE([CHECKS for header files])
@ -824,18 +835,21 @@ AC_CHECK_HEADER([paths.h],
[HAVE_PATHS_H=])
AC_SUBST(HAVE_PATHS_H)
#
# Define NO_GETTEXT if you don't want Git output to be translated.
# A translated Git requires GNU libintl or another gettext implementation
AC_CHECK_HEADER([libintl.h],
[NO_GETTEXT=],
[NO_GETTEXT=YesPlease])
AC_SUBST(NO_GETTEXT)
#
# Define HAVE_LIBCHARSET_H if have libcharset.h
AC_CHECK_HEADER([libcharset.h],
[HAVE_LIBCHARSET_H=YesPlease],
[HAVE_LIBCHARSET_H=])
AC_SUBST(HAVE_LIBCHARSET_H)
# Define CHARSET_LIB if libiconv does not export the locale_charset symbol
# and libcharset does
CHARSET_LIB=
AC_CHECK_LIB([iconv], [locale_charset],
[],
[AC_CHECK_LIB([charset], [locale_charset],
[CHARSET_LIB=-lcharset])
]
)
AC_SUBST(CHARSET_LIB)
#
# Define NO_STRCASESTR if you don't have strcasestr.
GIT_CHECK_FUNC(strcasestr,
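
The configure.ac hunks move the libintl.h probe ahead of the -lintl check, so -lintl is only linked when gettext support is actually wanted, and set CHARSET_LIB when locale_charset() comes from libcharset rather than libiconv. A hedged build sketch (assuming autoconf is available in a git.git checkout):

    # regenerate and run configure, then build with the detected settings
    make configure
    ./configure
    make

    # or bypass autoconf and disable translation support by hand
    make NO_GETTEXT=YesPlease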


@ -60,18 +60,6 @@
# per-repository basis by setting the bash.showUpstream config
# variable.
#
#
# To submit patches:
#
# *) Read Documentation/SubmittingPatches
# *) Send all patches to the current maintainer:
#
# "Shawn O. Pearce" <spearce@spearce.org>
#
# *) Always CC the Git mailing list:
#
# git@vger.kernel.org
#
if [[ -n ${ZSH_VERSION-} ]]; then
autoload -U +X bashcompinit && bashcompinit
@ -298,13 +286,13 @@ __git_ps1 ()
fi
fi
if [ -n "${GIT_PS1_SHOWSTASHSTATE-}" ]; then
git rev-parse --verify refs/stash >/dev/null 2>&1 && s="$"
git rev-parse --verify refs/stash >/dev/null 2>&1 && s="$"
fi
if [ -n "${GIT_PS1_SHOWUNTRACKEDFILES-}" ]; then
if [ -n "$(git ls-files --others --exclude-standard)" ]; then
u="%"
fi
if [ -n "$(git ls-files --others --exclude-standard)" ]; then
u="%"
fi
fi
if [ -n "${GIT_PS1_SHOWUPSTREAM-}" ]; then
@ -313,7 +301,7 @@ __git_ps1 ()
fi
local f="$w$i$s$u"
printf "${1:- (%s)}" "$c${b##refs/heads/}${f:+ $f}$r$p"
printf -- "${1:- (%s)}" "$c${b##refs/heads/}${f:+ $f}$r$p"
fi
}
@ -495,11 +483,8 @@ fi
# 4: A suffix to be appended to each possible completion word (optional).
__gitcomp ()
{
local cur_="$cur"
local cur_="${3-$cur}"
if [ $# -gt 2 ]; then
cur_="$3"
fi
case "$cur_" in
--*=)
COMPREPLY=()
@ -524,18 +509,8 @@ __gitcomp ()
# appended.
__gitcomp_nl ()
{
local s=$'\n' IFS=' '$'\t'$'\n'
local cur_="$cur" suffix=" "
if [ $# -gt 2 ]; then
cur_="$3"
if [ $# -gt 3 ]; then
suffix="$4"
fi
fi
IFS=$s
COMPREPLY=($(compgen -P "${2-}" -S "$suffix" -W "$1" -- "$cur_"))
local IFS=$'\n'
COMPREPLY=($(compgen -P "${2-}" -S "${4- }" -W "$1" -- "${3-$cur}"))
}
__git_heads ()
@ -643,13 +618,8 @@ __git_refs_remotes ()
__git_remotes ()
{
local i ngoff IFS=$'\n' d="$(__gitdir)"
__git_shopt -q nullglob || ngoff=1
__git_shopt -s nullglob
for i in "$d/remotes"/*; do
echo ${i#$d/remotes/}
done
[ "$ngoff" ] && __git_shopt -u nullglob
local i IFS=$'\n' d="$(__gitdir)"
test -d "$d/remotes" && ls -1 "$d/remotes"
for i in $(git --git-dir="$d" config --get-regexp 'remote\..*\.url' 2>/dev/null); do
i="${i#remote.}"
echo "${i/.url*/}"
@ -676,7 +646,8 @@ __git_merge_strategies=
# is needed.
__git_compute_merge_strategies ()
{
: ${__git_merge_strategies:=$(__git_list_merge_strategies)}
test -n "$__git_merge_strategies" ||
__git_merge_strategies=$(__git_list_merge_strategies)
}
__git_complete_revlist_file ()
@ -854,7 +825,8 @@ __git_list_all_commands ()
__git_all_commands=
__git_compute_all_commands ()
{
: ${__git_all_commands:=$(__git_list_all_commands)}
test -n "$__git_all_commands" ||
__git_all_commands=$(__git_list_all_commands)
}
__git_list_porcelain_commands ()
@ -947,7 +919,8 @@ __git_porcelain_commands=
__git_compute_porcelain_commands ()
{
__git_compute_all_commands
: ${__git_porcelain_commands:=$(__git_list_porcelain_commands)}
test -n "$__git_porcelain_commands" ||
__git_porcelain_commands=$(__git_list_porcelain_commands)
}
__git_pretty_aliases ()
@ -1152,7 +1125,7 @@ _git_branch ()
__gitcomp "
--color --no-color --verbose --abbrev= --no-abbrev
--track --no-track --contains --merged --no-merged
--set-upstream --edit-description
--set-upstream --edit-description --list
"
;;
*)
@ -2527,7 +2500,7 @@ _git_svn ()
__gitcomp "
--merge --strategy= --verbose --dry-run
--fetch-all --no-rebase --commit-url
--revision $cmt_opts $fc_opts
--revision --interactive $cmt_opts $fc_opts
"
;;
set-tree,--*)
@ -2733,33 +2706,3 @@ if [ Cygwin = "$(uname -o 2>/dev/null)" ]; then
complete -o bashdefault -o default -o nospace -F _git git.exe 2>/dev/null \
|| complete -o default -o nospace -F _git git.exe
fi
if [[ -n ${ZSH_VERSION-} ]]; then
__git_shopt () {
local option
if [ $# -ne 2 ]; then
echo "USAGE: $0 (-q|-s|-u) <option>" >&2
return 1
fi
case "$2" in
nullglob)
option="$2"
;;
*)
echo "$0: invalid option: $2" >&2
return 1
esac
case "$1" in
-q) setopt | grep -q "$option" ;;
-u) unsetopt "$option" ;;
-s) setopt "$option" ;;
*)
echo "$0: invalid flag: $1" >&2
return 1
esac
}
else
__git_shopt () {
shopt "$@"
}
fi
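
The completion script changes above drop the zsh __git_shopt shim, simplify __gitcomp_nl and the lazy __git_compute_* helpers, and make __git_ps1 print its format with "printf --" so a format beginning with a dash is not mistaken for an option. A typical prompt setup that exercises these code paths (the PS1 format is illustrative; the GIT_PS1_* toggles appear in the hunks above):

    # in ~/.bashrc, after sourcing contrib/completion/git-completion.bash
    GIT_PS1_SHOWSTASHSTATE=1
    GIT_PS1_SHOWUNTRACKEDFILES=1
    GIT_PS1_SHOWUPSTREAM="auto"
    PS1='\u@\h:\w$(__git_ps1 " (%s)")\$ '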


@ -14,13 +14,15 @@ Instead, this script post-processes the line-oriented diff, finds pairs
of lines, and highlights the differing segments. It's currently very
simple and stupid about doing these tasks. In particular:
1. It will only highlight a pair of lines if they are the only two
lines in a hunk. It could instead try to match up "before" and
"after" lines for a given hunk into pairs of similar lines.
However, this may end up visually distracting, as the paired
lines would have other highlighted lines in between them. And in
practice, the lines which most need attention called to their
small, hard-to-see changes are touching only a single line.
1. It will only highlight hunks in which the number of removed and
added lines is the same, and it will pair lines within the hunk by
position (so the first removed line is compared to the first added
line, and so forth). This is simple and tends to work well in
practice. More complex changes don't highlight well, so we tend to
exclude them due to the "same number of removed and added lines"
restriction. Or even if we do try to highlight them, they end up
not highlighting because of our "don't highlight if the whole line
would be highlighted" rule.
2. It will find the common prefix and suffix of two lines, and
consider everything in the middle to be "different". It could
@ -55,3 +57,96 @@ following in your git configuration:
show = diff-highlight | less
diff = diff-highlight | less
---------------------------------------------
Bugs
----
Because diff-highlight relies on heuristics to guess which parts of
changes are important, there are some cases where the highlighting is
more distracting than useful. Fortunately, these cases are rare in
practice, and when they do occur, the worst case is simply a little
extra highlighting. This section documents some cases known to be
sub-optimal, in case somebody feels like working on improving the
heuristics.
1. Two changes on the same line get highlighted in a blob. For example,
highlighting:
----------------------------------------------
-foo(buf, size);
+foo(obj->buf, obj->size);
----------------------------------------------
yields (where the inside of "+{}" would be highlighted):
----------------------------------------------
-foo(buf, size);
+foo(+{obj->buf, obj->}size);
----------------------------------------------
whereas a more semantically meaningful output would be:
----------------------------------------------
-foo(buf, size);
+foo(+{obj->}buf, +{obj->}size);
----------------------------------------------
Note that doing this right would probably involve a set of
content-specific boundary patterns, similar to word-diff. Otherwise
you get junk like:
-----------------------------------------------------
-this line has some -{i}nt-{ere}sti-{ng} text on it
+this line has some +{fa}nt+{a}sti+{c} text on it
-----------------------------------------------------
which is less readable than the current output.
2. The multi-line matching assumes that lines in the pre- and post-image
match by position. This is often the case, but can be fooled when a
line is removed from the top and a new one added at the bottom (or
vice versa). Unless the lines in the middle are also changed, diffs
will show this as two hunks, and it will not get highlighted at all
(which is good). But if the lines in the middle are changed, the
highlighting can be misleading. Here's a pathological case:
-----------------------------------------------------
-one
-two
-three
-four
+two 2
+three 3
+four 4
+five 5
-----------------------------------------------------
which gets highlighted as:
-----------------------------------------------------
-one
-t-{wo}
-three
-f-{our}
+two 2
+t+{hree 3}
+four 4
+f+{ive 5}
-----------------------------------------------------
because it matches "two" to "three 3", and so forth. It would be
nicer as:
-----------------------------------------------------
-one
-two
-three
-four
+two +{2}
+three +{3}
+four +{4}
+five 5
-----------------------------------------------------
which would probably involve pre-matching the lines into pairs
according to some heuristic.
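
Besides the pager configuration shown earlier in this README, the filter can be tried ad hoc on colored diff output; a hedged one-liner (the path assumes a git.git checkout):

    # preview the highlighting without touching any configuration
    git log -p --color | perl contrib/diff-highlight/diff-highlight | less -R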


@ -1,28 +1,37 @@
#!/usr/bin/perl
use warnings FATAL => 'all';
use strict;
# Highlight by reversing foreground and background. You could do
# other things like bold or underline if you prefer.
my $HIGHLIGHT = "\x1b[7m";
my $UNHIGHLIGHT = "\x1b[27m";
my $COLOR = qr/\x1b\[[0-9;]*m/;
my $BORING = qr/$COLOR|\s/;
my @window;
my @removed;
my @added;
my $in_hunk;
while (<>) {
# We highlight only single-line changes, so we need
# a 4-line window to make a decision on whether
# to highlight.
push @window, $_;
next if @window < 4;
if ($window[0] =~ /^$COLOR*(\@| )/ &&
$window[1] =~ /^$COLOR*-/ &&
$window[2] =~ /^$COLOR*\+/ &&
$window[3] !~ /^$COLOR*\+/) {
print shift @window;
show_pair(shift @window, shift @window);
if (!$in_hunk) {
print;
$in_hunk = /^$COLOR*\@/;
}
elsif (/^$COLOR*-/) {
push @removed, $_;
}
elsif (/^$COLOR*\+/) {
push @added, $_;
}
else {
print shift @window;
show_hunk(\@removed, \@added);
@removed = ();
@added = ();
print;
$in_hunk = /^$COLOR*[\@ ]/;
}
# Most of the time there is enough output to keep things streaming,
@ -38,23 +47,40 @@ while (<>) {
}
}
# Special case a single-line hunk at the end of file.
if (@window == 3 &&
$window[0] =~ /^$COLOR*(\@| )/ &&
$window[1] =~ /^$COLOR*-/ &&
$window[2] =~ /^$COLOR*\+/) {
print shift @window;
show_pair(shift @window, shift @window);
}
# And then flush any remaining lines.
while (@window) {
print shift @window;
}
# Flush any queued hunk (this can happen when there is no trailing context in
# the final diff of the input).
show_hunk(\@removed, \@added);
exit 0;
sub show_pair {
sub show_hunk {
my ($a, $b) = @_;
# If one side is empty, then there is nothing to compare or highlight.
if (!@$a || !@$b) {
print @$a, @$b;
return;
}
# If we have mismatched numbers of lines on each side, we could try to
# be clever and match up similar lines. But for now we are simple and
# stupid, and only handle multi-line hunks that remove and add the same
# number of lines.
if (@$a != @$b) {
print @$a, @$b;
return;
}
my @queue;
for (my $i = 0; $i < @$a; $i++) {
my ($rm, $add) = highlight_pair($a->[$i], $b->[$i]);
print $rm;
push @queue, $add;
}
print @queue;
}
sub highlight_pair {
my @a = split_line(shift);
my @b = split_line(shift);
@ -101,8 +127,14 @@ sub show_pair {
}
}
print highlight(\@a, $pa, $sa);
print highlight(\@b, $pb, $sb);
if (is_pair_interesting(\@a, $pa, $sa, \@b, $pb, $sb)) {
return highlight_line(\@a, $pa, $sa),
highlight_line(\@b, $pb, $sb);
}
else {
return join('', @a),
join('', @b);
}
}
sub split_line {
@ -111,7 +143,7 @@ sub split_line {
split /($COLOR*)/;
}
sub highlight {
sub highlight_line {
my ($line, $prefix, $suffix) = @_;
return join('',
@ -122,3 +154,20 @@ sub highlight {
@{$line}[($suffix+1)..$#$line]
);
}
# Pairs are interesting to highlight only if we are going to end up
# highlighting a subset (i.e., not the whole line). Otherwise, the highlighting
# is just useless noise. We can detect this by finding either a matching prefix
# or suffix (disregarding boring bits like whitespace and colorization).
sub is_pair_interesting {
my ($a, $pa, $sa, $b, $pb, $sb) = @_;
my $prefix_a = join('', @$a[0..($pa-1)]);
my $prefix_b = join('', @$b[0..($pb-1)]);
my $suffix_a = join('', @$a[($sa+1)..$#$a]);
my $suffix_b = join('', @$b[($sb+1)..$#$b]);
return $prefix_a !~ /^$COLOR*-$BORING*$/ ||
$prefix_b !~ /^$COLOR*\+$BORING*$/ ||
$suffix_a !~ /^$BORING*$/ ||
$suffix_b !~ /^$BORING*$/;
}


@ -10,7 +10,7 @@
import optparse, sys, os, marshal, subprocess, shelve
import tempfile, getopt, os.path, time, platform
import re
import re, shutil
verbose = False
@ -38,7 +38,7 @@ def p4_build_cmd(cmd):
host = gitConfig("git-p4.host")
if len(host) > 0:
real_cmd += ["-h", host]
real_cmd += ["-H", host]
client = gitConfig("git-p4.client")
if len(client) > 0:
@ -186,6 +186,47 @@ def split_p4_type(p4type):
mods = s[1]
return (base, mods)
#
# return the raw p4 type of a file (text, text+ko, etc)
#
def p4_type(file):
results = p4CmdList(["fstat", "-T", "headType", file])
return results[0]['headType']
#
# Given a type base and modifier, return a regexp matching
# the keywords that can be expanded in the file
#
def p4_keywords_regexp_for_type(base, type_mods):
if base in ("text", "unicode", "binary"):
kwords = None
if "ko" in type_mods:
kwords = 'Id|Header'
elif "k" in type_mods:
kwords = 'Id|Header|Author|Date|DateTime|Change|File|Revision'
else:
return None
pattern = r"""
\$ # Starts with a dollar, followed by...
(%s) # one of the keywords, followed by...
(:[^$]+)? # possibly an old expansion, followed by...
\$ # another dollar
""" % kwords
return pattern
else:
return None
#
# Given a file, return a regexp matching the possible
# RCS keywords that will be expanded, or None for files
# with kw expansion turned off.
#
def p4_keywords_regexp_for_file(file):
if not os.path.exists(file):
return None
else:
(type_base, type_mods) = split_p4_type(p4_type(file))
return p4_keywords_regexp_for_type(type_base, type_mods)
def setP4ExecBit(file, mode):
# Reopens an already open file and changes the execute bit to match
@ -555,6 +596,46 @@ def p4PathStartsWith(path, prefix):
return path.lower().startswith(prefix.lower())
return path.startswith(prefix)
def getClientSpec():
"""Look at the p4 client spec, create a View() object that contains
all the mappings, and return it."""
specList = p4CmdList("client -o")
if len(specList) != 1:
die('Output from "client -o" is %d lines, expecting 1' %
len(specList))
# dictionary of all client parameters
entry = specList[0]
# just the keys that start with "View"
view_keys = [ k for k in entry.keys() if k.startswith("View") ]
# hold this new View
view = View()
# append the lines, in order, to the view
for view_num in range(len(view_keys)):
k = "View%d" % view_num
if k not in view_keys:
die("Expected view key %s missing" % k)
view.append(entry[k])
return view
def getClientRoot():
"""Grab the client directory."""
output = p4CmdList("client -o")
if len(output) != 1:
die('Output from "client -o" is %d lines, expecting 1' % len(output))
entry = output[0]
if "Root" not in entry:
die('Client has no "Root"')
return entry["Root"]
class Command:
def __init__(self):
self.usage = "usage: %prog [options]"
@ -753,6 +834,29 @@ class P4Submit(Command, P4UserMap):
return result
def patchRCSKeywords(self, file, pattern):
# Attempt to zap the RCS keywords in a p4 controlled file matching the given pattern
(handle, outFileName) = tempfile.mkstemp(dir='.')
try:
outFile = os.fdopen(handle, "w+")
inFile = open(file, "r")
regexp = re.compile(pattern, re.VERBOSE)
for line in inFile.readlines():
line = regexp.sub(r'$\1$', line)
outFile.write(line)
inFile.close()
outFile.close()
# Forcibly overwrite the original file
os.unlink(file)
shutil.move(outFileName, file)
except:
# cleanup our temporary file
os.unlink(outFileName)
print "Failed to strip RCS keywords in %s" % file
raise
print "Patched up RCS keywords in %s" % file
def p4UserForCommit(self,id):
# Return the tuple (perforce user,git email) for a given git commit id
self.getUserMapFromPerforceServer()
@ -918,6 +1022,7 @@ class P4Submit(Command, P4UserMap):
filesToDelete = set()
editedFiles = set()
filesToChangeExecBit = {}
for line in diff:
diff = parseDiffTreeEntry(line)
modifier = diff['status']
@ -964,9 +1069,45 @@ class P4Submit(Command, P4UserMap):
patchcmd = diffcmd + " | git apply "
tryPatchCmd = patchcmd + "--check -"
applyPatchCmd = patchcmd + "--check --apply -"
patch_succeeded = True
if os.system(tryPatchCmd) != 0:
fixed_rcs_keywords = False
patch_succeeded = False
print "Unfortunately applying the change failed!"
# Patch failed, maybe it's just RCS keyword woes. Look through
# the patch to see if that's possible.
if gitConfig("git-p4.attemptRCSCleanup","--bool") == "true":
file = None
pattern = None
kwfiles = {}
for file in editedFiles | filesToDelete:
# did this file's delta contain RCS keywords?
pattern = p4_keywords_regexp_for_file(file)
if pattern:
# this file is a possibility...look for RCS keywords.
regexp = re.compile(pattern, re.VERBOSE)
for line in read_pipe_lines(["git", "diff", "%s^..%s" % (id, id), file]):
if regexp.search(line):
if verbose:
print "got keyword match on %s in %s in %s" % (pattern, line, file)
kwfiles[file] = pattern
break
for file in kwfiles:
if verbose:
print "zapping %s with %s" % (line,pattern)
self.patchRCSKeywords(file, kwfiles[file])
fixed_rcs_keywords = True
if fixed_rcs_keywords:
print "Retrying the patch with RCS keywords cleaned up"
if os.system(tryPatchCmd) == 0:
patch_succeeded = True
if not patch_succeeded:
print "What do you want to do?"
response = "x"
while response != "s" and response != "a" and response != "w":
@ -1119,11 +1260,20 @@ class P4Submit(Command, P4UserMap):
print "Internal error: cannot locate perforce depot path from existing branches"
sys.exit(128)
self.clientPath = p4Where(self.depotPath)
self.useClientSpec = False
if gitConfig("git-p4.useclientspec", "--bool") == "true":
self.useClientSpec = True
if self.useClientSpec:
self.clientSpecDirs = getClientSpec()
if len(self.clientPath) == 0:
print "Error: Cannot locate perforce checkout of %s in client view" % self.depotPath
sys.exit(128)
if self.useClientSpec:
# all files are relative to the client spec
self.clientPath = getClientRoot()
else:
self.clientPath = p4Where(self.depotPath)
if self.clientPath == "":
die("Error: Cannot locate perforce checkout of %s in client view" % self.depotPath)
print "Perforce checkout for depot path %s located at %s" % (self.depotPath, self.clientPath)
self.oldWorkingDirectory = os.getcwd()
@ -1429,6 +1579,7 @@ class P4Sync(Command, P4UserMap):
self.p4BranchesInGit = []
self.cloneExclude = []
self.useClientSpec = False
self.useClientSpec_from_options = False
self.clientSpecDirs = None
self.tempBranches = []
self.tempBranchLocation = "git-p4-tmp"
@ -1585,15 +1736,12 @@ class P4Sync(Command, P4UserMap):
# Note that we do not try to de-mangle keywords on utf16 files,
# even though in theory somebody may want that.
if type_base in ("text", "unicode", "binary"):
if "ko" in type_mods:
text = ''.join(contents)
text = re.sub(r'\$(Id|Header):[^$]*\$', r'$\1$', text)
contents = [ text ]
elif "k" in type_mods:
text = ''.join(contents)
text = re.sub(r'\$(Id|Header|Author|Date|DateTime|Change|File|Revision):[^$]*\$', r'$\1$', text)
contents = [ text ]
pattern = p4_keywords_regexp_for_type(type_base, type_mods)
if pattern:
regexp = re.compile(pattern, re.VERBOSE)
text = ''.join(contents)
text = regexp.sub(r'$\1$', text)
contents = [ text ]
self.gitStream.write("M %s inline %s\n" % (git_mode, relPath))
@ -2125,33 +2273,6 @@ class P4Sync(Command, P4UserMap):
print self.gitError.read()
def getClientSpec(self):
specList = p4CmdList("client -o")
if len(specList) != 1:
die('Output from "client -o" is %d lines, expecting 1' %
len(specList))
# dictionary of all client parameters
entry = specList[0]
# just the keys that start with "View"
view_keys = [ k for k in entry.keys() if k.startswith("View") ]
# hold this new View
view = View()
# append the lines, in order, to the view
for view_num in range(len(view_keys)):
k = "View%d" % view_num
if k not in view_keys:
die("Expected view key %s missing" % k)
view.append(entry[k])
self.clientSpecDirs = view
if self.verbose:
for i, m in enumerate(self.clientSpecDirs.mappings):
print "clientSpecDirs %d: %s" % (i, str(m))
def run(self, args):
self.depotPaths = []
self.changeRange = ""
@ -2184,11 +2305,15 @@ class P4Sync(Command, P4UserMap):
if not gitBranchExists(self.refPrefix + "HEAD") and self.importIntoRemotes and gitBranchExists(self.branch):
system("git symbolic-ref %sHEAD %s" % (self.refPrefix, self.branch))
if not self.useClientSpec:
# accept either the command-line option, or the configuration variable
if self.useClientSpec:
# will use this after clone to set the variable
self.useClientSpec_from_options = True
else:
if gitConfig("git-p4.useclientspec", "--bool") == "true":
self.useClientSpec = True
if self.useClientSpec:
self.getClientSpec()
self.clientSpecDirs = getClientSpec()
# TODO: should always look at previous commits,
# merge with previous imports, if possible.
@ -2509,6 +2634,10 @@ class P4Clone(P4Sync):
else:
print "Could not detect main branch. No checkout/master branch created."
# auto-set this variable if invoked with --use-client-spec
if self.useClientSpec_from_options:
system("git config --bool git-p4.useclientspec true")
return True
class P4Branches(Command):


@ -85,7 +85,6 @@ prep_for_email()
oldrev=$(git rev-parse $1)
newrev=$(git rev-parse $2)
refname="$3"
maxlines=$4
# --- Interpret
# 0000->1234 (create)
@ -461,7 +460,7 @@ generate_delete_branch_email()
{
echo " was $oldrev"
echo ""
echo $LOGEND
echo $LOGBEGIN
git show -s --pretty=oneline $oldrev
echo $LOGEND
}
@ -561,7 +560,7 @@ generate_delete_atag_email()
{
echo " was $oldrev"
echo ""
echo $LOGEND
echo $LOGBEGIN
git show -s --pretty=oneline $oldrev
echo $LOGEND
}
@ -626,7 +625,7 @@ generate_delete_general_email()
{
echo " was $oldrev"
echo ""
echo $LOGEND
echo $LOGBEGIN
git show -s --pretty=oneline $oldrev
echo $LOGEND
}


@ -2,6 +2,7 @@
#include "attr.h"
#include "run-command.h"
#include "quote.h"
#include "sigchain.h"
/*
* convert.c - convert a file when checking it out and checking it in.
@ -195,9 +196,17 @@ static int crlf_to_git(const char *path, const char *src, size_t len,
char *dst;
if (crlf_action == CRLF_BINARY ||
(crlf_action == CRLF_GUESS && auto_crlf == AUTO_CRLF_FALSE) || !len)
(crlf_action == CRLF_GUESS && auto_crlf == AUTO_CRLF_FALSE) ||
(src && !len))
return 0;
/*
* If we are doing a dry-run and have no source buffer, there is
* nothing to analyze; we must assume we would convert.
*/
if (!buf && !src)
return 1;
gather_stats(src, len, &stats);
if (crlf_action == CRLF_AUTO || crlf_action == CRLF_GUESS) {
@ -231,6 +240,13 @@ static int crlf_to_git(const char *path, const char *src, size_t len,
if (!stats.cr)
return 0;
/*
* At this point all of our source analysis is done, and we are sure we
* would convert. If we are in dry-run mode, we can give an answer.
*/
if (!buf)
return 1;
/* only grow if not in place */
if (strbuf_avail(buf) + buf->len < len)
strbuf_grow(buf, len - buf->len);
@ -360,12 +376,16 @@ static int filter_buffer(int in, int out, void *data)
if (start_command(&child_process))
return error("cannot fork to run external filter %s", params->cmd);
sigchain_push(SIGPIPE, SIG_IGN);
write_err = (write_in_full(child_process.in, params->src, params->size) < 0);
if (close(child_process.in))
write_err = 1;
if (write_err)
error("cannot feed the input to external filter %s", params->cmd);
sigchain_pop(SIGPIPE);
status = finish_command(&child_process);
if (status)
error("external filter %s failed %d", params->cmd, status);
@ -391,6 +411,9 @@ static int apply_filter(const char *path, const char *src, size_t len,
if (!cmd)
return 0;
if (!dst)
return 1;
memset(&async, 0, sizeof(async));
async.proc = filter_buffer;
async.data = &params;
@ -522,9 +545,12 @@ static int ident_to_git(const char *path, const char *src, size_t len,
{
char *dst, *dollar;
if (!ident || !count_ident(src, len))
if (!ident || (src && !count_ident(src, len)))
return 0;
if (!buf)
return 1;
/* only grow if not in place */
if (strbuf_avail(buf) + buf->len < len)
strbuf_grow(buf, len - buf->len);
@ -754,13 +780,13 @@ int convert_to_git(const char *path, const char *src, size_t len,
filter = ca.drv->clean;
ret |= apply_filter(path, src, len, dst, filter);
if (ret) {
if (ret && dst) {
src = dst->buf;
len = dst->len;
}
ca.crlf_action = input_crlf_action(ca.crlf_action, ca.eol_attr);
ret |= crlf_to_git(path, src, len, dst, ca.crlf_action, checksafe);
if (ret) {
if (ret && dst) {
src = dst->buf;
len = dst->len;
}


@ -40,6 +40,11 @@ extern int convert_to_working_tree(const char *path, const char *src,
size_t len, struct strbuf *dst);
extern int renormalize_buffer(const char *path, const char *src, size_t len,
struct strbuf *dst);
static inline int would_convert_to_git(const char *path, const char *src,
size_t len, enum safe_crlf checksafe)
{
return convert_to_git(path, src, len, NULL, checksafe);
}
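The inline helper above turns convert_to_git() into a dry run by passing a NULL destination buffer, so a caller can ask "would this path be converted?" without doing any work. A sketch of such a caller (hypothetical name, assumes git's internal headers; it mirrors the check that index_fd() in sha1_file.c gains in this merge):

/* Sketch: only stream straight from the working tree when no
 * checkin conversion (crlf/ident/filter) would take place. */
static int can_stream_blob(const char *path, unsigned long size,
			   unsigned long big_file_threshold)
{
	if (size <= big_file_threshold)
		return 0;	/* small enough to handle in core anyway */
	if (path && would_convert_to_git(path, NULL, 0, 0))
		return 0;	/* conversion needed; must go through core */
	return 1;
}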
/*****************************************************************
*


@ -3,7 +3,7 @@
*
* No surprises, and works with signed and unsigned chars.
*/
#include "cache.h"
#include "git-compat-util.h"
enum {
S = GIT_SPACE,

diff.c

@ -177,11 +177,8 @@ int git_diff_basic_config(const char *var, const char *value, void *cb)
return 0;
}
switch (userdiff_config(var, value)) {
case 0: break;
case -1: return -1;
default: return 0;
}
if (userdiff_config(var, value) < 0)
return -1;
if (!prefixcmp(var, "diff.color.") || !prefixcmp(var, "color.diff.")) {
int slot = parse_diff_color_slot(var, 11);
@ -1276,13 +1273,15 @@ const char mime_boundary_leader[] = "------------";
static int scale_linear(int it, int width, int max_change)
{
if (!it)
return 0;
/*
* make sure that at least one '-' is printed if there were deletions,
* and likewise for '+'.
* make sure that at least one '-' or '+' is printed if
* there is any change to this path. The easiest way is to
* scale linearly as if the allotted width is one column shorter
* than it is, and then add 1 to the result.
*/
if (max_change < 2)
return it;
return ((it - 1) * (width - 1) + max_change - 1) / (max_change - 1);
return 1 + (it * (width - 1) / max_change);
}
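With the new formula, any non-zero count yields at least one mark, and the largest change still fills the allotted width. A small standalone check (values illustrative):

#include <stdio.h>

/* Same formula as the rewritten scale_linear() above. */
static int scale(int it, int width, int max_change)
{
	return it ? 1 + (it * (width - 1) / max_change) : 0;
}

int main(void)
{
	/* a 40-column graph, largest change in the diffstat is 200 lines */
	printf("%d %d %d\n",
	       scale(1, 40, 200),	/* -> 1, still visible */
	       scale(10, 40, 200),	/* -> 2                */
	       scale(200, 40, 200));	/* -> 40, full width   */
	return 0;
}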
static void show_name(FILE *file,
@ -1322,6 +1321,55 @@ static void fill_print_name(struct diffstat_file *file)
file->print_name = pname;
}
int print_stat_summary(FILE *fp, int files, int insertions, int deletions)
{
struct strbuf sb = STRBUF_INIT;
int ret;
if (!files) {
assert(insertions == 0 && deletions == 0);
return fputs(_(" 0 files changed\n"), fp);
}
strbuf_addf(&sb,
Q_(" %d file changed", " %d files changed", files),
files);
/*
* For binary diff, the caller may want to print "x files
* changed" with insertions == 0 && deletions == 0.
*
* Not omitting "0 insertions(+), 0 deletions(-)" in this case
* is probably less confusing (i.e. skip over "2 files changed
* but nothing about added/removed lines? Is this a bug in Git?").
*/
if (insertions || deletions == 0) {
/*
* TRANSLATORS: "+" in (+) is a line addition marker;
* do not translate it.
*/
strbuf_addf(&sb,
Q_(", %d insertion(+)", ", %d insertions(+)",
insertions),
insertions);
}
if (deletions || insertions == 0) {
/*
* TRANSLATORS: "-" in (-) is a line removal marker;
* do not translate it.
*/
strbuf_addf(&sb,
Q_(", %d deletion(-)", ", %d deletions(-)",
deletions),
deletions);
}
strbuf_addch(&sb, '\n');
ret = fputs(sb.buf, fp);
strbuf_release(&sb);
return ret;
}
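For reference, what the helper prints for a few representative inputs, following the two conditionals above (values illustrative):

/*
 * print_stat_summary(stdout, 0, 0, 0);
 *     " 0 files changed\n"
 * print_stat_summary(stdout, 2, 5, 0);
 *     " 2 files changed, 5 insertions(+)\n"
 * print_stat_summary(stdout, 1, 0, 3);
 *     " 1 file changed, 3 deletions(-)\n"
 * print_stat_summary(stdout, 3, 0, 0);       (e.g. a binary-only diff)
 *     " 3 files changed, 0 insertions(+), 0 deletions(-)\n"
 */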
static void show_stats(struct diffstat_t *data, struct diff_options *options)
{
int i, len, add, del, adds = 0, dels = 0;
@ -1449,8 +1497,19 @@ static void show_stats(struct diffstat_t *data, struct diff_options *options)
dels += del;
if (width <= max_change) {
add = scale_linear(add, width, max_change);
del = scale_linear(del, width, max_change);
int total = add + del;
total = scale_linear(add + del, width, max_change);
if (total < 2 && add && del)
/* width >= 2 due to the sanity check */
total = 2;
if (add < del) {
add = scale_linear(add, width, max_change);
del = total - add;
} else {
del = scale_linear(del, width, max_change);
add = total - del;
}
}
fprintf(options->file, "%s", line_prefix);
show_name(options->file, prefix, name, len);
@ -1475,9 +1534,7 @@ static void show_stats(struct diffstat_t *data, struct diff_options *options)
extra_shown = 1;
}
fprintf(options->file, "%s", line_prefix);
fprintf(options->file,
" %d files changed, %d insertions(+), %d deletions(-)\n",
total_files, adds, dels);
print_stat_summary(options->file, total_files, adds, dels);
}
static void show_shortstats(struct diffstat_t *data, struct diff_options *options)
@ -1507,8 +1564,7 @@ static void show_shortstats(struct diffstat_t *data, struct diff_options *option
options->output_prefix_data);
fprintf(options->file, "%s", msg->buf);
}
fprintf(options->file, " %d files changed, %d insertions(+), %d deletions(-)\n",
total_files, adds, dels);
print_stat_summary(options->file, total_files, adds, dels);
}
static void show_numstat(struct diffstat_t *data, struct diff_options *options)

diff.h

@ -324,4 +324,7 @@ extern struct userdiff_driver *get_textconv(struct diff_filespec *one);
extern int parse_rename_score(const char **cp_p);
extern int print_stat_summary(FILE *fp, int files,
int insertions, int deletions);
#endif /* DIFF_H */


@ -202,7 +202,7 @@ check_patch_format () {
l1=
while test -z "$l1"
do
read l1
read l1 || break
done
read l2
read l3


@ -463,6 +463,8 @@ static inline int has_extension(const char *filename, const char *ext)
#undef isdigit
#undef isalpha
#undef isalnum
#undef islower
#undef isupper
#undef tolower
#undef toupper
extern unsigned char sane_ctype[256];
@ -478,6 +480,8 @@ extern unsigned char sane_ctype[256];
#define isdigit(x) sane_istest(x,GIT_DIGIT)
#define isalpha(x) sane_istest(x,GIT_ALPHA)
#define isalnum(x) sane_istest(x,GIT_ALPHA | GIT_DIGIT)
#define islower(x) sane_iscase(x, 1)
#define isupper(x) sane_iscase(x, 0)
#define is_glob_special(x) sane_istest(x,GIT_GLOB_SPECIAL)
#define is_regex_special(x) sane_istest(x,GIT_GLOB_SPECIAL | GIT_REGEX_SPECIAL)
#define tolower(x) sane_case((unsigned char)(x), 0x20)
@ -491,6 +495,17 @@ static inline int sane_case(int x, int high)
return x;
}
static inline int sane_iscase(int x, int is_lower)
{
if (!sane_istest(x, GIT_ALPHA))
return 0;
if (is_lower)
return (x & 0x20) != 0;
else
return (x & 0x20) == 0;
}
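The new islower()/isupper() are ASCII-only and never consult the locale, consistent with the rest of the sane ctype macros; for instance:

/*
 * islower('a')  -> non-zero       isupper('a')  -> 0
 * islower('A')  -> 0              isupper('A')  -> non-zero
 * islower('1')  -> 0              isupper('1')  -> 0
 * islower(0xe9) -> 0              isupper(0xe9) -> 0
 *     (0xe9 is Latin-1 'é': not GIT_ALPHA, so never "lower" here,
 *      unlike what a locale-aware ctype might report)
 */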
static inline int strtoul_ui(char const *s, int base, unsigned int *result)
{
unsigned long ul;


@ -40,7 +40,7 @@ test -f "$GIT_DIR/MERGE_HEAD" && die_merge
strategy_args= diffstat= no_commit= squash= no_ff= ff_only=
log_arg= verbosity= progress= recurse_submodules=
merge_args=
merge_args= edit=
curr_branch=$(git symbolic-ref -q HEAD)
curr_branch_short="${curr_branch#refs/heads/}"
rebase=$(git config --bool branch.$curr_branch_short.rebase)
@ -70,6 +70,10 @@ do
no_commit=--no-commit ;;
--c|--co|--com|--comm|--commi|--commit)
no_commit=--commit ;;
-e|--edit)
edit=--edit ;;
--no-edit)
edit=--no-edit ;;
--sq|--squ|--squa|--squas|--squash)
squash=--squash ;;
--no-sq|--no-squ|--no-squa|--no-squas|--no-squash)
@ -278,7 +282,7 @@ true)
eval="$eval --onto $merge_head ${oldremoteref:-$merge_head}"
;;
*)
eval="git-merge $diffstat $no_commit $squash $no_ff $ff_only"
eval="git-merge $diffstat $no_commit $edit $squash $no_ff $ff_only"
eval="$eval $log_arg $strategy_args $merge_args $verbosity $progress"
eval="$eval \"\$merge_name\" HEAD $merge_head"
;;


@ -90,10 +90,13 @@ call_merge () {
finish_rb_merge () {
move_to_original_branch
git notes copy --for-rewrite=rebase < "$state_dir"/rewritten
if test -x "$GIT_DIR"/hooks/post-rewrite &&
test -s "$state_dir"/rewritten; then
"$GIT_DIR"/hooks/post-rewrite rebase < "$state_dir"/rewritten
if test -s "$state_dir"/rewritten
then
git notes copy --for-rewrite=rebase <"$state_dir"/rewritten
if test -x "$GIT_DIR"/hooks/post-rewrite
then
"$GIT_DIR"/hooks/post-rewrite rebase <"$state_dir"/rewritten
fi
fi
rm -r "$state_dir"
say All done.


@ -1878,8 +1878,7 @@ sub cmt_sha2rev_batch {
sub working_head_info {
my ($head, $refs) = @_;
my @args = qw/log --no-color --no-decorate --first-parent
--pretty=medium/;
my @args = qw/rev-list --first-parent --pretty=medium/;
my ($fh, $ctx) = command_output_pipe(@args, $head);
my $hash;
my %max;
@ -2029,6 +2028,7 @@ use Carp qw/croak/;
use File::Path qw/mkpath/;
use File::Copy qw/copy/;
use IPC::Open3;
use Time::Local;
use Memoize; # core since 5.8.0, Jul 2002
use Memoize::Storable;
@ -3287,6 +3287,14 @@ sub get_untracked {
\@out;
}
sub get_tz {
# some systems don't handle %z or mishandle it, so be creative.
my $t = shift || time;
my $gm = timelocal(gmtime($t));
my $sign = qw( + + - )[ $t <=> $gm ];
return sprintf("%s%02d%02d", $sign, (gmtime(abs($t - $gm)))[2,1]);
}
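get_tz() derives the UTC offset by feeding gmtime() output back through timelocal(), avoiding %z entirely. The same trick in C, as a hedged sketch (DST-transition corner cases are glossed over, much as in the Perl helper):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Sketch: "+HHMM"/"-HHMM" for the local zone without strftime %z. */
static void utc_offset(time_t t, char *buf, size_t len)
{
	struct tm gm = *gmtime(&t);
	time_t gm_as_local;
	long off;

	gm.tm_isdst = -1;			/* let mktime() decide DST */
	gm_as_local = mktime(&gm);		/* UTC wall clock read as local */
	off = (long)difftime(t, gm_as_local);	/* local minus UTC, in seconds */

	snprintf(buf, len, "%c%02ld%02ld", off < 0 ? '-' : '+',
		 labs(off) / 3600, (labs(off) % 3600) / 60);
}

int main(void)
{
	char buf[8];

	utc_offset(time(NULL), buf, sizeof(buf));
	printf("%s\n", buf);	/* e.g. "+0800" on a UTC+8 box */
	return 0;
}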
# parse_svn_date(DATE)
# --------------------
# Given a date (in UTC) from Subversion, return a string in the format
@ -3319,8 +3327,7 @@ sub parse_svn_date {
delete $ENV{TZ};
}
my $our_TZ =
POSIX::strftime('%Z', $S, $M, $H, $d, $m - 1, $Y - 1900);
my $our_TZ = get_tz();
# This converts $epoch_in_UTC into our local timezone.
my ($sec, $min, $hour, $mday, $mon, $year,
@ -3920,7 +3927,7 @@ sub rebuild {
my ($base_rev, $head) = ($partial ? $self->rev_map_max_norebuild(1) :
(undef, undef));
my ($log, $ctx) =
command_output_pipe(qw/rev-list --pretty=raw --no-color --reverse/,
command_output_pipe(qw/rev-list --pretty=raw --reverse/,
($head ? "$head.." : "") . $self->refname,
'--');
my $metadata_url = $self->metadata_url;
@ -5130,7 +5137,7 @@ sub rmdirs {
}
sub open_or_add_dir {
my ($self, $full_path, $baton) = @_;
my ($self, $full_path, $baton, $deletions) = @_;
my $t = $self->{types}->{$full_path};
if (!defined $t) {
die "$full_path not known in r$self->{r} or we have a bug!\n";
@ -5139,7 +5146,7 @@ sub open_or_add_dir {
no warnings 'once';
# SVN::Node::none and SVN::Node::file are used only once,
# so we're shutting up Perl's warnings about them.
if ($t == $SVN::Node::none) {
if ($t == $SVN::Node::none || defined($deletions->{$full_path})) {
return $self->add_directory($full_path, $baton,
undef, -1, $self->{pool});
} elsif ($t == $SVN::Node::dir) {
@ -5154,17 +5161,18 @@ sub open_or_add_dir {
}
sub ensure_path {
my ($self, $path) = @_;
my ($self, $path, $deletions) = @_;
my $bat = $self->{bat};
my $repo_path = $self->repo_path($path);
return $bat->{''} unless (length $repo_path);
my @p = split m#/+#, $repo_path;
my $c = shift @p;
$bat->{$c} ||= $self->open_or_add_dir($c, $bat->{''});
$bat->{$c} ||= $self->open_or_add_dir($c, $bat->{''}, $deletions);
while (@p) {
my $c0 = $c;
$c .= '/' . shift @p;
$bat->{$c} ||= $self->open_or_add_dir($c, $bat->{$c0});
$bat->{$c} ||= $self->open_or_add_dir($c, $bat->{$c0}, $deletions);
}
return $bat->{$c};
}
@ -5221,9 +5229,9 @@ sub apply_autoprops {
}
sub A {
my ($self, $m) = @_;
my ($self, $m, $deletions) = @_;
my ($dir, $file) = split_path($m->{file_b});
my $pbat = $self->ensure_path($dir);
my $pbat = $self->ensure_path($dir, $deletions);
my $fbat = $self->add_file($self->repo_path($m->{file_b}), $pbat,
undef, -1);
print "\tA\t$m->{file_b}\n" unless $::_q;
@ -5233,9 +5241,9 @@ sub A {
}
sub C {
my ($self, $m) = @_;
my ($self, $m, $deletions) = @_;
my ($dir, $file) = split_path($m->{file_b});
my $pbat = $self->ensure_path($dir);
my $pbat = $self->ensure_path($dir, $deletions);
my $fbat = $self->add_file($self->repo_path($m->{file_b}), $pbat,
$self->url_path($m->{file_a}), $self->{r});
print "\tC\t$m->{file_a} => $m->{file_b}\n" unless $::_q;
@ -5252,9 +5260,9 @@ sub delete_entry {
}
sub R {
my ($self, $m) = @_;
my ($self, $m, $deletions) = @_;
my ($dir, $file) = split_path($m->{file_b});
my $pbat = $self->ensure_path($dir);
my $pbat = $self->ensure_path($dir, $deletions);
my $fbat = $self->add_file($self->repo_path($m->{file_b}), $pbat,
$self->url_path($m->{file_a}), $self->{r});
print "\tR\t$m->{file_a} => $m->{file_b}\n" unless $::_q;
@ -5263,14 +5271,14 @@ sub R {
$self->close_file($fbat,undef,$self->{pool});
($dir, $file) = split_path($m->{file_a});
$pbat = $self->ensure_path($dir);
$pbat = $self->ensure_path($dir, $deletions);
$self->delete_entry($m->{file_a}, $pbat);
}
sub M {
my ($self, $m) = @_;
my ($self, $m, $deletions) = @_;
my ($dir, $file) = split_path($m->{file_b});
my $pbat = $self->ensure_path($dir);
my $pbat = $self->ensure_path($dir, $deletions);
my $fbat = $self->open_file($self->repo_path($m->{file_b}),
$pbat,$self->{r},$self->{pool});
print "\t$m->{chg}\t$m->{file_b}\n" unless $::_q;
@ -5340,9 +5348,9 @@ sub chg_file {
}
sub D {
my ($self, $m) = @_;
my ($self, $m, $deletions) = @_;
my ($dir, $file) = split_path($m->{file_b});
my $pbat = $self->ensure_path($dir);
my $pbat = $self->ensure_path($dir, $deletions);
print "\tD\t$m->{file_b}\n" unless $::_q;
$self->delete_entry($m->{file_b}, $pbat);
}
@ -5374,11 +5382,19 @@ sub DESTROY {
sub apply_diff {
my ($self) = @_;
my $mods = $self->{mods};
my %o = ( D => 1, R => 0, C => -1, A => 3, M => 3, T => 3 );
my %o = ( D => 0, C => 1, R => 2, A => 3, M => 4, T => 5 );
my %deletions;
foreach my $m (@$mods) {
if ($m->{chg} eq "D") {
$deletions{$m->{file_b}} = 1;
}
}
foreach my $m (sort { $o{$a->{chg}} <=> $o{$b->{chg}} } @$mods) {
my $f = $m->{chg};
if (defined $o{$f}) {
$self->$f($m);
$self->$f($m, \%deletions);
} else {
fatal("Invalid change type: $f");
}
@ -5994,7 +6010,6 @@ package Git::SVN::Log;
use strict;
use warnings;
use POSIX qw/strftime/;
use Time::Local;
use constant commit_log_separator => ('-' x 72) . "\n";
use vars qw/$TZ $limit $color $pager $non_recursive $verbose $oneline
%rusers $show_commit $incremental/;
@ -6104,11 +6119,8 @@ sub run_pager {
}
sub format_svn_date {
# some systems don't handle %z or mishandle it, so be creative.
my $t = shift || time;
my $gm = timelocal(gmtime($t));
my $sign = qw( + + - )[ $t <=> $gm ];
my $gmoff = sprintf("%s%02d%02d", $sign, (gmtime(abs($t - $gm)))[2,1]);
my $gmoff = Git::SVN::get_tz($t);
return strftime("%Y-%m-%d %H:%M:%S $gmoff (%a, %d %b %Y)", localtime($t));
}


@ -52,7 +52,7 @@ sub evaluate_uri {
# as base URL.
# Therefore, if we needed to strip PATH_INFO, then we know that we have
# to build the base URL ourselves:
our $path_info = $ENV{"PATH_INFO"};
our $path_info = decode_utf8($ENV{"PATH_INFO"});
if ($path_info) {
if ($my_url =~ s,\Q$path_info\E$,, &&
$my_uri =~ s,\Q$path_info\E$,, &&
@ -817,9 +817,9 @@ sub evaluate_query_params {
while (my ($name, $symbol) = each %cgi_param_mapping) {
if ($symbol eq 'opt') {
$input_params{$name} = [ $cgi->param($symbol) ];
$input_params{$name} = [ map { decode_utf8($_) } $cgi->param($symbol) ];
} else {
$input_params{$name} = $cgi->param($symbol);
$input_params{$name} = decode_utf8($cgi->param($symbol));
}
}
}
@ -2775,7 +2775,7 @@ sub git_populate_project_tagcloud {
}
my $cloud;
my $matched = $cgi->param('by_tag');
my $matched = $input_params{'ctag'};
if (eval { require HTML::TagCloud; 1; }) {
$cloud = HTML::TagCloud->new;
foreach my $ctag (sort keys %ctags_lc) {
@ -2987,6 +2987,10 @@ sub search_projects_list {
return @$projlist
unless ($tagfilter || $searchtext);
# searching projects requires filling to be run before it;
fill_project_list_info($projlist,
$tagfilter ? 'ctags' : (),
$searchtext ? ('path', 'descr') : ());
my @projects;
PROJECT:
foreach my $pr (@$projlist) {
@ -3744,7 +3748,7 @@ sub get_page_title {
unless (defined $project) {
if (defined $project_filter) {
$title .= " - " . to_utf8($project_filter);
$title .= " - projects in '" . esc_path($project_filter) . "'";
}
return $title;
}
@ -3906,7 +3910,7 @@ sub print_search_form {
-values => ['commit', 'grep', 'author', 'committer', 'pickaxe']) .
$cgi->sup($cgi->a({-href => href(action=>"search_help")}, "?")) .
" search:\n",
$cgi->textfield(-name => "s", -value => $searchtext) . "\n" .
$cgi->textfield(-name => "s", -value => $searchtext, -override => 1) . "\n" .
"<span title=\"Extended regular expression\">" .
$cgi->checkbox(-name => 'sr', -value => 1, -label => 're',
-checked => $search_use_regexp) .
@ -5188,35 +5192,70 @@ sub git_project_search_form {
print "</div>\n";
}
# fills project list info (age, description, owner, category, forks)
# entry for given @keys needs filling if at least one of keys in list
# is not present in %$project_info
sub project_info_needs_filling {
my ($project_info, @keys) = @_;
# return List::MoreUtils::any { !exists $project_info->{$_} } @keys;
foreach my $key (@keys) {
if (!exists $project_info->{$key}) {
return 1;
}
}
return;
}
# fills project list info (age, description, owner, category, forks, etc.)
# for each project in the list, removing invalid projects from
# returned list
# returned list, or fill only specified info.
#
# Invalid projects are removed from the returned list if and only if you
# ask for 'age' or 'age_string' to be filled, because those are the only
# fields that unconditionally run a git command requiring the repository,
# and therefore always detect an invalid project repository.
#
# USAGE:
# * fill_project_list_info(\@project_list, 'descr_long', 'ctags')
# ensures that 'descr_long' and 'ctags' fields are filled
# * @project_list = fill_project_list_info(\@project_list)
# ensures that all fields are filled (and invalid projects removed)
#
# NOTE: modifies $projlist, but does not remove entries from it
sub fill_project_list_info {
my $projlist = shift;
my ($projlist, @wanted_keys) = @_;
my @projects;
my $filter_set = sub { return @_; };
if (@wanted_keys) {
my %wanted_keys = map { $_ => 1 } @wanted_keys;
$filter_set = sub { return grep { $wanted_keys{$_} } @_; };
}
my $show_ctags = gitweb_check_feature('ctags');
PROJECT:
foreach my $pr (@$projlist) {
my (@activity) = git_get_last_activity($pr->{'path'});
unless (@activity) {
next PROJECT;
if (project_info_needs_filling($pr, $filter_set->('age', 'age_string'))) {
my (@activity) = git_get_last_activity($pr->{'path'});
unless (@activity) {
next PROJECT;
}
($pr->{'age'}, $pr->{'age_string'}) = @activity;
}
($pr->{'age'}, $pr->{'age_string'}) = @activity;
if (!defined $pr->{'descr'}) {
if (project_info_needs_filling($pr, $filter_set->('descr', 'descr_long'))) {
my $descr = git_get_project_description($pr->{'path'}) || "";
$descr = to_utf8($descr);
$pr->{'descr_long'} = $descr;
$pr->{'descr'} = chop_str($descr, $projects_list_description_width, 5);
}
if (!defined $pr->{'owner'}) {
if (project_info_needs_filling($pr, $filter_set->('owner'))) {
$pr->{'owner'} = git_get_project_owner("$pr->{'path'}") || "";
}
if ($show_ctags) {
if ($show_ctags &&
project_info_needs_filling($pr, $filter_set->('ctags'))) {
$pr->{'ctags'} = git_get_project_ctags($pr->{'path'});
}
if ($projects_list_group_categories && !defined $pr->{'category'}) {
if ($projects_list_group_categories &&
project_info_needs_filling($pr, $filter_set->('category'))) {
my $cat = git_get_project_category($pr->{'path'}) ||
$project_list_default_category;
$pr->{'category'} = to_utf8($cat);
@ -5345,19 +5384,20 @@ sub git_project_list_body {
my $check_forks = gitweb_check_feature('forks');
my $show_ctags = gitweb_check_feature('ctags');
my $tagfilter = $show_ctags ? $cgi->param('by_tag') : undef;
my $tagfilter = $show_ctags ? $input_params{'ctag'} : undef;
$check_forks = undef
if ($tagfilter || $searchtext);
# filtering out forks before filling info allows to do less work
@projects = filter_forks_from_projects_list(\@projects)
if ($check_forks);
@projects = fill_project_list_info(\@projects);
# searching projects require filling to be run before it
# search_projects_list pre-fills required info
@projects = search_projects_list(\@projects,
'searchtext' => $searchtext,
'tagfilter' => $tagfilter)
if ($tagfilter || $searchtext);
# fill the rest
@projects = fill_project_list_info(\@projects);
$order ||= $default_projects_order;
$from = 0 unless defined $from;
@ -5633,7 +5673,7 @@ sub git_tags_body {
sub git_heads_body {
# uses global variable $project
my ($headlist, $head, $from, $to, $extra) = @_;
my ($headlist, $head_at, $from, $to, $extra) = @_;
$from = 0 unless defined $from;
$to = $#{$headlist} if (!defined $to || $#{$headlist} < $to);
@ -5642,7 +5682,7 @@ sub git_heads_body {
for (my $i = $from; $i <= $to; $i++) {
my $entry = $headlist->[$i];
my %ref = %$entry;
my $curr = $ref{'id'} eq $head;
my $curr = defined $head_at && $ref{'id'} eq $head_at;
if ($alternate) {
print "<tr class=\"dark\">\n";
} else {
@ -5915,9 +5955,10 @@ sub git_search_files {
my $alternate = 1;
my $matches = 0;
my $lastfile = '';
my $file_href;
while (my $line = <$fd>) {
chomp $line;
my ($file, $file_href, $lno, $ltext, $binary);
my ($file, $lno, $ltext, $binary);
last if ($matches++ > 1000);
if ($line =~ /^Binary file (.+) matches$/) {
$file = $1;
@ -6261,7 +6302,7 @@ sub git_tag {
sub git_blame_common {
my $format = shift || 'porcelain';
if ($format eq 'porcelain' && $cgi->param('js')) {
if ($format eq 'porcelain' && $input_params{'javascript'}) {
$format = 'incremental';
$action = 'blame_incremental'; # for page title etc
}

grep.c

@ -79,7 +79,7 @@ static void compile_pcre_regexp(struct grep_pat *p, const struct grep_opt *opt)
{
const char *error;
int erroffset;
int options = 0;
int options = PCRE_MULTILINE;
if (opt->ignore_case)
options |= PCRE_CASELESS;
@ -807,38 +807,43 @@ static void show_line(struct grep_opt *opt, char *bol, char *eol,
}
#ifndef NO_PTHREADS
int grep_use_locks;
/*
* This lock protects access to the gitattributes machinery, which is
* not thread-safe.
*/
pthread_mutex_t grep_attr_mutex;
static inline void grep_attr_lock(struct grep_opt *opt)
static inline void grep_attr_lock(void)
{
if (opt->use_threads)
if (grep_use_locks)
pthread_mutex_lock(&grep_attr_mutex);
}
static inline void grep_attr_unlock(struct grep_opt *opt)
static inline void grep_attr_unlock(void)
{
if (opt->use_threads)
if (grep_use_locks)
pthread_mutex_unlock(&grep_attr_mutex);
}
/*
* Same as git_attr_mutex, but protecting the thread-unsafe object db access.
*/
pthread_mutex_t grep_read_mutex;
#else
#define grep_attr_lock(opt)
#define grep_attr_unlock(opt)
#define grep_attr_lock()
#define grep_attr_unlock()
#endif
static int match_funcname(struct grep_opt *opt, const char *name, char *bol, char *eol)
static int match_funcname(struct grep_opt *opt, struct grep_source *gs, char *bol, char *eol)
{
xdemitconf_t *xecfg = opt->priv;
if (xecfg && !xecfg->find_func) {
struct userdiff_driver *drv;
grep_attr_lock(opt);
drv = userdiff_find_by_path(name);
grep_attr_unlock(opt);
if (drv && drv->funcname.pattern) {
const struct userdiff_funcname *pe = &drv->funcname;
grep_source_load_driver(gs);
if (gs->driver->funcname.pattern) {
const struct userdiff_funcname *pe = &gs->driver->funcname;
xdiff_set_find_func(xecfg, pe->pattern, pe->cflags);
} else {
xecfg = opt->priv = NULL;
@ -858,33 +863,33 @@ static int match_funcname(struct grep_opt *opt, const char *name, char *bol, cha
return 0;
}
static void show_funcname_line(struct grep_opt *opt, const char *name,
char *buf, char *bol, unsigned lno)
static void show_funcname_line(struct grep_opt *opt, struct grep_source *gs,
char *bol, unsigned lno)
{
while (bol > buf) {
while (bol > gs->buf) {
char *eol = --bol;
while (bol > buf && bol[-1] != '\n')
while (bol > gs->buf && bol[-1] != '\n')
bol--;
lno--;
if (lno <= opt->last_shown)
break;
if (match_funcname(opt, name, bol, eol)) {
show_line(opt, bol, eol, name, lno, '=');
if (match_funcname(opt, gs, bol, eol)) {
show_line(opt, bol, eol, gs->name, lno, '=');
break;
}
}
}
static void show_pre_context(struct grep_opt *opt, const char *name, char *buf,
static void show_pre_context(struct grep_opt *opt, struct grep_source *gs,
char *bol, char *end, unsigned lno)
{
unsigned cur = lno, from = 1, funcname_lno = 0;
int funcname_needed = !!opt->funcname;
if (opt->funcbody && !match_funcname(opt, name, bol, end))
if (opt->funcbody && !match_funcname(opt, gs, bol, end))
funcname_needed = 2;
if (opt->pre_context < lno)
@ -893,14 +898,14 @@ static void show_pre_context(struct grep_opt *opt, const char *name, char *buf,
from = opt->last_shown + 1;
/* Rewind. */
while (bol > buf &&
while (bol > gs->buf &&
cur > (funcname_needed == 2 ? opt->last_shown + 1 : from)) {
char *eol = --bol;
while (bol > buf && bol[-1] != '\n')
while (bol > gs->buf && bol[-1] != '\n')
bol--;
cur--;
if (funcname_needed && match_funcname(opt, name, bol, eol)) {
if (funcname_needed && match_funcname(opt, gs, bol, eol)) {
funcname_lno = cur;
funcname_needed = 0;
}
@ -908,7 +913,7 @@ static void show_pre_context(struct grep_opt *opt, const char *name, char *buf,
/* We need to look even further back to find a function signature. */
if (opt->funcname && funcname_needed)
show_funcname_line(opt, name, buf, bol, cur);
show_funcname_line(opt, gs, bol, cur);
/* Back forward. */
while (cur < lno) {
@ -916,7 +921,7 @@ static void show_pre_context(struct grep_opt *opt, const char *name, char *buf,
while (*eol != '\n')
eol++;
show_line(opt, bol, eol, name, cur, sign);
show_line(opt, bol, eol, gs->name, cur, sign);
bol = eol + 1;
cur++;
}
@ -983,11 +988,10 @@ static void std_output(struct grep_opt *opt, const void *buf, size_t size)
fwrite(buf, size, 1, stdout);
}
static int grep_buffer_1(struct grep_opt *opt, const char *name,
char *buf, unsigned long size, int collect_hits)
static int grep_source_1(struct grep_opt *opt, struct grep_source *gs, int collect_hits)
{
char *bol = buf;
unsigned long left = size;
char *bol;
unsigned long left;
unsigned lno = 1;
unsigned last_hit = 0;
int binary_match_only = 0;
@ -1017,11 +1021,11 @@ static int grep_buffer_1(struct grep_opt *opt, const char *name,
switch (opt->binary) {
case GREP_BINARY_DEFAULT:
if (buffer_is_binary(buf, size))
if (grep_source_is_binary(gs))
binary_match_only = 1;
break;
case GREP_BINARY_NOMATCH:
if (buffer_is_binary(buf, size))
if (grep_source_is_binary(gs))
return 0; /* Assume unmatch */
break;
case GREP_BINARY_TEXT:
@ -1035,6 +1039,11 @@ static int grep_buffer_1(struct grep_opt *opt, const char *name,
try_lookahead = should_lookahead(opt);
if (grep_source_load(gs) < 0)
return 0;
bol = gs->buf;
left = gs->size;
while (left) {
char *eol, ch;
int hit;
@ -1083,14 +1092,14 @@ static int grep_buffer_1(struct grep_opt *opt, const char *name,
if (opt->status_only)
return 1;
if (opt->name_only) {
show_name(opt, name);
show_name(opt, gs->name);
return 1;
}
if (opt->count)
goto next_line;
if (binary_match_only) {
opt->output(opt, "Binary file ", 12);
output_color(opt, name, strlen(name),
output_color(opt, gs->name, strlen(gs->name),
opt->color_filename);
opt->output(opt, " matches\n", 9);
return 1;
@ -1099,23 +1108,23 @@ static int grep_buffer_1(struct grep_opt *opt, const char *name,
* pre-context lines, we would need to show them.
*/
if (opt->pre_context || opt->funcbody)
show_pre_context(opt, name, buf, bol, eol, lno);
show_pre_context(opt, gs, bol, eol, lno);
else if (opt->funcname)
show_funcname_line(opt, name, buf, bol, lno);
show_line(opt, bol, eol, name, lno, ':');
show_funcname_line(opt, gs, bol, lno);
show_line(opt, bol, eol, gs->name, lno, ':');
last_hit = lno;
if (opt->funcbody)
show_function = 1;
goto next_line;
}
if (show_function && match_funcname(opt, name, bol, eol))
if (show_function && match_funcname(opt, gs, bol, eol))
show_function = 0;
if (show_function ||
(last_hit && lno <= last_hit + opt->post_context)) {
/* If the last hit is within the post context,
* we need to show this line.
*/
show_line(opt, bol, eol, name, lno, '-');
show_line(opt, bol, eol, gs->name, lno, '-');
}
next_line:
@ -1133,7 +1142,7 @@ static int grep_buffer_1(struct grep_opt *opt, const char *name,
return 0;
if (opt->unmatch_name_only) {
/* We did not see any hit, so we want to show this */
show_name(opt, name);
show_name(opt, gs->name);
return 1;
}
@ -1147,7 +1156,7 @@ static int grep_buffer_1(struct grep_opt *opt, const char *name,
*/
if (opt->count && count) {
char buf[32];
output_color(opt, name, strlen(name), opt->color_filename);
output_color(opt, gs->name, strlen(gs->name), opt->color_filename);
output_sep(opt, ':');
snprintf(buf, sizeof(buf), "%u\n", count);
opt->output(opt, buf, strlen(buf));
@ -1182,23 +1191,174 @@ static int chk_hit_marker(struct grep_expr *x)
}
}
int grep_buffer(struct grep_opt *opt, const char *name, char *buf, unsigned long size)
int grep_source(struct grep_opt *opt, struct grep_source *gs)
{
/*
* we do not have to do the two-pass grep when we do not check
* buffer-wide "all-match".
*/
if (!opt->all_match)
return grep_buffer_1(opt, name, buf, size, 0);
return grep_source_1(opt, gs, 0);
/* Otherwise the toplevel "or" terms hit a bit differently.
* We first clear hit markers from them.
*/
clr_hit_marker(opt->pattern_expression);
grep_buffer_1(opt, name, buf, size, 1);
grep_source_1(opt, gs, 1);
if (!chk_hit_marker(opt->pattern_expression))
return 0;
return grep_buffer_1(opt, name, buf, size, 0);
return grep_source_1(opt, gs, 0);
}
int grep_buffer(struct grep_opt *opt, char *buf, unsigned long size)
{
struct grep_source gs;
int r;
grep_source_init(&gs, GREP_SOURCE_BUF, NULL, NULL);
gs.buf = buf;
gs.size = size;
r = grep_source(opt, &gs);
grep_source_clear(&gs);
return r;
}
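grep_buffer() is now a thin wrapper over the new grep_source abstraction; a caller that has a file (or a blob SHA-1) hands grep a grep_source, and the data is loaded, and the userdiff driver looked up, only on demand. A usage sketch (assumes an already compiled struct grep_opt):

/* Sketch: grep one working-tree file through the grep_source API. */
static int grep_one_file(struct grep_opt *opt, const char *path)
{
	struct grep_source gs;
	int hit;

	grep_source_init(&gs, GREP_SOURCE_FILE, path, path);
	hit = grep_source(opt, &gs);	/* file contents loaded lazily */
	grep_source_clear(&gs);		/* frees name, identifier and buffer */
	return hit;
}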
void grep_source_init(struct grep_source *gs, enum grep_source_type type,
const char *name, const void *identifier)
{
gs->type = type;
gs->name = name ? xstrdup(name) : NULL;
gs->buf = NULL;
gs->size = 0;
gs->driver = NULL;
switch (type) {
case GREP_SOURCE_FILE:
gs->identifier = xstrdup(identifier);
break;
case GREP_SOURCE_SHA1:
gs->identifier = xmalloc(20);
memcpy(gs->identifier, identifier, 20);
break;
case GREP_SOURCE_BUF:
gs->identifier = NULL;
}
}
void grep_source_clear(struct grep_source *gs)
{
free(gs->name);
gs->name = NULL;
free(gs->identifier);
gs->identifier = NULL;
grep_source_clear_data(gs);
}
void grep_source_clear_data(struct grep_source *gs)
{
switch (gs->type) {
case GREP_SOURCE_FILE:
case GREP_SOURCE_SHA1:
free(gs->buf);
gs->buf = NULL;
gs->size = 0;
break;
case GREP_SOURCE_BUF:
/* leave user-provided buf intact */
break;
}
}
static int grep_source_load_sha1(struct grep_source *gs)
{
enum object_type type;
grep_read_lock();
gs->buf = read_sha1_file(gs->identifier, &type, &gs->size);
grep_read_unlock();
if (!gs->buf)
return error(_("'%s': unable to read %s"),
gs->name,
sha1_to_hex(gs->identifier));
return 0;
}
static int grep_source_load_file(struct grep_source *gs)
{
const char *filename = gs->identifier;
struct stat st;
char *data;
size_t size;
int i;
if (lstat(filename, &st) < 0) {
err_ret:
if (errno != ENOENT)
error(_("'%s': %s"), filename, strerror(errno));
return -1;
}
if (!S_ISREG(st.st_mode))
return -1;
size = xsize_t(st.st_size);
i = open(filename, O_RDONLY);
if (i < 0)
goto err_ret;
data = xmalloc(size + 1);
if (st.st_size != read_in_full(i, data, size)) {
error(_("'%s': short read %s"), filename, strerror(errno));
close(i);
free(data);
return -1;
}
close(i);
data[size] = 0;
gs->buf = data;
gs->size = size;
return 0;
}
int grep_source_load(struct grep_source *gs)
{
if (gs->buf)
return 0;
switch (gs->type) {
case GREP_SOURCE_FILE:
return grep_source_load_file(gs);
case GREP_SOURCE_SHA1:
return grep_source_load_sha1(gs);
case GREP_SOURCE_BUF:
return gs->buf ? 0 : -1;
}
die("BUG: invalid grep_source type");
}
void grep_source_load_driver(struct grep_source *gs)
{
if (gs->driver)
return;
grep_attr_lock();
gs->driver = userdiff_find_by_path(gs->name);
if (!gs->driver)
gs->driver = userdiff_find_by_name("default");
grep_attr_unlock();
}
int grep_source_is_binary(struct grep_source *gs)
{
grep_source_load_driver(gs);
if (gs->driver->binary != -1)
return gs->driver->binary;
if (!grep_source_load(gs))
return buffer_is_binary(gs->buf, gs->size);
return 0;
}

grep.h

@ -9,6 +9,7 @@ typedef int pcre_extra;
#endif
#include "kwset.h"
#include "thread-utils.h"
#include "userdiff.h"
enum grep_pat_token {
GREP_PATTERN,
@ -116,7 +117,6 @@ struct grep_opt {
int show_hunk_mark;
int file_break;
int heading;
int use_threads;
void *priv;
void (*output)(struct grep_opt *opt, const void *data, size_t size);
@ -128,7 +128,33 @@ extern void append_grep_pattern(struct grep_opt *opt, const char *pat, const cha
extern void append_header_grep_pattern(struct grep_opt *, enum grep_header_field, const char *);
extern void compile_grep_patterns(struct grep_opt *opt);
extern void free_grep_patterns(struct grep_opt *opt);
extern int grep_buffer(struct grep_opt *opt, const char *name, char *buf, unsigned long size);
extern int grep_buffer(struct grep_opt *opt, char *buf, unsigned long size);
struct grep_source {
char *name;
enum grep_source_type {
GREP_SOURCE_SHA1,
GREP_SOURCE_FILE,
GREP_SOURCE_BUF,
} type;
void *identifier;
char *buf;
unsigned long size;
struct userdiff_driver *driver;
};
void grep_source_init(struct grep_source *gs, enum grep_source_type type,
const char *name, const void *identifier);
int grep_source_load(struct grep_source *gs);
void grep_source_clear_data(struct grep_source *gs);
void grep_source_clear(struct grep_source *gs);
void grep_source_load_driver(struct grep_source *gs);
int grep_source_is_binary(struct grep_source *gs);
int grep_source(struct grep_opt *opt, struct grep_source *gs);
extern struct grep_opt *grep_opt_dup(const struct grep_opt *opt);
extern int grep_threads_ok(const struct grep_opt *opt);
@ -138,7 +164,25 @@ extern int grep_threads_ok(const struct grep_opt *opt);
* Mutex used around access to the attributes machinery if
* opt->use_threads. Must be initialized/destroyed by callers!
*/
extern int grep_use_locks;
extern pthread_mutex_t grep_attr_mutex;
extern pthread_mutex_t grep_read_mutex;
static inline void grep_read_lock(void)
{
if (grep_use_locks)
pthread_mutex_lock(&grep_read_mutex);
}
static inline void grep_read_unlock(void)
{
if (grep_use_locks)
pthread_mutex_unlock(&grep_read_mutex);
}
#else
#define grep_read_lock()
#define grep_read_unlock()
#endif
#endif

help.c

@ -5,28 +5,6 @@
#include "help.h"
#include "common-cmds.h"
/* most GUI terminals set COLUMNS (although some don't export it) */
static int term_columns(void)
{
char *col_string = getenv("COLUMNS");
int n_cols;
if (col_string && (n_cols = atoi(col_string)) > 0)
return n_cols;
#ifdef TIOCGWINSZ
{
struct winsize ws;
if (!ioctl(1, TIOCGWINSZ, &ws)) {
if (ws.ws_col)
return ws.ws_col;
}
}
#endif
return 80;
}
void add_cmdname(struct cmdnames *cmds, const char *name, int len)
{
struct cmdname *ent = xmalloc(sizeof(*ent) + len + 1);


@ -190,27 +190,27 @@ void clear_mailmap(struct string_list *map)
int map_user(struct string_list *map,
char *email, int maxlen_email, char *name, int maxlen_name)
{
char *p;
char *end_of_email;
struct string_list_item *item;
struct mailmap_entry *me;
char buf[1024], *mailbuf;
int i;
/* figure out space requirement for email */
p = strchr(email, '>');
if (!p) {
end_of_email = strchr(email, '>');
if (!end_of_email) {
/* email passed in might not be wrapped in <>, but end with a \0 */
p = memchr(email, '\0', maxlen_email);
if (!p)
end_of_email = memchr(email, '\0', maxlen_email);
if (!end_of_email)
return 0;
}
if (p - email + 1 < sizeof(buf))
if (end_of_email - email + 1 < sizeof(buf))
mailbuf = buf;
else
mailbuf = xmalloc(p - email + 1);
mailbuf = xmalloc(end_of_email - email + 1);
/* downcase the email address */
for (i = 0; i < p - email; i++)
for (i = 0; i < end_of_email - email; i++)
mailbuf[i] = tolower(email[i]);
mailbuf[i] = 0;
@ -236,6 +236,8 @@ int map_user(struct string_list *map,
}
if (maxlen_email && mi->email)
strlcpy(email, mi->email, maxlen_email);
else
*end_of_email = '\0';
if (maxlen_name && mi->name)
strlcpy(name, mi->name, maxlen_name);
debug_mm("map_user: to '%s' <%s>\n", name, mi->email ? mi->email : "");


@ -264,7 +264,7 @@ struct tree *write_tree_from_memory(struct merge_options *o)
if (!cache_tree_fully_valid(active_cache_tree) &&
cache_tree_update(active_cache_tree,
active_cache, active_nr, 0, 0, 0) < 0)
active_cache, active_nr, 0) < 0)
die("error building trees");
result = lookup_tree(active_cache_tree->sha1);


@ -23,7 +23,7 @@ check_meld_for_output_version () {
meld_path="$(git config mergetool.meld.path)"
meld_path="${meld_path:-meld}"
if "$meld_path" --output /dev/null --help >/dev/null 2>&1
if "$meld_path" --help 2>&1 | grep -e --output >/dev/null
then
meld_has_output_option=true
else

pager.c

@ -76,6 +76,12 @@ void setup_pager(void)
if (!pager)
return;
/*
* force computing the width of the terminal before we redirect
* the standard output to the pager.
*/
(void) term_columns();
setenv("GIT_PAGER_IN_USE", "true", 1);
/* spawn the pager */
@ -110,3 +116,46 @@ int pager_in_use(void)
env = getenv("GIT_PAGER_IN_USE");
return env ? git_config_bool("GIT_PAGER_IN_USE", env) : 0;
}
/*
* Return cached value (if set) or $COLUMNS environment variable (if
* set and positive) or ioctl(1, TIOCGWINSZ).ws_col (if positive),
* and default to 80 if all else fails.
*/
int term_columns(void)
{
static int term_columns_at_startup;
char *col_string;
int n_cols;
if (term_columns_at_startup)
return term_columns_at_startup;
term_columns_at_startup = 80;
col_string = getenv("COLUMNS");
if (col_string && (n_cols = atoi(col_string)) > 0)
term_columns_at_startup = n_cols;
#ifdef TIOCGWINSZ
else {
struct winsize ws;
if (!ioctl(1, TIOCGWINSZ, &ws) && ws.ws_col)
term_columns_at_startup = ws.ws_col;
}
#endif
return term_columns_at_startup;
}
/*
* How many columns do we need to show this number in decimal?
*/
int decimal_width(int number)
{
int i, width;
for (width = 1, i = 10; i <= number; width++)
i *= 10;
return width;
}
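decimal_width() pairs naturally with term_columns(): size the numeric column for the largest value, and give the rest of the cached terminal width to everything else. A hypothetical caller, for illustration:

/* Sketch: how a caller might combine the two helpers. */
static void plan_layout(int largest_count)
{
	int number_cols = decimal_width(largest_count);	/* 4 for 1234 */
	int rest = term_columns() - number_cols - 1;	/* 80-col default */

	printf("%d columns for counters, %d left over\n", number_cols, rest);
}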

path.c

@ -293,7 +293,7 @@ const char *enter_repo(const char *path, int strict)
if (!strict) {
static const char *suffix[] = {
".git/.git", "/.git", ".git", "", NULL,
"/.git", "", ".git/.git", ".git", NULL,
};
const char *gitfile;
int len = strlen(path);
@ -324,8 +324,11 @@ const char *enter_repo(const char *path, int strict)
return NULL;
len = strlen(used_path);
for (i = 0; suffix[i]; i++) {
struct stat st;
strcpy(used_path + len, suffix[i]);
if (!access(used_path, F_OK)) {
if (!stat(used_path, &st) &&
(S_ISREG(st.st_mode) ||
(S_ISDIR(st.st_mode) && is_git_directory(used_path)))) {
strcat(validated_path, suffix[i]);
break;
}


@ -9,6 +9,7 @@ static char *do_askpass(const char *cmd, const char *prompt)
struct child_process pass;
const char *args[3];
static struct strbuf buffer = STRBUF_INIT;
int err = 0;
args[0] = cmd;
args[1] = prompt;
@ -19,25 +20,30 @@ static char *do_askpass(const char *cmd, const char *prompt)
pass.out = -1;
if (start_command(&pass))
exit(1);
return NULL;
strbuf_reset(&buffer);
if (strbuf_read(&buffer, pass.out, 20) < 0)
die("failed to get '%s' from %s\n", prompt, cmd);
err = 1;
close(pass.out);
if (finish_command(&pass))
exit(1);
err = 1;
if (err) {
error("unable to read askpass response from '%s'", cmd);
strbuf_release(&buffer);
return NULL;
}
strbuf_setlen(&buffer, strcspn(buffer.buf, "\r\n"));
return buffer.buf;
return strbuf_detach(&buffer, NULL);
}
char *git_prompt(const char *prompt, int flags)
{
char *r;
char *r = NULL;
if (flags & PROMPT_ASKPASS) {
const char *askpass;
@ -48,12 +54,15 @@ char *git_prompt(const char *prompt, int flags)
if (!askpass)
askpass = getenv("SSH_ASKPASS");
if (askpass && *askpass)
return do_askpass(askpass, prompt);
r = do_askpass(askpass, prompt);
}
r = git_terminal_prompt(prompt, flags & PROMPT_ECHO);
if (!r)
die_errno("could not read '%s'", prompt);
r = git_terminal_prompt(prompt, flags & PROMPT_ECHO);
if (!r) {
/* prompts already contain ": " at the end */
die("could not read %s%s", prompt, strerror(errno));
}
return r;
}


@ -1120,11 +1120,16 @@ int refresh_index(struct index_state *istate, unsigned int flags, const char **p
struct cache_entry *ce, *new;
int cache_errno = 0;
int changed = 0;
int filtered = 0;
ce = istate->cache[i];
if (ignore_submodules && S_ISGITLINK(ce->ce_mode))
continue;
if (pathspec &&
!match_pathspec(pathspec, ce->name, strlen(ce->name), 0, seen))
filtered = 1;
if (ce_stage(ce)) {
while ((i < istate->cache_nr) &&
! strcmp(istate->cache[i]->name, ce->name))
@ -1132,12 +1137,14 @@ int refresh_index(struct index_state *istate, unsigned int flags, const char **p
i--;
if (allow_unmerged)
continue;
show_file(unmerged_fmt, ce->name, in_porcelain, &first, header_msg);
if (!filtered)
show_file(unmerged_fmt, ce->name, in_porcelain,
&first, header_msg);
has_errors = 1;
continue;
}
if (pathspec && !match_pathspec(pathspec, ce->name, strlen(ce->name), 0, seen))
if (filtered)
continue;
new = refresh_cache_ent(istate, ce, options, &cache_errno, &changed);

refs.c

@ -183,12 +183,6 @@ static struct ref_cache {
static struct ref_entry *current_ref;
/*
* Never call sort_ref_array() on the extra_refs, because it is
* allowed to contain entries with duplicate names.
*/
static struct ref_array extra_refs;
static void clear_ref_array(struct ref_array *array)
{
int i;
@ -289,16 +283,6 @@ static void read_packed_refs(FILE *f, struct ref_array *array)
}
}
void add_extra_ref(const char *refname, const unsigned char *sha1, int flag)
{
add_ref(&extra_refs, create_ref_entry(refname, sha1, flag, 0));
}
void clear_extra_refs(void)
{
clear_ref_array(&extra_refs);
}
static struct ref_array *get_packed_refs(struct ref_cache *refs)
{
if (!refs->did_packed) {
@ -733,16 +717,11 @@ fallback:
static int do_for_each_ref(const char *submodule, const char *base, each_ref_fn fn,
int trim, int flags, void *cb_data)
{
int retval = 0, i, p = 0, l = 0;
int retval = 0, p = 0, l = 0;
struct ref_cache *refs = get_ref_cache(submodule);
struct ref_array *packed = get_packed_refs(refs);
struct ref_array *loose = get_loose_refs(refs);
struct ref_array *extra = &extra_refs;
for (i = 0; i < extra->nr; i++)
retval = do_one_ref(base, fn, trim, flags, cb_data, extra->refs[i]);
sort_ref_array(packed);
sort_ref_array(loose);
while (p < packed->nr && l < loose->nr) {

refs.h

@ -56,14 +56,6 @@ extern void warn_dangling_symref(FILE *fp, const char *msg_fmt, const char *refn
*/
extern void add_packed_ref(const char *refname, const unsigned char *sha1);
/*
* Extra refs will be listed by for_each_ref() before any actual refs
* for the duration of this process or until clear_extra_refs() is
* called. Only extra refs added before for_each_ref() is called will
* be listed on a given call of for_each_ref().
*/
extern void add_extra_ref(const char *refname, const unsigned char *sha1, int flags);
extern void clear_extra_refs(void);
extern int ref_exists(const char *);
extern int peel_ref(const char *refname, unsigned char *sha1);

remote.c

@ -8,6 +8,8 @@
#include "tag.h"
#include "string-list.h"
enum map_direction { FROM_SRC, FROM_DST };
static struct refspec s_tag_refspec = {
0,
1,
@ -978,16 +980,20 @@ static void tail_link_ref(struct ref *ref, struct ref ***tail)
*tail = &ref->next;
}
static struct ref *alloc_delete_ref(void)
{
struct ref *ref = alloc_ref("(delete)");
hashclr(ref->new_sha1);
return ref;
}
static struct ref *try_explicit_object_name(const char *name)
{
unsigned char sha1[20];
struct ref *ref;
if (!*name) {
ref = alloc_ref("(delete)");
hashclr(ref->new_sha1);
return ref;
}
if (!*name)
return alloc_delete_ref();
if (get_sha1(name, sha1))
return NULL;
ref = alloc_ref(name);
@ -1110,10 +1116,11 @@ static int match_explicit_refs(struct ref *src, struct ref *dst,
return errs;
}
static const struct refspec *check_pattern_match(const struct refspec *rs,
int rs_nr,
const struct ref *src)
static char *get_ref_match(const struct refspec *rs, int rs_nr, const struct ref *ref,
int send_mirror, int direction, const struct refspec **ret_pat)
{
const struct refspec *pat;
char *name;
int i;
int matching_refs = -1;
for (i = 0; i < rs_nr; i++) {
@ -1123,14 +1130,36 @@ static const struct refspec *check_pattern_match(const struct refspec *rs,
continue;
}
if (rs[i].pattern && match_name_with_pattern(rs[i].src, src->name,
NULL, NULL))
return rs + i;
if (rs[i].pattern) {
const char *dst_side = rs[i].dst ? rs[i].dst : rs[i].src;
int match;
if (direction == FROM_SRC)
match = match_name_with_pattern(rs[i].src, ref->name, dst_side, &name);
else
match = match_name_with_pattern(dst_side, ref->name, rs[i].src, &name);
if (match) {
matching_refs = i;
break;
}
}
}
if (matching_refs != -1)
return rs + matching_refs;
else
if (matching_refs == -1)
return NULL;
pat = rs + matching_refs;
if (pat->matching) {
/*
* "matching refs"; traditionally we pushed everything
* including refs outside refs/heads/ hierarchy, but
* that does not make much sense these days.
*/
if (!send_mirror && prefixcmp(ref->name, "refs/heads/"))
return NULL;
name = xstrdup(ref->name);
}
if (ret_pat)
*ret_pat = pat;
return name;
}
static struct ref **tail_ref(struct ref **head)
@ -1155,9 +1184,10 @@ int match_push_refs(struct ref *src, struct ref **dst,
struct refspec *rs;
int send_all = flags & MATCH_REFS_ALL;
int send_mirror = flags & MATCH_REFS_MIRROR;
int send_prune = flags & MATCH_REFS_PRUNE;
int errs;
static const char *default_refspec[] = { ":", NULL };
struct ref **dst_tail = tail_ref(dst);
struct ref *ref, **dst_tail = tail_ref(dst);
if (!nr_refspec) {
nr_refspec = 1;
@ -1167,39 +1197,23 @@ int match_push_refs(struct ref *src, struct ref **dst,
errs = match_explicit_refs(src, *dst, &dst_tail, rs, nr_refspec);
/* pick the remainder */
for ( ; src; src = src->next) {
for (ref = src; ref; ref = ref->next) {
struct ref *dst_peer;
const struct refspec *pat = NULL;
char *dst_name;
if (src->peer_ref)
if (ref->peer_ref)
continue;
pat = check_pattern_match(rs, nr_refspec, src);
if (!pat)
dst_name = get_ref_match(rs, nr_refspec, ref, send_mirror, FROM_SRC, &pat);
if (!dst_name)
continue;
if (pat->matching) {
/*
* "matching refs"; traditionally we pushed everything
* including refs outside refs/heads/ hierarchy, but
* that does not make much sense these days.
*/
if (!send_mirror && prefixcmp(src->name, "refs/heads/"))
continue;
dst_name = xstrdup(src->name);
} else {
const char *dst_side = pat->dst ? pat->dst : pat->src;
if (!match_name_with_pattern(pat->src, src->name,
dst_side, &dst_name))
die("Didn't think it matches any more");
}
dst_peer = find_ref_by_name(*dst, dst_name);
if (dst_peer) {
if (dst_peer->peer_ref)
/* We're already sending something to this ref. */
goto free_name;
} else {
if (pat->matching && !(send_all || send_mirror))
/*
@ -1211,13 +1225,30 @@ int match_push_refs(struct ref *src, struct ref **dst,
/* Create a new one and link it */
dst_peer = make_linked_ref(dst_name, &dst_tail);
hashcpy(dst_peer->new_sha1, src->new_sha1);
hashcpy(dst_peer->new_sha1, ref->new_sha1);
}
dst_peer->peer_ref = copy_ref(src);
dst_peer->peer_ref = copy_ref(ref);
dst_peer->force = pat->force;
free_name:
free(dst_name);
}
if (send_prune) {
/* check for missing refs on the remote */
for (ref = *dst; ref; ref = ref->next) {
char *src_name;
if (ref->peer_ref)
/* We're already sending something to this ref. */
continue;
src_name = get_ref_match(rs, nr_refspec, ref, send_mirror, FROM_DST, NULL);
if (src_name) {
if (!find_ref_by_name(src, src_name))
ref->peer_ref = alloc_delete_ref();
free(src_name);
}
}
}
if (errs)
return -1;
return 0;


@ -145,7 +145,8 @@ int branch_merge_matches(struct branch *, int n, const char *);
enum match_refs_flags {
MATCH_REFS_NONE = 0,
MATCH_REFS_ALL = (1 << 0),
MATCH_REFS_MIRROR = (1 << 1)
MATCH_REFS_MIRROR = (1 << 1),
MATCH_REFS_PRUNE = (1 << 2)
};
/* Reporting of tracking info */


@ -2149,7 +2149,6 @@ static int commit_match(struct commit *commit, struct rev_info *opt)
if (!opt->grep_filter.pattern_list && !opt->grep_filter.header_list)
return 1;
return grep_buffer(&opt->grep_filter,
NULL, /* we say nothing, not even filename */
commit->buffer, strlen(commit->buffer));
}


@ -247,7 +247,7 @@ const char **get_pathspec(const char *prefix, const char **pathspec)
* a proper "ref:", or a regular file HEAD that has a properly
* formatted sha1 object name.
*/
static int is_git_directory(const char *suspect)
int is_git_directory(const char *suspect)
{
char path[PATH_MAX];
size_t len = strlen(suspect);


@ -54,6 +54,8 @@ static struct cached_object empty_tree = {
0
};
static struct packed_git *last_found_pack;
static struct cached_object *find_cached_object(const unsigned char *sha1)
{
int i;
@ -720,6 +722,8 @@ void free_pack_by_name(const char *pack_name)
close_pack_index(p);
free(p->bad_object_sha1);
*pp = p->next;
if (last_found_pack == p)
last_found_pack = NULL;
free(p);
return;
}
@ -1202,6 +1206,11 @@ void *map_sha1_file(const unsigned char *sha1, unsigned long *size)
if (!fstat(fd, &st)) {
*size = xsize_t(st.st_size);
if (!*size) {
/* mmap() is forbidden on empty files */
error("object file %s is empty", sha1_file_name(sha1));
return NULL;
}
map = xmmap(NULL, *size, PROT_READ, MAP_PRIVATE, fd, 0);
}
close(fd);
@ -2010,54 +2019,58 @@ int is_pack_valid(struct packed_git *p)
return !open_packed_git(p);
}
static int fill_pack_entry(const unsigned char *sha1,
struct pack_entry *e,
struct packed_git *p)
{
off_t offset;
if (p->num_bad_objects) {
unsigned i;
for (i = 0; i < p->num_bad_objects; i++)
if (!hashcmp(sha1, p->bad_object_sha1 + 20 * i))
return 0;
}
offset = find_pack_entry_one(sha1, p);
if (!offset)
return 0;
/*
* We are about to tell the caller where they can locate the
* requested object. We better make sure the packfile is
* still here and can be accessed before supplying that
* answer, as it may have been deleted since the index was
* loaded!
*/
if (!is_pack_valid(p)) {
warning("packfile %s cannot be accessed", p->pack_name);
return 0;
}
e->offset = offset;
e->p = p;
hashcpy(e->sha1, sha1);
return 1;
}
static int find_pack_entry(const unsigned char *sha1, struct pack_entry *e)
{
static struct packed_git *last_found = (void *)1;
struct packed_git *p;
off_t offset;
prepare_packed_git();
if (!packed_git)
return 0;
p = (last_found == (void *)1) ? packed_git : last_found;
do {
if (p->num_bad_objects) {
unsigned i;
for (i = 0; i < p->num_bad_objects; i++)
if (!hashcmp(sha1, p->bad_object_sha1 + 20 * i))
goto next;
}
if (last_found_pack && fill_pack_entry(sha1, e, last_found_pack))
return 1;
offset = find_pack_entry_one(sha1, p);
if (offset) {
/*
* We are about to tell the caller where they can
* locate the requested object. We better make
* sure the packfile is still here and can be
* accessed before supplying that answer, as
* it may have been deleted since the index
* was loaded!
*/
if (!is_pack_valid(p)) {
warning("packfile %s cannot be accessed", p->pack_name);
goto next;
}
e->offset = offset;
e->p = p;
hashcpy(e->sha1, sha1);
last_found = p;
return 1;
}
for (p = packed_git; p; p = p->next) {
if (p == last_found_pack || !fill_pack_entry(sha1, e, p))
continue;
next:
if (p == last_found)
p = packed_git;
else
p = p->next;
if (p == last_found)
p = p->next;
} while (p);
last_found_pack = p;
return 1;
}
return 0;
}
@ -2687,10 +2700,13 @@ static int index_core(unsigned char *sha1, int fd, size_t size,
* This also bypasses the usual "convert-to-git" dance, and that is on
* purpose. We could write a streaming version of the converting
* functions and insert that before feeding the data to fast-import
* (or equivalent in-core API described above), but the primary
* motivation for trying to stream from the working tree file and to
* avoid mmaping it in core is to deal with large binary blobs, and
* by definition they do _not_ want to get any conversion.
* (or equivalent in-core API described above). However, that is
* somewhat complicated, as we do not know the size of the filter
* result, which we need to know beforehand when writing a git object.
* Since the primary motivation for trying to stream from the working
* tree file and to avoid mmaping it in core is to deal with large
* binary blobs, they generally do not want to get any conversion, and
* callers should avoid this code path when filters are requested.
*/
static int index_stream(unsigned char *sha1, int fd, size_t size,
enum object_type type, const char *path,
@ -2707,7 +2723,8 @@ int index_fd(unsigned char *sha1, int fd, struct stat *st,
if (!S_ISREG(st->st_mode))
ret = index_pipe(sha1, fd, type, path, flags);
else if (size <= big_file_threshold || type != OBJ_BLOB)
else if (size <= big_file_threshold || type != OBJ_BLOB ||
(path && would_convert_to_git(path, NULL, 0, 0)))
ret = index_core(sha1, fd, size, type, path, flags);
else
ret = index_stream(sha1, fd, size, type, path, flags);


@ -383,6 +383,22 @@ int strbuf_getline(struct strbuf *sb, FILE *fp, int term)
return 0;
}
int strbuf_getwholeline_fd(struct strbuf *sb, int fd, int term)
{
strbuf_reset(sb);
while (1) {
char ch;
ssize_t len = xread(fd, &ch, 1);
if (len <= 0)
return EOF;
strbuf_addch(sb, ch);
if (ch == term)
break;
}
return 0;
}
int strbuf_read_file(struct strbuf *sb, const char *path, size_t hint)
{
int fd, len;


@ -116,6 +116,7 @@ extern int strbuf_readlink(struct strbuf *sb, const char *path, size_t hint);
extern int strbuf_getwholeline(struct strbuf *, FILE *, int);
extern int strbuf_getline(struct strbuf *, FILE *, int);
extern int strbuf_getwholeline_fd(struct strbuf *, int, int);
extern void stripspace(struct strbuf *buf, int skip_comments);
extern int launch_editor(const char *path, struct strbuf *buffer, const char *const *env);


@ -73,6 +73,9 @@ gitweb-test:
valgrind:
$(MAKE) GIT_TEST_OPTS="$(GIT_TEST_OPTS) --valgrind"
perf:
$(MAKE) -C perf/ all
# Smoke testing targets
-include ../GIT-VERSION-FILE
uname_S := $(shell sh -c 'uname -s 2>/dev/null || echo unknown')
@ -111,4 +114,4 @@ smoke_report: smoke
http://smoke.git.nix.is/app/projects/process_add_report/1 \
| grep -v ^Redirecting
.PHONY: pre-clean $(T) aggregate-results clean valgrind smoke smoke_report
.PHONY: pre-clean $(T) aggregate-results clean valgrind perf
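
As a usage sketch (assuming git itself has already been built), the new 'perf'
target can be driven from the t/ directory; it simply recurses into perf/ and
runs every perf script found there:

$ cd t
$ make perf

This is roughly equivalent to invoking ./run with no arguments from t/perf/.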


@ -671,76 +671,3 @@ Then, at the top-level:
That'll generate a detailed cover report in the "cover_db_html"
directory, which you can then copy to a webserver, or inspect locally
in a browser.
Smoke testing
-------------
The Git test suite has support for smoke testing. Smoke testing is
when you submit the results of a test run to a central server for
analysis and aggregation.
Running a smoke tester is an easy and valuable way of contributing to
Git development, particularly if you have access to an uncommon OS on
obscure hardware.
After building Git you can generate a smoke report like this in the
"t" directory:
make clean smoke
You can also pass arguments via the environment. This should make it
faster:
GIT_TEST_OPTS='--root=/dev/shm' TEST_JOBS=10 make clean smoke
The "smoke" target will run the Git test suite with Perl's
"TAP::Harness" module, and package up the results in a .tar.gz archive
with "TAP::Harness::Archive". The former is included with Perl v5.10.1
or later, but you'll need to install the latter from the CPAN. See the
"Test coverage" section above for how you might do that.
Once the "smoke" target finishes you'll see a message like this:
TAP Archive created at <path to git>/t/test-results/git-smoke.tar.gz
To upload the smoke report you need to have curl(1) installed, then
do:
make smoke_report
To upload the report anonymously. Hopefully that'll return something
like "Reported #7 added.".
If you're going to be uploading reports frequently please request a
user account by E-Mailing gitsmoke@v.nix.is. Once you have a username
and password you'll be able to do:
SMOKE_USERNAME=<username> SMOKE_PASSWORD=<password> make smoke_report
You can also add an additional comment to attach to the report, and/or
a comma separated list of tags:
SMOKE_USERNAME=<username> SMOKE_PASSWORD=<password> \
SMOKE_COMMENT=<comment> SMOKE_TAGS=<tags> \
make smoke_report
Once the report is uploaded it'll be made available at
http://smoke.git.nix.is, here's an overview of Recent Smoke Reports
for Git:
http://smoke.git.nix.is/app/projects/smoke_reports/1
The reports will also be mirrored to GitHub every few hours:
http://github.com/gitsmoke/smoke-reports
The Smolder SQLite database is also mirrored and made available for
download:
http://github.com/gitsmoke/smoke-database
Note that the database includes hashed (with crypt()) user passwords
and E-Mail addresses. Don't use a valuable password for the smoke
service if you have an account, or an E-Mail address you don't want to
be publicly known. The user accounts are just meant to be convenient
labels, they're not meant to be secure.


@ -1,21 +0,0 @@
#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Long ();
use TAP::Harness::Archive;
Getopt::Long::Parser->new(
config => [ qw/ pass_through / ],
)->getoptions(
'jobs:1' => \(my $jobs = $ENV{TEST_JOBS}),
'archive=s' => \my $archive,
) or die "$0: Couldn't getoptions()";
TAP::Harness::Archive->new({
jobs => $jobs,
archive => $archive,
($ENV{GIT_TEST_OPTS}
? (test_args => [ split /\s+/, $ENV{GIT_TEST_OPTS} ])
: ()),
extra_properties => {},
})->runtests(@ARGV);

t/perf/.gitignore vendored Normal file

@ -0,0 +1,2 @@
build/
test-results/

t/perf/Makefile Normal file

@ -0,0 +1,15 @@
-include ../../config.mak
export GIT_TEST_OPTIONS
all: perf
perf: pre-clean
./run
pre-clean:
rm -rf test-results
clean:
rm -rf build "trash directory".* test-results
.PHONY: all perf pre-clean clean

t/perf/README Normal file

@ -0,0 +1,146 @@
Git performance tests
=====================
This directory holds performance testing scripts for git tools. The
first part of this document describes the various ways in which you
can run them.
When fixing the tools or adding enhancements, you are strongly
encouraged to add tests in this directory to cover what you are
trying to fix or enhance. The later part of this short document
describes how your test scripts should be organized.
Running Tests
-------------
The easiest way to run tests is to say "make". This runs all
the tests on the current git repository.
=== Running 2 tests in this tree ===
[...]
Test                                     this tree
---------------------------------------------------------
0001.1: rev-list --all                   0.54(0.51+0.02)
0001.2: rev-list --all --objects         6.14(5.99+0.11)
7810.1: grep worktree, cheap regex       0.16(0.16+0.35)
7810.2: grep worktree, expensive regex   7.90(29.75+0.37)
7810.3: grep --cached, cheap regex       3.07(3.02+0.25)
7810.4: grep --cached, expensive regex   9.39(30.57+0.24)
You can compare multiple repositories and even git revisions with the
'run' script:
$ ./run . origin/next /path/to/git-tree p0001-rev-list.sh
where . stands for the current git tree. The full invocation is
./run [<revision|directory>...] [--] [<test-script>...]
A '.' argument is implied if you do not pass any other
revisions/directories.
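For instance, a hypothetical run that compares the current tree against an
older tag, restricted to two of the scripts described below (the tag name is
purely illustrative), could look like:

$ ./run . v1.7.8 -- p0001-rev-list.sh p7810-grep.sh

Revisions named this way are unpacked and built under build/<rev> before
their timings are collected.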
You can also manually test this or another git build tree, and then
call the aggregation script to summarize the results:
$ ./p0001-rev-list.sh
[...]
$ GIT_BUILD_DIR=/path/to/other/git ./p0001-rev-list.sh
[...]
$ ./aggregate.perl . /path/to/other/git ./p0001-rev-list.sh
aggregate.perl has the same invocation as 'run'; it just does not run
anything beforehand.
You can set the following variables (also in your config.mak):
GIT_PERF_REPEAT_COUNT
Number of times a test should be repeated for best-of-N
measurements. Defaults to 5.
GIT_PERF_MAKE_OPTS
Options to use when automatically building a git tree for
performance testing. E.g., -j6 would be useful.
GIT_PERF_REPO
GIT_PERF_LARGE_REPO
Repositories to copy for the performance tests. The normal
repo should be at least git.git size. The large repo should
probably be about linux-2.6.git size for optimal results.
Both default to the git.git you are running from.
You can also pass the options taken by ordinary git tests; the most
useful one is:
--root=<directory>::
Create "trash" directories used to store all temporary data during
testing under <directory>, instead of the t/ directory.
Using this option with a RAM-based filesystem (such as tmpfs)
can massively speed up the test suite.
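As a rough sketch, these knobs can also be combined in the environment of a
single invocation; the repeat count, make options and tmpfs root below are
made-up values:

$ GIT_PERF_REPEAT_COUNT=10 \
  GIT_PERF_MAKE_OPTS='-j6' \
  GIT_TEST_OPTS='--root=/dev/shm' \
  ./run . origin/master -- p0001-rev-list.sh

The GIT_PERF_* assignments can equally live in config.mak, as mentioned above.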
Naming Tests
------------
The performance test files are named as:
pNNNN-commandname-details.sh
where N is a decimal digit. The same conventions for choosing NNNN as
for normal tests apply.
Writing Tests
-------------
The perf script starts much like a normal test script, except it
sources perf-lib.sh:
#!/bin/sh
#
# Copyright (c) 2005 Junio C Hamano
#
test_description='xxx performance test'
. ./perf-lib.sh
After that you will want to use some of the following:
test_perf_default_repo # sets up a "normal" repository
test_perf_large_repo # sets up a "large" repository
test_perf_default_repo sub # ditto, in a subdir "sub"
test_checkout_worktree # if you need the worktree too
At least one of the first two is required!
You can use test_expect_success as usual. For actual performance
tests, use
test_perf 'descriptive string' '
command1 &&
command2
'
test_perf spawns a subshell, for lack of better options. This means
that
* you _must_ export all variables that you need in the subshell
* you _must_ flag all variables that you want to persist from the
subshell with 'test_export':
test_perf 'descriptive string' '
foo=$(git rev-parse HEAD) &&
test_export foo
'
The so-exported variables are automatically marked for export in the
shell executing the perf test. For your convenience, test_export is
the same as export in the main shell.
This feature relies on a bit of magic using 'set' and 'source'.
While we have tried to make sure that it can cope with embedded
whitespace and other special characters, it will not work with
multi-line data.
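Putting these pieces together, a complete but purely illustrative perf script
might read as follows; the description, test body and exported variable are
only a sketch, not one of the scripts added here:

#!/bin/sh
# Illustrative perf script; shows the perf-lib.sh conventions described above.

test_description='illustrative performance test'

. ./perf-lib.sh

# use a copy of the repository we are running from
test_perf_default_repo

# timed section; $count is flagged so it survives the subshell
test_perf 'count reachable commits' '
	count=$(git rev-list --all | wc -l) &&
	test_export count
'

# plain correctness check, not timed
test_expect_success 'rev-list found at least one commit' '
	test $count -gt 0
'

test_done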

t/perf/aggregate.perl Executable file

@ -0,0 +1,166 @@
#!/usr/bin/perl
use strict;
use warnings;
use Git;
sub get_times {
my $name = shift;
open my $fh, "<", $name or return undef;
my $line = <$fh>;
return undef if not defined $line;
close $fh or die "cannot close $name: $!";
$line =~ /^(?:(\d+):)?(\d+):(\d+(?:\.\d+)?) (\d+(?:\.\d+)?) (\d+(?:\.\d+)?)$/
or die "bad input line: $line";
my $rt = ((defined $1 ? $1 : 0.0)*60+$2)*60+$3;
return ($rt, $4, $5);
}
sub format_times {
my ($r, $u, $s, $firstr) = @_;
if (!defined $r) {
return "<missing>";
}
my $out = sprintf "%.2f(%.2f+%.2f)", $r, $u, $s;
if (defined $firstr) {
if ($firstr > 0) {
$out .= sprintf " %+.1f%%", 100.0*($r-$firstr)/$firstr;
} elsif ($r == 0) {
$out .= " =";
} else {
$out .= " +inf";
}
}
return $out;
}
my (@dirs, %dirnames, %dirabbrevs, %prefixes, @tests);
while (scalar @ARGV) {
my $arg = $ARGV[0];
my $dir;
last if -f $arg or $arg eq "--";
if (! -d $arg) {
my $rev = Git::command_oneline(qw(rev-parse --verify), $arg);
$dir = "build/".$rev;
} else {
$arg =~ s{/*$}{};
$dir = $arg;
$dirabbrevs{$dir} = $dir;
}
push @dirs, $dir;
$dirnames{$dir} = $arg;
my $prefix = $dir;
$prefix =~ tr/^a-zA-Z0-9/_/c;
$prefixes{$dir} = $prefix . '.';
shift @ARGV;
}
if (not @dirs) {
@dirs = ('.');
}
$dirnames{'.'} = $dirabbrevs{'.'} = "this tree";
$prefixes{'.'} = '';
shift @ARGV if scalar @ARGV and $ARGV[0] eq "--";
@tests = @ARGV;
if (not @tests) {
@tests = glob "p????-*.sh";
}
my @subtests;
my %shorttests;
for my $t (@tests) {
$t =~ s{(?:.*/)?(p(\d+)-[^/]+)\.sh$}{$1} or die "bad test name: $t";
my $n = $2;
my $fname = "test-results/$t.subtests";
open my $fp, "<", $fname or die "cannot open $fname: $!";
for (<$fp>) {
chomp;
/^(\d+)$/ or die "malformed subtest line: $_";
push @subtests, "$t.$1";
$shorttests{"$t.$1"} = "$n.$1";
}
close $fp or die "cannot close $fname: $!";
}
sub read_descr {
my $name = shift;
open my $fh, "<", $name or return "<error reading description>";
my $line = <$fh>;
close $fh or die "cannot close $name";
chomp $line;
return $line;
}
my %descrs;
my $descrlen = 4; # "Test"
for my $t (@subtests) {
$descrs{$t} = $shorttests{$t}.": ".read_descr("test-results/$t.descr");
$descrlen = length $descrs{$t} if length $descrs{$t}>$descrlen;
}
sub have_duplicate {
my %seen;
for (@_) {
return 1 if exists $seen{$_};
$seen{$_} = 1;
}
return 0;
}
sub have_slash {
for (@_) {
return 1 if m{/};
}
return 0;
}
my %newdirabbrevs = %dirabbrevs;
while (!have_duplicate(values %newdirabbrevs)) {
%dirabbrevs = %newdirabbrevs;
last if !have_slash(values %dirabbrevs);
%newdirabbrevs = %dirabbrevs;
for (values %newdirabbrevs) {
s{^[^/]*/}{};
}
}
my %times;
my @colwidth = ((0)x@dirs);
for my $i (0..$#dirs) {
my $d = $dirs[$i];
my $w = length (exists $dirabbrevs{$d} ? $dirabbrevs{$d} : $dirnames{$d});
$colwidth[$i] = $w if $w > $colwidth[$i];
}
for my $t (@subtests) {
my $firstr;
for my $i (0..$#dirs) {
my $d = $dirs[$i];
$times{$prefixes{$d}.$t} = [get_times("test-results/$prefixes{$d}$t.times")];
my ($r,$u,$s) = @{$times{$prefixes{$d}.$t}};
my $w = length format_times($r,$u,$s,$firstr);
$colwidth[$i] = $w if $w > $colwidth[$i];
$firstr = $r unless defined $firstr;
}
}
my $totalwidth = 3*@dirs+$descrlen;
$totalwidth += $_ for (@colwidth);
printf "%-${descrlen}s", "Test";
for my $i (0..$#dirs) {
my $d = $dirs[$i];
printf " %-$colwidth[$i]s", (exists $dirabbrevs{$d} ? $dirabbrevs{$d} : $dirnames{$d});
}
print "\n";
print "-"x$totalwidth, "\n";
for my $t (@subtests) {
printf "%-${descrlen}s", $descrs{$t};
my $firstr;
for my $i (0..$#dirs) {
my $d = $dirs[$i];
my ($r,$u,$s) = @{$times{$prefixes{$d}.$t}};
printf " %-$colwidth[$i]s", format_times($r,$u,$s,$firstr);
$firstr = $r unless defined $firstr;
}
print "\n";
}

t/perf/min_time.perl Executable file

@ -0,0 +1,21 @@
#!/usr/bin/perl
my $minrt = 1e100;
my $min;
while (<>) {
# [h:]m:s.xx U.xx S.xx
/^(?:(\d+):)?(\d+):(\d+(?:\.\d+)?) (\d+(?:\.\d+)?) (\d+(?:\.\d+)?)$/
or die "bad input line: $_";
my $rt = ((defined $1 ? $1 : 0.0)*60+$2)*60+$3;
if ($rt < $minrt) {
$min = $_;
$minrt = $rt;
}
}
if (!defined $min) {
die "no input found";
}
print $min;

t/perf/p0000-perf-lib-sanity.sh Executable file

@ -0,0 +1,41 @@
#!/bin/sh
test_description='Tests whether perf-lib facilities work'
. ./perf-lib.sh
test_perf_default_repo
test_perf 'test_perf_default_repo works' '
foo=$(git rev-parse HEAD) &&
test_export foo
'
test_checkout_worktree
test_perf 'test_checkout_worktree works' '
wt=$(find . | wc -l) &&
idx=$(git ls-files | wc -l) &&
test $wt -gt $idx
'
baz=baz
test_export baz
test_expect_success 'test_export works' '
echo "$foo" &&
test "$foo" = "$(git rev-parse HEAD)" &&
echo "$baz" &&
test "$baz" = baz
'
test_perf 'export a weird var' '
bar="weird # variable" &&
test_export bar
'
test_expect_success 'test_export works with weird vars' '
echo "$bar" &&
test "$bar" = "weird # variable"
'
test_done

t/perf/p0001-rev-list.sh Executable file

@ -0,0 +1,17 @@
#!/bin/sh
test_description="Tests history walking performance"
. ./perf-lib.sh
test_perf_default_repo
test_perf 'rev-list --all' '
git rev-list --all >/dev/null
'
test_perf 'rev-list --all --objects' '
git rev-list --all --objects >/dev/null
'
test_done

t/perf/p7810-grep.sh Executable file

@ -0,0 +1,23 @@
#!/bin/sh
test_description="git-grep performance in various modes"
. ./perf-lib.sh
test_perf_large_repo
test_checkout_worktree
test_perf 'grep worktree, cheap regex' '
git grep some_nonexistent_string || :
'
test_perf 'grep worktree, expensive regex' '
git grep "^.* *some_nonexistent_string$" || :
'
test_perf 'grep --cached, cheap regex' '
git grep --cached some_nonexistent_string || :
'
test_perf 'grep --cached, expensive regex' '
git grep --cached "^.* *some_nonexistent_string$" || :
'
test_done

t/perf/perf-lib.sh Normal file

@ -0,0 +1,198 @@
#!/bin/sh
#
# Copyright (c) 2011 Thomas Rast
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see http://www.gnu.org/licenses/ .
# do the --tee work early; it otherwise confuses our careful
# GIT_BUILD_DIR mangling
case "$GIT_TEST_TEE_STARTED, $* " in
done,*)
# do not redirect again
;;
*' --tee '*|*' --va'*)
mkdir -p test-results
BASE=test-results/$(basename "$0" .sh)
(GIT_TEST_TEE_STARTED=done ${SHELL-sh} "$0" "$@" 2>&1;
echo $? > $BASE.exit) | tee $BASE.out
test "$(cat $BASE.exit)" = 0
exit
;;
esac
TEST_DIRECTORY=$(pwd)/..
TEST_OUTPUT_DIRECTORY=$(pwd)
if test -z "$GIT_TEST_INSTALLED"; then
perf_results_prefix=
else
perf_results_prefix=$(printf "%s" "${GIT_TEST_INSTALLED%/bin-wrappers}" | tr -c "[a-zA-Z0-9]" "[_*]")"."
# make the tested dir absolute
GIT_TEST_INSTALLED=$(cd "$GIT_TEST_INSTALLED" && pwd)
fi
TEST_NO_CREATE_REPO=t
. ../test-lib.sh
perf_results_dir=$TEST_OUTPUT_DIRECTORY/test-results
mkdir -p "$perf_results_dir"
rm -f "$perf_results_dir"/$(basename "$0" .sh).subtests
if test -z "$GIT_PERF_REPEAT_COUNT"; then
GIT_PERF_REPEAT_COUNT=3
fi
die_if_build_dir_not_repo () {
if ! ( cd "$TEST_DIRECTORY/.." &&
git rev-parse --build-dir >/dev/null 2>&1 ); then
error "No $1 defined, and your build directory is not a repo"
fi
}
if test -z "$GIT_PERF_REPO"; then
die_if_build_dir_not_repo '$GIT_PERF_REPO'
GIT_PERF_REPO=$TEST_DIRECTORY/..
fi
if test -z "$GIT_PERF_LARGE_REPO"; then
die_if_build_dir_not_repo '$GIT_PERF_LARGE_REPO'
GIT_PERF_LARGE_REPO=$TEST_DIRECTORY/..
fi
test_perf_create_repo_from () {
test "$#" = 2 ||
error "bug in the test script: not 2 parameters to test-create-repo"
repo="$1"
source="$2"
source_git=$source/$(cd "$source" && git rev-parse --git-dir)
mkdir -p "$repo/.git"
(
cd "$repo/.git" &&
{ cp -Rl "$source_git/objects" . 2>/dev/null ||
cp -R "$source_git/objects" .; } &&
for stuff in "$source_git"/*; do
case "$stuff" in
*/objects|*/hooks|*/config)
;;
*)
cp -R "$stuff" . || break
;;
esac
done &&
cd .. &&
git init -q &&
mv .git/hooks .git/hooks-disabled 2>/dev/null
) || error "failed to copy repository '$source' to '$repo'"
}
# call at least one of these to establish an appropriately-sized repository
test_perf_default_repo () {
test_perf_create_repo_from "${1:-$TRASH_DIRECTORY}" "$GIT_PERF_REPO"
}
test_perf_large_repo () {
if test "$GIT_PERF_LARGE_REPO" = "$GIT_BUILD_DIR"; then
echo "warning: \$GIT_PERF_LARGE_REPO is \$GIT_BUILD_DIR." >&2
echo "warning: This will work, but may not be a sufficiently large repo" >&2
echo "warning: for representative measurements." >&2
fi
test_perf_create_repo_from "${1:-$TRASH_DIRECTORY}" "$GIT_PERF_LARGE_REPO"
}
test_checkout_worktree () {
git checkout-index -u -a ||
error "git checkout-index failed"
}
# Performance tests should never fail. If they do, stop immediately
immediate=t
test_run_perf_ () {
test_cleanup=:
test_export_="test_cleanup"
export test_cleanup test_export_
/usr/bin/time -f "%E %U %S" -o test_time.$i "$SHELL" -c '
. '"$TEST_DIRECTORY"/../test-lib-functions.sh'
test_export () {
[ $# != 0 ] || return 0
test_export_="$test_export_\\|$1"
shift
test_export "$@"
}
'"$1"'
ret=$?
set | sed -n "s'"/'/'\\\\''/g"';s/^\\($test_export_\\)/export '"'&'"'/p" >test_vars
exit $ret' >&3 2>&4
eval_ret=$?
if test $eval_ret = 0 || test -n "$expecting_failure"
then
test_eval_ "$test_cleanup"
. ./test_vars || error "failed to load updated environment"
fi
if test "$verbose" = "t" && test -n "$HARNESS_ACTIVE"; then
echo ""
fi
return "$eval_ret"
}
test_perf () {
test "$#" = 3 && { test_prereq=$1; shift; } || test_prereq=
test "$#" = 2 ||
error "bug in the test script: not 2 or 3 parameters to test-expect-success"
export test_prereq
if ! test_skip "$@"
then
base=$(basename "$0" .sh)
echo "$test_count" >>"$perf_results_dir"/$base.subtests
echo "$1" >"$perf_results_dir"/$base.$test_count.descr
if test -z "$verbose"; then
echo -n "perf $test_count - $1:"
else
echo "perf $test_count - $1:"
fi
for i in $(seq 1 $GIT_PERF_REPEAT_COUNT); do
say >&3 "running: $2"
if test_run_perf_ "$2"
then
if test -z "$verbose"; then
echo -n " $i"
else
echo "* timing run $i/$GIT_PERF_REPEAT_COUNT:"
fi
else
test -z "$verbose" && echo
test_failure_ "$@"
break
fi
done
if test -z "$verbose"; then
echo " ok"
else
test_ok_ "$1"
fi
base="$perf_results_dir"/"$perf_results_prefix$(basename "$0" .sh)"."$test_count"
"$TEST_DIRECTORY"/perf/min_time.perl test_time.* >"$base".times
fi
echo >&3 ""
}
# We extend test_done to print timings at the end (./run disables this
# and does it after running everything)
test_at_end_hook_ () {
if test -z "$GIT_PERF_AGGREGATING_LATER"; then
( cd "$TEST_DIRECTORY"/perf && ./aggregate.perl $(basename "$0") )
fi
}
test_export () {
export "$@"
}

t/perf/run Executable file

@ -0,0 +1,82 @@
#!/bin/sh
case "$1" in
--help)
echo "usage: $0 [other_git_tree...] [--] [test_scripts]"
exit 0
;;
esac
die () {
echo >&2 "error: $*"
exit 1
}
run_one_dir () {
if test $# -eq 0; then
set -- p????-*.sh
fi
echo "=== Running $# tests in ${GIT_TEST_INSTALLED:-this tree} ==="
for t in "$@"; do
./$t $GIT_TEST_OPTS
done
}
unpack_git_rev () {
rev=$1
mkdir -p build/$rev
(cd "$(git rev-parse --show-cdup)" && git archive --format=tar $rev) |
(cd build/$rev && tar x)
}
build_git_rev () {
rev=$1
cp ../../config.mak build/$rev/config.mak
(cd build/$rev && make $GIT_PERF_MAKE_OPTS) ||
die "failed to build revision '$mydir'"
}
run_dirs_helper () {
mydir=${1%/}
shift
while test $# -gt 0 -a "$1" != -- -a ! -f "$1"; do
shift
done
if test $# -gt 0 -a "$1" = --; then
shift
fi
if [ ! -d "$mydir" ]; then
rev=$(git rev-parse --verify "$mydir" 2>/dev/null) ||
die "'$mydir' is neither a directory nor a valid revision"
if [ ! -d build/$rev ]; then
unpack_git_rev $rev
fi
build_git_rev $rev
mydir=build/$rev
fi
if test "$mydir" = .; then
unset GIT_TEST_INSTALLED
else
GIT_TEST_INSTALLED="$mydir/bin-wrappers"
export GIT_TEST_INSTALLED
fi
run_one_dir "$@"
}
run_dirs () {
while test $# -gt 0 -a "$1" != -- -a ! -f "$1"; do
run_dirs_helper "$@"
shift
done
}
GIT_PERF_AGGREGATING_LATER=t
export GIT_PERF_AGGREGATING_LATER
cd "$(dirname $0)"
. ../../GIT-BUILD-OPTIONS
if test $# = 0 -o "$1" = -- -o -f "$1"; then
set -- . "$@"
fi
run_dirs "$@"
./aggregate.perl "$@"

t/t1051-large-conversion.sh Executable file

@ -0,0 +1,86 @@
#!/bin/sh
test_description='test conversion filters on large files'
. ./test-lib.sh
set_attr() {
test_when_finished 'rm -f .gitattributes' &&
echo "* $*" >.gitattributes
}
check_input() {
git read-tree --empty &&
git add small large &&
git cat-file blob :small >small.index &&
git cat-file blob :large | head -n 1 >large.index &&
test_cmp small.index large.index
}
check_output() {
rm -f small large &&
git checkout small large &&
head -n 1 large >large.head &&
test_cmp small large.head
}
test_expect_success 'setup input tests' '
printf "\$Id: foo\$\\r\\n" >small &&
cat small small >large &&
git config core.bigfilethreshold 20 &&
git config filter.test.clean "sed s/.*/CLEAN/"
'
test_expect_success 'autocrlf=true converts on input' '
test_config core.autocrlf true &&
check_input
'
test_expect_success 'eol=crlf converts on input' '
set_attr eol=crlf &&
check_input
'
test_expect_success 'ident converts on input' '
set_attr ident &&
check_input
'
test_expect_success 'user-defined filters convert on input' '
set_attr filter=test &&
check_input
'
test_expect_success 'setup output tests' '
echo "\$Id\$" >small &&
cat small small >large &&
git add small large &&
git config core.bigfilethreshold 7 &&
git config filter.test.smudge "sed s/.*/SMUDGE/"
'
test_expect_success 'autocrlf=true converts on output' '
test_config core.autocrlf true &&
check_output
'
test_expect_success 'eol=crlf converts on output' '
set_attr eol=crlf &&
check_output
'
test_expect_success 'user-defined filters convert on output' '
set_attr filter=test &&
check_output
'
test_expect_success 'ident converts on output' '
set_attr ident &&
rm -f small large &&
git checkout small large &&
sed -n "s/Id: .*/Id: SHA/p" <small >small.clean &&
head -n 1 large >large.head &&
sed -n "s/Id: .*/Id: SHA/p" <large.head >large.clean &&
test_cmp small.clean large.clean
'
test_done


@ -156,7 +156,7 @@ Updating VARIABLE..VARIABLE
FASTFORWARD (no commit created; -m option ignored)
example | 1 +
hello | 1 +
2 files changed, 2 insertions(+), 0 deletions(-)
2 files changed, 2 insertions(+)
EOF
test_expect_success 'git resolve' '

Some files were not shown because too many files have changed in this diff.