/*
(See Documentation/git-fast-import.txt for maintained documentation.)

Format of STDIN stream:

  stream ::= cmd*;

  cmd ::= new_blob
    | new_commit
    | new_tag
    | reset_branch
    | checkpoint
    | progress
    ;

  new_blob ::= 'blob' lf
    mark?
    file_content;
  file_content ::= data;
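  #
  # example: a complete new_blob command, storing five bytes of
  # content and labeling the result with mark :1 --
  #
  #   blob
  #   mark :1
  #   data 5
  #   hello
  #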

  new_commit ::= 'commit' sp ref_str lf
    mark?
    ('author' (sp name)? sp '<' email '>' sp when lf)?
    'committer' (sp name)? sp '<' email '>' sp when lf
    commit_msg
    ('from' sp commit-ish lf)?
    ('merge' sp commit-ish lf)*
    (file_change | ls)*
    lf?;
  commit_msg ::= data;
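  #
  # example: a new_commit command adding one file; the commit
  # message is sent as a data command --
  #
  #   commit refs/heads/master
  #   mark :2
  #   committer Jane Doe <jane@example.com> 1234567890 +0000
  #   data 14
  #   initial import
  #   M 100644 :1 README
  #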

  ls ::= 'ls' sp '"' quoted(path) '"' lf;

  file_change ::= file_clr
    | file_del
    | file_rnm
    | file_cpy
    | file_obm
    | file_inm;
  file_clr ::= 'deleteall' lf;
  file_del ::= 'D' sp path_str lf;
  file_rnm ::= 'R' sp path_str sp path_str lf;
  file_cpy ::= 'C' sp path_str sp path_str lf;
  file_obm ::= 'M' sp mode sp (hexsha1 | idnum) sp path_str lf;
  file_inm ::= 'M' sp mode sp 'inline' sp path_str lf
    data;
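  #
  # example: one of each basic file_change form --
  #
  #   M 100644 :1 README
  #   D old/config.ini
  #   R src/a.c src/b.c
  #   C docs/guide.txt docs/guide-copy.txt
  #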
  note_obm ::= 'N' sp (hexsha1 | idnum) sp commit-ish lf;
  note_inm ::= 'N' sp 'inline' sp commit-ish lf
    data;

  new_tag ::= 'tag' sp tag_str lf
    'from' sp commit-ish lf
    ('tagger' (sp name)? sp '<' email '>' sp when lf)?
    tag_msg;
  tag_msg ::= data;
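  #
  # example: an annotated tag pointing at the commit under mark :2 --
  #
  #   tag v1.0
  #   from :2
  #   tagger Jane Doe <jane@example.com> 1234567890 +0000
  #   data 15
  #   release of v1.0
  #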

  reset_branch ::= 'reset' sp ref_str lf
    ('from' sp commit-ish lf)?
    lf?;
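  #
  # example: point a new branch at an existing commit --
  #
  #   reset refs/heads/topic
  #   from :2
  #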

  checkpoint ::= 'checkpoint' lf
    lf?;

  progress ::= 'progress' sp not_lf* lf
    lf?;

  # note: the first idnum in a stream should be 1 and subsequent
  # idnums should not have gaps between values as this will cause
  # the stream parser to reserve space for the gapped values.  An
  # idnum can be updated in the future to a new object by issuing
  # a new mark directive with the old idnum.
  #
  mark ::= 'mark' sp idnum lf;
  data ::= (delimited_data | exact_data)
    lf?;

  # note: delim may be any string but must not contain lf.
  # data_line may contain any data but must not be exactly
  # delim.
  delimited_data ::= 'data' sp '<<' delim lf
    (data_line lf)*
    delim lf;
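  #
  # example: the commit message "initial import" supplied with a
  # delimiter instead of a byte count --
  #
  #   data <<END_OF_MSG
  #   initial import
  #   END_OF_MSG
  #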

  # note: declen indicates the length of binary_data in bytes.
  # declen does not include the lf preceding the binary data.
  #
  exact_data ::= 'data' sp declen lf
    binary_data;

  # note: quoted strings are C-style quoting supporting \c for
  # common escapes of 'c' (e.g. \n, \t, \\, \") or \nnn where nnn
  # is the signed byte value in octal.  Note that the only
  # characters which must actually be escaped to protect the
  # stream formatting are: \, " and LF.  Otherwise these values
  # are UTF8.
  #
  commit-ish ::= (ref_str | hexsha1 | sha1exp_str | idnum);
  ref_str     ::= ref;
  sha1exp_str ::= sha1exp;
  tag_str     ::= tag;
  path_str    ::= path | '"' quoted(path) '"' ;
  mode        ::= '100644' | '644'
                | '100755' | '755'
                | '120000'
                ;

  declen ::= # unsigned 32 bit value, ascii base10 notation;
  bigint ::= # unsigned integer value, ascii base10 notation;
  binary_data ::= # file content, not interpreted;

  when         ::= raw_when | rfc2822_when;
  raw_when     ::= ts sp tz;
  rfc2822_when ::= # Valid RFC 2822 date and time;

  sp ::= # ASCII space character;
  lf ::= # ASCII newline (LF) character;

  # note: a colon (':') must precede the numerical value assigned to
  # an idnum.  This is to distinguish it from a ref or tag name as
  # GIT does not permit ':' in ref or tag strings.
  #
  idnum   ::= ':' bigint;
  path    ::= # GIT style file path, e.g. "a/b/c";
  ref     ::= # GIT ref name, e.g. "refs/heads/MOZ_GECKO_EXPERIMENT";
  tag     ::= # GIT tag name, e.g. "FIREFOX_1_5";
  sha1exp ::= # Any valid GIT SHA1 expression;
  hexsha1 ::= # SHA1 in hexadecimal format;

  # note: name and email are UTF8 strings, however name must not
  # contain '<' or lf and email must not contain any of the
  # following: '<', '>', lf.
  #
  name  ::= # valid GIT author/committer name;
  email ::= # valid GIT author/committer email;
  ts    ::= # time since the epoch in seconds, ascii base10 notation;
  tz    ::= # GIT style timezone;

  # note: comments, ls and cat requests may appear anywhere
  # in the input, except within a data command.  Any form
  # of the data command always escapes the related input
  # from comment processing.
  #
  # In case it is not clear, the '#' that starts the comment
  # must be the first character on that line (an lf
  # preceded it).
  #

  cat_blob ::= 'cat-blob' sp (hexsha1 | idnum) lf;
  ls_tree  ::= 'ls' sp (hexsha1 | idnum) sp path_str lf;

  comment ::= '#' not_lf* lf;
  not_lf  ::= # Any byte that is not ASCII newline (LF);
*/

#include "builtin.h"
#include "cache.h"
#include "object.h"
#include "blob.h"
#include "tree.h"
#include "commit.h"
#include "delta.h"
#include "pack.h"
#include "refs.h"
#include "csum-file.h"
#include "quote.h"
#include "exec_cmd.h"
#include "dir.h"

#define PACK_ID_BITS 16
#define MAX_PACK_ID ((1<<PACK_ID_BITS)-1)
#define DEPTH_BITS 13
#define MAX_DEPTH ((1<<DEPTH_BITS)-1)

/*
 * We abuse the setuid bit on directories to mean "do not delta".
 */
#define NO_DELTA S_ISUID

struct object_entry {
	struct pack_idx_entry idx;
	struct object_entry *next;
	uint32_t type : TYPE_BITS,
		pack_id : PACK_ID_BITS,
		depth : DEPTH_BITS;
};

struct object_entry_pool {
	struct object_entry_pool *next_pool;
	struct object_entry *next_free;
	struct object_entry *end;
	struct object_entry entries[FLEX_ARRAY]; /* more */
};

struct mark_set {
	union {
		struct object_entry *marked[1024];
		struct mark_set *sets[1024];
	} data;
	unsigned int shift;
};

struct last_object {
	struct strbuf data;
	off_t offset;
	unsigned int depth;
	unsigned no_swap : 1;
};

struct mem_pool {
	struct mem_pool *next_pool;
	char *next_free;
	char *end;
	uintmax_t space[FLEX_ARRAY]; /* more */
};

struct atom_str {
	struct atom_str *next_atom;
	unsigned short str_len;
	char str_dat[FLEX_ARRAY]; /* more */
};

struct tree_content;
struct tree_entry {
	struct tree_content *tree;
	struct atom_str *name;
	struct tree_entry_ms {
		uint16_t mode;
		unsigned char sha1[20];
	} versions[2];
};

struct tree_content {
	unsigned int entry_capacity; /* must match avail_tree_content */
	unsigned int entry_count;
	unsigned int delta_depth;
	struct tree_entry *entries[FLEX_ARRAY]; /* more */
};

struct avail_tree_content {
	unsigned int entry_capacity; /* must match tree_content */
	struct avail_tree_content *next_avail;
};

struct branch {
	struct branch *table_next_branch;
	struct branch *active_next_branch;
	const char *name;
	struct tree_entry branch_tree;
	uintmax_t last_commit;
	uintmax_t num_notes;
	unsigned active : 1;
	unsigned pack_id : PACK_ID_BITS;
	unsigned char sha1[20];
};

struct tag {
	struct tag *next_tag;
	const char *name;
	unsigned int pack_id;
	unsigned char sha1[20];
};

struct hash_list {
	struct hash_list *next;
	unsigned char sha1[20];
};

typedef enum {
	WHENSPEC_RAW = 1,
	WHENSPEC_RFC2822,
	WHENSPEC_NOW
} whenspec_type;

struct recent_command {
	struct recent_command *prev;
	struct recent_command *next;
	char *buf;
};

/* Configured limits on output */
static unsigned long max_depth = 10;
static off_t max_packsize;
static int force_update;
static int pack_compression_level = Z_DEFAULT_COMPRESSION;
static int pack_compression_seen;

/* Stats and misc. counters */
static uintmax_t alloc_count;
static uintmax_t marks_set_count;
static uintmax_t object_count_by_type[1 << TYPE_BITS];
static uintmax_t duplicate_count_by_type[1 << TYPE_BITS];
static uintmax_t delta_count_by_type[1 << TYPE_BITS];
static uintmax_t delta_count_attempts_by_type[1 << TYPE_BITS];
static unsigned long object_count;
static unsigned long branch_count;
static unsigned long branch_load_count;
static int failure;
static FILE *pack_edges;
static unsigned int show_stats = 1;
static int global_argc;
static char **global_argv;

/* Memory pools */
static size_t mem_pool_alloc = 2*1024*1024 - sizeof(struct mem_pool);
static size_t total_allocd;
static struct mem_pool *mem_pool;

/* Atom management */
static unsigned int atom_table_sz = 4451;
static unsigned int atom_cnt;
static struct atom_str **atom_table;

/* The .pack file being generated */
static struct pack_idx_option pack_idx_opts;
static unsigned int pack_id;
static struct sha1file *pack_file;
static struct packed_git *pack_data;
static struct packed_git **all_packs;
static off_t pack_size;

/* Table of objects we've written. */
static unsigned int object_entry_alloc = 5000;
static struct object_entry_pool *blocks;
static struct object_entry *object_table[1 << 16];
static struct mark_set *marks;
static const char *export_marks_file;
static const char *import_marks_file;
static int import_marks_file_from_stream;
static int import_marks_file_ignore_missing;
static int relative_marks_paths;

/* Our last blob */
static struct last_object last_blob = { STRBUF_INIT, 0, 0, 0 };

/* Tree management */
static unsigned int tree_entry_alloc = 1000;
static void *avail_tree_entry;
static unsigned int avail_tree_table_sz = 100;
static struct avail_tree_content **avail_tree_table;
static struct strbuf old_tree = STRBUF_INIT;
static struct strbuf new_tree = STRBUF_INIT;

/* Branch data */
static unsigned long max_active_branches = 5;
static unsigned long cur_active_branches;
static unsigned long branch_table_sz = 1039;
static struct branch **branch_table;
static struct branch *active_branches;

/* Tag data */
static struct tag *first_tag;
static struct tag *last_tag;

/* Input stream parsing */
static whenspec_type whenspec = WHENSPEC_RAW;
static struct strbuf command_buf = STRBUF_INIT;
static int unread_command_buf;
static struct recent_command cmd_hist = {&cmd_hist, &cmd_hist, NULL};
static struct recent_command *cmd_tail = &cmd_hist;
static struct recent_command *rc_free;
static unsigned int cmd_save = 100;
static uintmax_t next_mark;
static struct strbuf new_data = STRBUF_INIT;
static int seen_data_command;
static int require_explicit_termination;

/* Signal handling */
static volatile sig_atomic_t checkpoint_requested;

/* Where to write output of cat-blob commands */
static int cat_blob_fd = STDOUT_FILENO;
|
|
|
|
|
2009-12-04 20:06:57 +03:00
|
|
|
static void parse_argv(void);
|
2010-11-28 22:45:58 +03:00
|
|
|
static void parse_cat_blob(void);
|
static void parse_ls(struct branch *b);

static void write_branch_report(FILE *rpt, struct branch *b)
{
	fprintf(rpt, "%s:\n", b->name);

	fprintf(rpt, "  status      :");
	if (b->active)
		fputs(" active", rpt);
	if (b->branch_tree.tree)
		fputs(" loaded", rpt);
	if (is_null_sha1(b->branch_tree.versions[1].sha1))
		fputs(" dirty", rpt);
	fputc('\n', rpt);

	fprintf(rpt, "  tip commit  : %s\n", sha1_to_hex(b->sha1));
	fprintf(rpt, "  old tree    : %s\n", sha1_to_hex(b->branch_tree.versions[0].sha1));
	fprintf(rpt, "  cur tree    : %s\n", sha1_to_hex(b->branch_tree.versions[1].sha1));
	fprintf(rpt, "  commit clock: %" PRIuMAX "\n", b->last_commit);

	fputs("  last pack   : ", rpt);
	if (b->pack_id < MAX_PACK_ID)
		fprintf(rpt, "%u", b->pack_id);
	fputc('\n', rpt);

	fputc('\n', rpt);
}

static void dump_marks_helper(FILE *, uintmax_t, struct mark_set *);

static void write_crash_report(const char *err)
{
	char *loc = git_path("fast_import_crash_%"PRIuMAX, (uintmax_t) getpid());
	FILE *rpt = fopen(loc, "w");
	struct branch *b;
	unsigned long lu;
	struct recent_command *rc;

	if (!rpt) {
		error("can't write crash report %s: %s", loc, strerror(errno));
		return;
	}

	fprintf(stderr, "fast-import: dumping crash report to %s\n", loc);

	fprintf(rpt, "fast-import crash report:\n");
	fprintf(rpt, "    fast-import process: %"PRIuMAX"\n", (uintmax_t) getpid());
	fprintf(rpt, "    parent process     : %"PRIuMAX"\n", (uintmax_t) getppid());
	fprintf(rpt, "    at %s\n", show_date(time(NULL), 0, DATE_LOCAL));
	fputc('\n', rpt);

	fputs("fatal: ", rpt);
	fputs(err, rpt);
	fputc('\n', rpt);

	fputc('\n', rpt);
	fputs("Most Recent Commands Before Crash\n", rpt);
	fputs("---------------------------------\n", rpt);
	for (rc = cmd_hist.next; rc != &cmd_hist; rc = rc->next) {
		if (rc->next == &cmd_hist)
			fputs("* ", rpt);
		else
			fputs("  ", rpt);
		fputs(rc->buf, rpt);
		fputc('\n', rpt);
	}

	fputc('\n', rpt);
	fputs("Active Branch LRU\n", rpt);
	fputs("-----------------\n", rpt);
	fprintf(rpt, "    active_branches = %lu cur, %lu max\n",
		cur_active_branches,
		max_active_branches);
	fputc('\n', rpt);
	fputs("  pos  clock name\n", rpt);
	fputs("  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n", rpt);
	for (b = active_branches, lu = 0; b; b = b->active_next_branch)
		fprintf(rpt, "  %2lu) %6" PRIuMAX" %s\n",
			++lu, b->last_commit, b->name);

	fputc('\n', rpt);
	fputs("Inactive Branches\n", rpt);
	fputs("-----------------\n", rpt);
	for (lu = 0; lu < branch_table_sz; lu++) {
		for (b = branch_table[lu]; b; b = b->table_next_branch)
			write_branch_report(rpt, b);
	}

	if (first_tag) {
		struct tag *tg;
		fputc('\n', rpt);
		fputs("Annotated Tags\n", rpt);
		fputs("--------------\n", rpt);
		for (tg = first_tag; tg; tg = tg->next_tag) {
			fputs(sha1_to_hex(tg->sha1), rpt);
			fputc(' ', rpt);
			fputs(tg->name, rpt);
			fputc('\n', rpt);
		}
	}

	fputc('\n', rpt);
	fputs("Marks\n", rpt);
	fputs("-----\n", rpt);
	if (export_marks_file)
		fprintf(rpt, "  exported to %s\n", export_marks_file);
	else
		dump_marks_helper(rpt, 0, marks);

	fputc('\n', rpt);
	fputs("-------------------\n", rpt);
	fputs("END OF CRASH REPORT\n", rpt);
	fclose(rpt);
}

static void end_packfile(void);
static void unkeep_all_packs(void);
static void dump_marks(void);

static NORETURN void die_nicely(const char *err, va_list params)
{
	static int zombie;
	char message[2 * PATH_MAX];

	vsnprintf(message, sizeof(message), err, params);
	fputs("fatal: ", stderr);
	fputs(message, stderr);
	fputc('\n', stderr);

	if (!zombie) {
		zombie = 1;
		write_crash_report(message);
		end_packfile();
		unkeep_all_packs();
		dump_marks();
	}
	exit(128);
}

#ifndef SIGUSR1	/* Windows, for example */

static void set_checkpoint_signal(void)
{
}

#else

static void checkpoint_signal(int signo)
{
	checkpoint_requested = 1;
}

static void set_checkpoint_signal(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = checkpoint_signal;
	sigemptyset(&sa.sa_mask);
	sa.sa_flags = SA_RESTART;
	sigaction(SIGUSR1, &sa, NULL);
}

#endif

static void alloc_objects(unsigned int cnt)
{
	struct object_entry_pool *b;

	b = xmalloc(sizeof(struct object_entry_pool)
		+ cnt * sizeof(struct object_entry));
	b->next_pool = blocks;
	b->next_free = b->entries;
	b->end = b->entries + cnt;
	blocks = b;
	alloc_count += cnt;
}

static struct object_entry *new_object(unsigned char *sha1)
{
	struct object_entry *e;

	if (blocks->next_free == blocks->end)
		alloc_objects(object_entry_alloc);

	e = blocks->next_free++;
	hashcpy(e->idx.sha1, sha1);
	return e;
}

static struct object_entry *find_object(unsigned char *sha1)
{
	unsigned int h = sha1[0] << 8 | sha1[1];
	struct object_entry *e;
	for (e = object_table[h]; e; e = e->next)
		if (!hashcmp(sha1, e->idx.sha1))
			return e;
	return NULL;
}

static struct object_entry *insert_object(unsigned char *sha1)
{
	unsigned int h = sha1[0] << 8 | sha1[1];
	struct object_entry *e = object_table[h];

	while (e) {
		if (!hashcmp(sha1, e->idx.sha1))
			return e;
		e = e->next;
	}

	e = new_object(sha1);
	e->next = object_table[h];
	e->idx.offset = 0;
	object_table[h] = e;
	return e;
}

static unsigned int hc_str(const char *s, size_t len)
{
	unsigned int r = 0;
	while (len-- > 0)
		r = r * 31 + *s++;
	return r;
}

static void *pool_alloc(size_t len)
{
	struct mem_pool *p;
	void *r;

	/* round up to a 'uintmax_t' alignment */
	if (len & (sizeof(uintmax_t) - 1))
		len += sizeof(uintmax_t) - (len & (sizeof(uintmax_t) - 1));

	for (p = mem_pool; p; p = p->next_pool)
		if ((p->end - p->next_free >= len))
			break;

	if (!p) {
		if (len >= (mem_pool_alloc/2)) {
			total_allocd += len;
			return xmalloc(len);
		}
		total_allocd += sizeof(struct mem_pool) + mem_pool_alloc;
		p = xmalloc(sizeof(struct mem_pool) + mem_pool_alloc);
		p->next_pool = mem_pool;
		p->next_free = (char *) p->space;
		p->end = p->next_free + mem_pool_alloc;
		mem_pool = p;
	}

	r = p->next_free;
	p->next_free += len;
	return r;
}

static void *pool_calloc(size_t count, size_t size)
{
	size_t len = count * size;
	void *r = pool_alloc(len);
	memset(r, 0, len);
	return r;
}

static char *pool_strdup(const char *s)
{
	char *r = pool_alloc(strlen(s) + 1);
	strcpy(r, s);
	return r;
}

static void insert_mark(uintmax_t idnum, struct object_entry *oe)
{
	struct mark_set *s = marks;
	while ((idnum >> s->shift) >= 1024) {
		s = pool_calloc(1, sizeof(struct mark_set));
		s->shift = marks->shift + 10;
		s->data.sets[0] = marks;
		marks = s;
	}
	while (s->shift) {
		uintmax_t i = idnum >> s->shift;
		idnum -= i << s->shift;
		if (!s->data.sets[i]) {
			s->data.sets[i] = pool_calloc(1, sizeof(struct mark_set));
			s->data.sets[i]->shift = s->shift - 10;
		}
		s = s->data.sets[i];
	}
	if (!s->data.marked[idnum])
		marks_set_count++;
	s->data.marked[idnum] = oe;
}

static struct object_entry *find_mark(uintmax_t idnum)
{
	uintmax_t orig_idnum = idnum;
	struct mark_set *s = marks;
	struct object_entry *oe = NULL;
	if ((idnum >> s->shift) < 1024) {
		while (s && s->shift) {
			uintmax_t i = idnum >> s->shift;
			idnum -= i << s->shift;
			s = s->data.sets[i];
		}
		if (s)
			oe = s->data.marked[idnum];
	}
	if (!oe)
		die("mark :%" PRIuMAX " not declared", orig_idnum);
	return oe;
}
|
2007-02-06 08:43:59 +03:00
|
|
|
static struct atom_str *to_atom(const char *s, unsigned short len)
|
2006-08-14 08:58:19 +04:00
|
|
|
{
|
|
|
|
unsigned int hc = hc_str(s, len) % atom_table_sz;
|
|
|
|
struct atom_str *c;
|
|
|
|
|
|
|
|
for (c = atom_table[hc]; c; c = c->next_atom)
|
|
|
|
if (c->str_len == len && !strncmp(s, c->str_dat, len))
|
|
|
|
return c;
|
|
|
|
|
|
|
|
c = pool_alloc(sizeof(struct atom_str) + len + 1);
|
|
|
|
c->str_len = len;
|
|
|
|
strncpy(c->str_dat, s, len);
|
|
|
|
c->str_dat[len] = 0;
|
|
|
|
c->next_atom = atom_table[hc];
|
|
|
|
atom_table[hc] = c;
|
|
|
|
atom_cnt++;
|
|
|
|
return c;
|
|
|
|
}
|
|
|
|
|

static struct branch *lookup_branch(const char *name)
{
	unsigned int hc = hc_str(name, strlen(name)) % branch_table_sz;
	struct branch *b;

	for (b = branch_table[hc]; b; b = b->table_next_branch)
		if (!strcmp(name, b->name))
			return b;
	return NULL;
}

static struct branch *new_branch(const char *name)
{
	unsigned int hc = hc_str(name, strlen(name)) % branch_table_sz;
	struct branch *b = lookup_branch(name);

	if (b)
		die("Invalid attempt to create duplicate branch: %s", name);
	if (check_refname_format(name, REFNAME_ALLOW_ONELEVEL))
		die("Branch name doesn't conform to GIT standards: %s", name);

	b = pool_calloc(1, sizeof(struct branch));
	b->name = pool_strdup(name);
	b->table_next_branch = branch_table[hc];
	b->branch_tree.versions[0].mode = S_IFDIR;
	b->branch_tree.versions[1].mode = S_IFDIR;
	b->num_notes = 0;
	b->active = 0;
	b->pack_id = MAX_PACK_ID;
	branch_table[hc] = b;
	branch_count++;
	return b;
}

static unsigned int hc_entries(unsigned int cnt)
{
	cnt = cnt & 7 ? (cnt / 8) + 1 : cnt / 8;
	return cnt < avail_tree_table_sz ? cnt : avail_tree_table_sz - 1;
}

static struct tree_content *new_tree_content(unsigned int cnt)
{
	struct avail_tree_content *f, *l = NULL;
	struct tree_content *t;
	unsigned int hc = hc_entries(cnt);

	for (f = avail_tree_table[hc]; f; l = f, f = f->next_avail)
		if (f->entry_capacity >= cnt)
			break;

	if (f) {
		if (l)
			l->next_avail = f->next_avail;
		else
			avail_tree_table[hc] = f->next_avail;
	} else {
		cnt = cnt & 7 ? ((cnt / 8) + 1) * 8 : cnt;
		f = pool_alloc(sizeof(*t) + sizeof(t->entries[0]) * cnt);
		f->entry_capacity = cnt;
	}

	t = (struct tree_content*)f;
	t->entry_count = 0;
	t->delta_depth = 0;
	return t;
}

static void release_tree_entry(struct tree_entry *e);
static void release_tree_content(struct tree_content *t)
{
	struct avail_tree_content *f = (struct avail_tree_content*)t;
	unsigned int hc = hc_entries(f->entry_capacity);
	f->next_avail = avail_tree_table[hc];
	avail_tree_table[hc] = f;
}

static void release_tree_content_recursive(struct tree_content *t)
{
	unsigned int i;
	for (i = 0; i < t->entry_count; i++)
		release_tree_entry(t->entries[i]);
	release_tree_content(t);
}

static struct tree_content *grow_tree_content(
	struct tree_content *t,
	int amt)
{
	struct tree_content *r = new_tree_content(t->entry_count + amt);
	r->entry_count = t->entry_count;
	r->delta_depth = t->delta_depth;
	memcpy(r->entries,t->entries,t->entry_count*sizeof(t->entries[0]));
	release_tree_content(t);
	return r;
}

static struct tree_entry *new_tree_entry(void)
{
	struct tree_entry *e;

	if (!avail_tree_entry) {
		unsigned int n = tree_entry_alloc;
		total_allocd += n * sizeof(struct tree_entry);
		avail_tree_entry = e = xmalloc(n * sizeof(struct tree_entry));
		while (n-- > 1) {
			*((void**)e) = e + 1;
			e++;
		}
		*((void**)e) = NULL;
	}

	e = avail_tree_entry;
	avail_tree_entry = *((void**)e);
	return e;
}

static void release_tree_entry(struct tree_entry *e)
{
	if (e->tree)
		release_tree_content_recursive(e->tree);
	*((void**)e) = avail_tree_entry;
	avail_tree_entry = e;
}

static struct tree_content *dup_tree_content(struct tree_content *s)
{
	struct tree_content *d;
	struct tree_entry *a, *b;
	unsigned int i;

	if (!s)
		return NULL;
	d = new_tree_content(s->entry_count);
	for (i = 0; i < s->entry_count; i++) {
		a = s->entries[i];
		b = new_tree_entry();
		memcpy(b, a, sizeof(*a));
		if (a->tree && is_null_sha1(b->versions[1].sha1))
			b->tree = dup_tree_content(a->tree);
		else
			b->tree = NULL;
		d->entries[i] = b;
	}
	d->entry_count = s->entry_count;
	d->delta_depth = s->delta_depth;

	return d;
}

static void start_packfile(void)
{
	static char tmp_file[PATH_MAX];
	struct packed_git *p;
	struct pack_header hdr;
	int pack_fd;

	pack_fd = odb_mkstemp(tmp_file, sizeof(tmp_file),
			      "pack/tmp_pack_XXXXXX");
	p = xcalloc(1, sizeof(*p) + strlen(tmp_file) + 2);
	strcpy(p->pack_name, tmp_file);
	p->pack_fd = pack_fd;
	p->do_not_close = 1;
	pack_file = sha1fd(pack_fd, p->pack_name);

	hdr.hdr_signature = htonl(PACK_SIGNATURE);
	hdr.hdr_version = htonl(2);
	hdr.hdr_entries = 0;
	sha1write(pack_file, &hdr, sizeof(hdr));

	pack_data = p;
	pack_size = sizeof(hdr);
	object_count = 0;

	all_packs = xrealloc(all_packs, sizeof(*all_packs) * (pack_id + 1));
	all_packs[pack_id] = p;
}

static const char *create_index(void)
{
	const char *tmpfile;
	struct pack_idx_entry **idx, **c, **last;
	struct object_entry *e;
	struct object_entry_pool *o;

	/* Build the table of object IDs. */
	idx = xmalloc(object_count * sizeof(*idx));
	c = idx;
	for (o = blocks; o; o = o->next_pool)
		for (e = o->next_free; e-- != o->entries;)
			if (pack_id == e->pack_id)
				*c++ = &e->idx;
	last = idx + object_count;
	if (c != last)
		die("internal consistency error creating the index");

	tmpfile = write_idx_file(NULL, idx, object_count, &pack_idx_opts, pack_data->sha1);
	free(idx);
	return tmpfile;
}

static char *keep_pack(const char *curr_index_name)
{
	static char name[PATH_MAX];
	static const char *keep_msg = "fast-import";
	int keep_fd;

	keep_fd = odb_pack_keep(name, sizeof(name), pack_data->sha1);
	if (keep_fd < 0)
		die_errno("cannot create keep file");
	write_or_die(keep_fd, keep_msg, strlen(keep_msg));
	if (close(keep_fd))
		die_errno("failed to write keep file");

	snprintf(name, sizeof(name), "%s/pack/pack-%s.pack",
		 get_object_directory(), sha1_to_hex(pack_data->sha1));
	if (move_temp_to_file(pack_data->pack_name, name))
		die("cannot store pack file");

	snprintf(name, sizeof(name), "%s/pack/pack-%s.idx",
		 get_object_directory(), sha1_to_hex(pack_data->sha1));
	if (move_temp_to_file(curr_index_name, name))
		die("cannot store index file");
	free((void *)curr_index_name);
	return name;
}

static void unkeep_all_packs(void)
{
	static char name[PATH_MAX];
	int k;

	for (k = 0; k < pack_id; k++) {
		struct packed_git *p = all_packs[k];
		snprintf(name, sizeof(name), "%s/pack/pack-%s.keep",
			 get_object_directory(), sha1_to_hex(p->sha1));
		unlink_or_warn(name);
	}
}

static void end_packfile(void)
{
	struct packed_git *old_p = pack_data, *new_p;

	clear_delta_base_cache();
	if (object_count) {
		unsigned char cur_pack_sha1[20];
		char *idx_name;
		int i;
		struct branch *b;
		struct tag *t;

		close_pack_windows(pack_data);
		sha1close(pack_file, cur_pack_sha1, 0);
		fixup_pack_header_footer(pack_data->pack_fd, pack_data->sha1,
					 pack_data->pack_name, object_count,
					 cur_pack_sha1, pack_size);
		close(pack_data->pack_fd);
		idx_name = keep_pack(create_index());

		/* Register the packfile with core git's machinery. */
		new_p = add_packed_git(idx_name, strlen(idx_name), 1);
		if (!new_p)
			die("core git rejected index %s", idx_name);
		all_packs[pack_id] = new_p;
		install_packed_git(new_p);

		/* Print the boundary */
		if (pack_edges) {
			fprintf(pack_edges, "%s:", new_p->pack_name);
			for (i = 0; i < branch_table_sz; i++) {
				for (b = branch_table[i]; b; b = b->table_next_branch) {
					if (b->pack_id == pack_id)
						fprintf(pack_edges, " %s", sha1_to_hex(b->sha1));
				}
			}
			for (t = first_tag; t; t = t->next_tag) {
				if (t->pack_id == pack_id)
					fprintf(pack_edges, " %s", sha1_to_hex(t->sha1));
			}
			fputc('\n', pack_edges);
			fflush(pack_edges);
		}

		pack_id++;
	}
	else {
		close(old_p->pack_fd);
		unlink_or_warn(old_p->pack_name);
	}
	free(old_p);

	/* We can't carry a delta across packfiles. */
	strbuf_release(&last_blob.data);
	last_blob.offset = 0;
	last_blob.depth = 0;
}

static void cycle_packfile(void)
{
	end_packfile();
	start_packfile();
}

static int store_object(
	enum object_type type,
	struct strbuf *dat,
	struct last_object *last,
	unsigned char *sha1out,
	uintmax_t mark)
{
	void *out, *delta;
	struct object_entry *e;
	unsigned char hdr[96];
	unsigned char sha1[20];
	unsigned long hdrlen, deltalen;
	git_SHA_CTX c;
	git_zstream s;

	hdrlen = sprintf((char *)hdr,"%s %lu", typename(type),
		(unsigned long)dat->len) + 1;
	git_SHA1_Init(&c);
	git_SHA1_Update(&c, hdr, hdrlen);
	git_SHA1_Update(&c, dat->buf, dat->len);
	git_SHA1_Final(sha1, &c);
	if (sha1out)
		hashcpy(sha1out, sha1);

	e = insert_object(sha1);
	if (mark)
		insert_mark(mark, e);
	if (e->idx.offset) {
		duplicate_count_by_type[type]++;
		return 1;
	} else if (find_sha1_pack(sha1, packed_git)) {
		e->type = type;
		e->pack_id = MAX_PACK_ID;
		e->idx.offset = 1; /* just not zero! */
		duplicate_count_by_type[type]++;
		return 1;
	}

	if (last && last->data.buf && last->depth < max_depth && dat->len > 20) {
		delta_count_attempts_by_type[type]++;
		delta = diff_delta(last->data.buf, last->data.len,
			dat->buf, dat->len,
			&deltalen, dat->len - 20);
	} else
		delta = NULL;

	memset(&s, 0, sizeof(s));
	git_deflate_init(&s, pack_compression_level);
	if (delta) {
		s.next_in = delta;
		s.avail_in = deltalen;
	} else {
		s.next_in = (void *)dat->buf;
		s.avail_in = dat->len;
	}
	s.avail_out = git_deflate_bound(&s, s.avail_in);
	s.next_out = out = xmalloc(s.avail_out);
	while (git_deflate(&s, Z_FINISH) == Z_OK)
		; /* nothing */
	git_deflate_end(&s);

	/* Determine if we should auto-checkpoint. */
	if ((max_packsize && (pack_size + 60 + s.total_out) > max_packsize)
		|| (pack_size + 60 + s.total_out) < pack_size) {

		/* This new object needs to *not* have the current pack_id. */
		e->pack_id = pack_id + 1;
		cycle_packfile();

		/* We cannot carry a delta into the new pack. */
		if (delta) {
			free(delta);
			delta = NULL;

			memset(&s, 0, sizeof(s));
			git_deflate_init(&s, pack_compression_level);
			s.next_in = (void *)dat->buf;
			s.avail_in = dat->len;
			s.avail_out = git_deflate_bound(&s, s.avail_in);
			s.next_out = out = xrealloc(out, s.avail_out);
			while (git_deflate(&s, Z_FINISH) == Z_OK)
				; /* nothing */
			git_deflate_end(&s);
		}
	}

	e->type = type;
	e->pack_id = pack_id;
	e->idx.offset = pack_size;
	object_count++;
	object_count_by_type[type]++;

	crc32_begin(pack_file);

	if (delta) {
		off_t ofs = e->idx.offset - last->offset;
		unsigned pos = sizeof(hdr) - 1;

		delta_count_by_type[type]++;
		e->depth = last->depth + 1;

		hdrlen = encode_in_pack_object_header(OBJ_OFS_DELTA, deltalen, hdr);
		sha1write(pack_file, hdr, hdrlen);
		pack_size += hdrlen;

		hdr[pos] = ofs & 127;
		while (ofs >>= 7)
			hdr[--pos] = 128 | (--ofs & 127);
		sha1write(pack_file, hdr + pos, sizeof(hdr) - pos);
		pack_size += sizeof(hdr) - pos;
	} else {
		e->depth = 0;
		hdrlen = encode_in_pack_object_header(type, dat->len, hdr);
		sha1write(pack_file, hdr, hdrlen);
		pack_size += hdrlen;
	}

	sha1write(pack_file, out, s.total_out);
	pack_size += s.total_out;

	e->idx.crc32 = crc32_end(pack_file);

	free(out);
	free(delta);
	if (last) {
		if (last->no_swap) {
			last->data = *dat;
		} else {
			strbuf_swap(&last->data, dat);
		}
		last->offset = e->idx.offset;
		last->depth = e->depth;
	}
	return 0;
}
|
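/*
 * Worked example (added annotation, not part of the original source):
 * the OBJ_OFS_DELTA header written above stores the back-offset to the
 * delta base in a variable-length base-128 encoding, most significant
 * byte first, with a continuation bit and an off-by-one applied to each
 * continuation byte.  For ofs = 300:
 *
 *     hdr[pos]   = 300 & 127       = 44   (0x2c)
 *     300 >> 7   = 2, minus one    = 1
 *     hdr[--pos] = 128 | (1 & 127) = 0x81
 *
 * so the bytes emitted are { 0x81, 0x2c }, and a reader rebuilds the
 * offset as ((1 + 1) << 7) | 44 = 300.
 */
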
static void truncate_pack(struct sha1file_checkpoint *checkpoint)
{
	if (sha1file_truncate(pack_file, checkpoint))
		die_errno("cannot truncate pack to skip duplicate");
	pack_size = checkpoint->offset;
}

static void stream_blob(uintmax_t len, unsigned char *sha1out, uintmax_t mark)
{
	size_t in_sz = 64 * 1024, out_sz = 64 * 1024;
	unsigned char *in_buf = xmalloc(in_sz);
	unsigned char *out_buf = xmalloc(out_sz);
	struct object_entry *e;
	unsigned char sha1[20];
	unsigned long hdrlen;
	off_t offset;
	git_SHA_CTX c;
	git_zstream s;
	struct sha1file_checkpoint checkpoint;
	int status = Z_OK;

	/* Determine if we should auto-checkpoint. */
	if ((max_packsize && (pack_size + 60 + len) > max_packsize)
		|| (pack_size + 60 + len) < pack_size)
		cycle_packfile();

	sha1file_checkpoint(pack_file, &checkpoint);
	offset = checkpoint.offset;

	hdrlen = snprintf((char *)out_buf, out_sz, "blob %" PRIuMAX, len) + 1;
	if (out_sz <= hdrlen)
		die("impossibly large object header");

	git_SHA1_Init(&c);
	git_SHA1_Update(&c, out_buf, hdrlen);

	crc32_begin(pack_file);

	memset(&s, 0, sizeof(s));
	git_deflate_init(&s, pack_compression_level);

	hdrlen = encode_in_pack_object_header(OBJ_BLOB, len, out_buf);
	if (out_sz <= hdrlen)
		die("impossibly large object header");

	s.next_out = out_buf + hdrlen;
	s.avail_out = out_sz - hdrlen;

	while (status != Z_STREAM_END) {
		if (0 < len && !s.avail_in) {
			size_t cnt = in_sz < len ? in_sz : (size_t)len;
			size_t n = fread(in_buf, 1, cnt, stdin);
			if (!n && feof(stdin))
				die("EOF in data (%" PRIuMAX " bytes remaining)", len);

			git_SHA1_Update(&c, in_buf, n);
			s.next_in = in_buf;
			s.avail_in = n;
			len -= n;
		}

		status = git_deflate(&s, len ? 0 : Z_FINISH);

		if (!s.avail_out || status == Z_STREAM_END) {
			size_t n = s.next_out - out_buf;
			sha1write(pack_file, out_buf, n);
			pack_size += n;
			s.next_out = out_buf;
			s.avail_out = out_sz;
		}

		switch (status) {
		case Z_OK:
		case Z_BUF_ERROR:
		case Z_STREAM_END:
			continue;
		default:
			die("unexpected deflate failure: %d", status);
		}
	}
	git_deflate_end(&s);
	git_SHA1_Final(sha1, &c);

	if (sha1out)
		hashcpy(sha1out, sha1);

	e = insert_object(sha1);

	if (mark)
		insert_mark(mark, e);

	if (e->idx.offset) {
		duplicate_count_by_type[OBJ_BLOB]++;
		truncate_pack(&checkpoint);

	} else if (find_sha1_pack(sha1, packed_git)) {
		e->type = OBJ_BLOB;
		e->pack_id = MAX_PACK_ID;
		e->idx.offset = 1; /* just not zero! */
		duplicate_count_by_type[OBJ_BLOB]++;
		truncate_pack(&checkpoint);

	} else {
		e->depth = 0;
		e->type = OBJ_BLOB;
		e->pack_id = pack_id;
		e->idx.offset = offset;
		e->idx.crc32 = crc32_end(pack_file);
		object_count++;
		object_count_by_type[OBJ_BLOB]++;
	}

	free(in_buf);
	free(out_buf);
}

/* All calls must be guarded by find_object() or find_mark() to
 * ensure the 'struct object_entry' passed was written by this
 * process instance.  We unpack the entry by the offset, avoiding
 * the need for the corresponding .idx file.  This unpacking rule
 * works because we only use OBJ_REF_DELTA within the packfiles
 * created by fast-import.
 *
 * oe must not be NULL.  Such an oe usually comes from giving
 * an unknown SHA-1 to find_object() or an undefined mark to
 * find_mark().  Callers must test for this condition and use
 * the standard read_sha1_file() when it happens.
 *
 * oe->pack_id must not be MAX_PACK_ID.  Such an oe is usually from
 * find_mark(), where the mark was reloaded from an existing marks
 * file and is referencing an object that this fast-import process
 * instance did not write out to a packfile.  Callers must test for
 * this condition and use read_sha1_file() instead.
 */
static void *gfi_unpack_entry(
	struct object_entry *oe,
	unsigned long *sizep)
{
	enum object_type type;
	struct packed_git *p = all_packs[oe->pack_id];
	if (p == pack_data && p->pack_size < (pack_size + 20)) {
		/* The object is stored in the packfile we are writing to
		 * and we have modified it since the last time we scanned
		 * back to read a previously written object.  If an old
		 * window covered [p->pack_size, p->pack_size + 20) its
		 * data is stale and is not valid.  Closing all windows
		 * and updating the packfile length ensures we can read
		 * the newly written data.
		 */
		close_pack_windows(p);
		sha1flush(pack_file);

		/* We have to offer 20 bytes additional on the end of
		 * the packfile as the core unpacker code assumes the
		 * footer is present at the file end and must promise
		 * at least 20 bytes within any window it maps.  But
		 * we don't actually create the footer here.
		 */
		p->pack_size = pack_size + 20;
	}
	return unpack_entry(p, oe->idx.offset, &type, sizep);
}

static const char *get_mode(const char *str, uint16_t *modep)
{
	unsigned char c;
	uint16_t mode = 0;

	while ((c = *str++) != ' ') {
		if (c < '0' || c > '7')
			return NULL;
		mode = (mode << 3) + (c - '0');
	}
	*modep = mode;
	return str;
}

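/*
 * Example (added annotation, not part of the original source):
 * get_mode() parses the octal mode that precedes each name in a
 * canonical tree entry.  Given the raw bytes "100644 hello.c\0"
 * followed by a 20-byte SHA-1, get_mode() sets *modep to 0100644
 * (S_IFREG | 0644) and returns a pointer to "hello.c"; it returns
 * NULL if a non-octal digit appears before the space.
 */
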
static void load_tree(struct tree_entry *root)
{
	unsigned char *sha1 = root->versions[1].sha1;
	struct object_entry *myoe;
	struct tree_content *t;
	unsigned long size;
	char *buf;
	const char *c;

	root->tree = t = new_tree_content(8);
	if (is_null_sha1(sha1))
		return;

	myoe = find_object(sha1);
	if (myoe && myoe->pack_id != MAX_PACK_ID) {
		if (myoe->type != OBJ_TREE)
			die("Not a tree: %s", sha1_to_hex(sha1));
		t->delta_depth = myoe->depth;
		buf = gfi_unpack_entry(myoe, &size);
		if (!buf)
			die("Can't load tree %s", sha1_to_hex(sha1));
	} else {
		enum object_type type;
		buf = read_sha1_file(sha1, &type, &size);
		if (!buf || type != OBJ_TREE)
			die("Can't load tree %s", sha1_to_hex(sha1));
	}

	c = buf;
	while (c != (buf + size)) {
		struct tree_entry *e = new_tree_entry();

		if (t->entry_count == t->entry_capacity)
			root->tree = t = grow_tree_content(t, t->entry_count);
		t->entries[t->entry_count++] = e;

		e->tree = NULL;
		c = get_mode(c, &e->versions[1].mode);
		if (!c)
			die("Corrupt mode in %s", sha1_to_hex(sha1));
		e->versions[0].mode = e->versions[1].mode;
		e->name = to_atom(c, strlen(c));
		c += e->name->str_len + 1;
		hashcpy(e->versions[0].sha1, (unsigned char *)c);
		hashcpy(e->versions[1].sha1, (unsigned char *)c);
		c += 20;
	}
	free(buf);
}

static int tecmp0 (const void *_a, const void *_b)
{
	struct tree_entry *a = *((struct tree_entry**)_a);
	struct tree_entry *b = *((struct tree_entry**)_b);
	return base_name_compare(
		a->name->str_dat, a->name->str_len, a->versions[0].mode,
		b->name->str_dat, b->name->str_len, b->versions[0].mode);
}

static int tecmp1 (const void *_a, const void *_b)
{
	struct tree_entry *a = *((struct tree_entry**)_a);
	struct tree_entry *b = *((struct tree_entry**)_b);
	return base_name_compare(
		a->name->str_dat, a->name->str_len, a->versions[1].mode,
		b->name->str_dat, b->name->str_len, b->versions[1].mode);
}

static void mktree(struct tree_content *t, int v, struct strbuf *b)
{
	size_t maxlen = 0;
	unsigned int i;

	if (!v)
		qsort(t->entries,t->entry_count,sizeof(t->entries[0]),tecmp0);
	else
		qsort(t->entries,t->entry_count,sizeof(t->entries[0]),tecmp1);

	for (i = 0; i < t->entry_count; i++) {
		if (t->entries[i]->versions[v].mode)
			maxlen += t->entries[i]->name->str_len + 34;
	}

	strbuf_reset(b);
	strbuf_grow(b, maxlen);
	for (i = 0; i < t->entry_count; i++) {
		struct tree_entry *e = t->entries[i];
		if (!e->versions[v].mode)
			continue;
		strbuf_addf(b, "%o %s%c",
			(unsigned int)(e->versions[v].mode & ~NO_DELTA),
			e->name->str_dat, '\0');
		strbuf_add(b, e->versions[v].sha1, 20);
	}
}

static void store_tree(struct tree_entry *root)
{
	struct tree_content *t = root->tree;
	unsigned int i, j, del;
	struct last_object lo = { STRBUF_INIT, 0, 0, /* no_swap */ 1 };
	struct object_entry *le = NULL;

	if (!is_null_sha1(root->versions[1].sha1))
		return;

	for (i = 0; i < t->entry_count; i++) {
		if (t->entries[i]->tree)
			store_tree(t->entries[i]);
	}

	if (!(root->versions[0].mode & NO_DELTA))
		le = find_object(root->versions[0].sha1);
	if (S_ISDIR(root->versions[0].mode) && le && le->pack_id == pack_id) {
		mktree(t, 0, &old_tree);
		lo.data = old_tree;
		lo.offset = le->idx.offset;
		lo.depth = t->delta_depth;
	}

	mktree(t, 1, &new_tree);
	store_object(OBJ_TREE, &new_tree, &lo, root->versions[1].sha1, 0);

	t->delta_depth = lo.depth;
	for (i = 0, j = 0, del = 0; i < t->entry_count; i++) {
		struct tree_entry *e = t->entries[i];
		if (e->versions[1].mode) {
			e->versions[0].mode = e->versions[1].mode;
			hashcpy(e->versions[0].sha1, e->versions[1].sha1);
			t->entries[j++] = e;
		} else {
			release_tree_entry(e);
			del++;
		}
	}
	t->entry_count -= del;
}

static void tree_content_replace(
	struct tree_entry *root,
	const unsigned char *sha1,
	const uint16_t mode,
	struct tree_content *newtree)
{
	if (!S_ISDIR(mode))
		die("Root cannot be a non-directory");
	hashclr(root->versions[0].sha1);
	hashcpy(root->versions[1].sha1, sha1);
	if (root->tree)
		release_tree_content_recursive(root->tree);
	root->tree = newtree;
}

static int tree_content_set(
	struct tree_entry *root,
	const char *p,
	const unsigned char *sha1,
	const uint16_t mode,
	struct tree_content *subtree)
{
	struct tree_content *t;
	const char *slash1;
	unsigned int i, n;
	struct tree_entry *e;

	slash1 = strchr(p, '/');
	if (slash1)
		n = slash1 - p;
	else
		n = strlen(p);
	if (!n)
		die("Empty path component found in input");
	if (!slash1 && !S_ISDIR(mode) && subtree)
		die("Non-directories cannot have subtrees");

	if (!root->tree)
		load_tree(root);
	t = root->tree;
	for (i = 0; i < t->entry_count; i++) {
		e = t->entries[i];
		if (e->name->str_len == n && !strncmp_icase(p, e->name->str_dat, n)) {
			if (!slash1) {
				if (!S_ISDIR(mode)
						&& e->versions[1].mode == mode
						&& !hashcmp(e->versions[1].sha1, sha1))
					return 0;
				e->versions[1].mode = mode;
				hashcpy(e->versions[1].sha1, sha1);
				if (e->tree)
					release_tree_content_recursive(e->tree);
				e->tree = subtree;

				/*
				 * We need to leave e->versions[0].sha1 alone
				 * to avoid modifying the preimage tree used
				 * when writing out the parent directory.
				 * But after replacing the subdir with a
				 * completely different one, it's not a good
				 * delta base any more, and besides, we've
				 * thrown away the tree entries needed to
				 * make a delta against it.
				 *
				 * So let's just explicitly disable deltas
				 * for the subtree.
				 */
				if (S_ISDIR(e->versions[0].mode))
					e->versions[0].mode |= NO_DELTA;

				hashclr(root->versions[1].sha1);
				return 1;
			}
			if (!S_ISDIR(e->versions[1].mode)) {
				e->tree = new_tree_content(8);
				e->versions[1].mode = S_IFDIR;
			}
			if (!e->tree)
				load_tree(e);
			if (tree_content_set(e, slash1 + 1, sha1, mode, subtree)) {
				hashclr(root->versions[1].sha1);
				return 1;
			}
			return 0;
		}
	}

	if (t->entry_count == t->entry_capacity)
		root->tree = t = grow_tree_content(t, t->entry_count);
	e = new_tree_entry();
	e->name = to_atom(p, n);
	e->versions[0].mode = 0;
	hashclr(e->versions[0].sha1);
	t->entries[t->entry_count++] = e;
	if (slash1) {
		e->tree = new_tree_content(8);
		e->versions[1].mode = S_IFDIR;
		tree_content_set(e, slash1 + 1, sha1, mode, subtree);
	} else {
		e->tree = subtree;
		e->versions[1].mode = mode;
		hashcpy(e->versions[1].sha1, sha1);
	}
	hashclr(root->versions[1].sha1);
	return 1;
}

static int tree_content_remove(
	struct tree_entry *root,
	const char *p,
	struct tree_entry *backup_leaf,
	int allow_root)
{
	struct tree_content *t;
	const char *slash1;
	unsigned int i, n;
	struct tree_entry *e;

	slash1 = strchr(p, '/');
	if (slash1)
		n = slash1 - p;
	else
		n = strlen(p);

	if (!root->tree)
		load_tree(root);

	if (!*p && allow_root) {
		e = root;
		goto del_entry;
	}

	t = root->tree;
	for (i = 0; i < t->entry_count; i++) {
		e = t->entries[i];
		if (e->name->str_len == n && !strncmp_icase(p, e->name->str_dat, n)) {
			if (slash1 && !S_ISDIR(e->versions[1].mode))
				/*
				 * If p names a file in some subdirectory, and a
				 * file or symlink matching the name of the
				 * parent directory of p exists, then p cannot
				 * exist and need not be deleted.
				 */
				return 1;
			if (!slash1 || !S_ISDIR(e->versions[1].mode))
				goto del_entry;
			if (!e->tree)
				load_tree(e);
			if (tree_content_remove(e, slash1 + 1, backup_leaf, 0)) {
				for (n = 0; n < e->tree->entry_count; n++) {
					if (e->tree->entries[n]->versions[1].mode) {
						hashclr(root->versions[1].sha1);
						return 1;
					}
				}
				backup_leaf = NULL;
				goto del_entry;
			}
			return 0;
		}
	}
	return 0;

del_entry:
	if (backup_leaf)
		memcpy(backup_leaf, e, sizeof(*backup_leaf));
	else if (e->tree)
		release_tree_content_recursive(e->tree);
	e->tree = NULL;
	e->versions[1].mode = 0;
	hashclr(e->versions[1].sha1);
	hashclr(root->versions[1].sha1);
	return 1;
}

static int tree_content_get(
	struct tree_entry *root,
	const char *p,
	struct tree_entry *leaf,
	int allow_root)
{
	struct tree_content *t;
	const char *slash1;
	unsigned int i, n;
	struct tree_entry *e;

	slash1 = strchr(p, '/');
	if (slash1)
		n = slash1 - p;
	else
		n = strlen(p);
	if (!n && !allow_root)
		die("Empty path component found in input");

	if (!root->tree)
		load_tree(root);

	if (!n) {
		e = root;
		goto found_entry;
	}

	t = root->tree;
	for (i = 0; i < t->entry_count; i++) {
		e = t->entries[i];
		if (e->name->str_len == n && !strncmp_icase(p, e->name->str_dat, n)) {
			if (!slash1)
				goto found_entry;
			if (!S_ISDIR(e->versions[1].mode))
				return 0;
			if (!e->tree)
				load_tree(e);
			return tree_content_get(e, slash1 + 1, leaf, 0);
		}
	}
	return 0;

found_entry:
	memcpy(leaf, e, sizeof(*leaf));
	if (e->tree && is_null_sha1(e->versions[1].sha1))
		leaf->tree = dup_tree_content(e->tree);
	else
		leaf->tree = NULL;
	return 1;
}

static int update_branch(struct branch *b)
{
	static const char *msg = "fast-import";
	struct ref_lock *lock;
	unsigned char old_sha1[20];

	if (is_null_sha1(b->sha1))
		return 0;
	if (read_ref(b->name, old_sha1))
		hashclr(old_sha1);
	lock = lock_any_ref_for_update(b->name, old_sha1, 0, NULL);
	if (!lock)
		return error("Unable to lock %s", b->name);
	if (!force_update && !is_null_sha1(old_sha1)) {
		struct commit *old_cmit, *new_cmit;

		old_cmit = lookup_commit_reference_gently(old_sha1, 0);
		new_cmit = lookup_commit_reference_gently(b->sha1, 0);
		if (!old_cmit || !new_cmit) {
			unlock_ref(lock);
			return error("Branch %s is missing commits.", b->name);
		}

		if (!in_merge_bases(old_cmit, new_cmit)) {
			unlock_ref(lock);
			warning("Not updating %s"
				" (new tip %s does not contain %s)",
				b->name, sha1_to_hex(b->sha1), sha1_to_hex(old_sha1));
			return -1;
		}
	}
	if (write_ref_sha1(lock, b->sha1, msg) < 0)
		return error("Unable to update %s", b->name);
	return 0;
}

static void dump_branches(void)
{
	unsigned int i;
	struct branch *b;

	for (i = 0; i < branch_table_sz; i++) {
		for (b = branch_table[i]; b; b = b->table_next_branch)
			failure |= update_branch(b);
	}
}

static void dump_tags(void)
{
	static const char *msg = "fast-import";
	struct tag *t;
	struct ref_lock *lock;
	char ref_name[PATH_MAX];

	for (t = first_tag; t; t = t->next_tag) {
		sprintf(ref_name, "tags/%s", t->name);
		lock = lock_ref_sha1(ref_name, NULL);
		if (!lock || write_ref_sha1(lock, t->sha1, msg) < 0)
			failure |= error("Unable to update %s", ref_name);
	}
}

static void dump_marks_helper(FILE *f,
	uintmax_t base,
	struct mark_set *m)
{
	uintmax_t k;
	if (m->shift) {
		for (k = 0; k < 1024; k++) {
			if (m->data.sets[k])
				dump_marks_helper(f, base + (k << m->shift),
					m->data.sets[k]);
		}
	} else {
		for (k = 0; k < 1024; k++) {
			if (m->data.marked[k])
				fprintf(f, ":%" PRIuMAX " %s\n", base + k,
					sha1_to_hex(m->data.marked[k]->idx.sha1));
		}
	}
}

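/*
 * Illustration (added annotation, not part of the original source):
 * marks live in a radix tree with 1024-way fan-out, so each level of
 * the tree consumes 10 bits of the mark number.  With a one-level
 * tree (m->shift == 10 at the root), mark 1025 sits at
 * sets[1]->marked[1]; the recursion above reaches it with
 * base = (1 << 10) = 1024 and k = 1, printing ":1025 <sha1>".
 */
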
static void dump_marks(void)
{
	static struct lock_file mark_lock;
	int mark_fd;
	FILE *f;

	if (!export_marks_file)
		return;

	mark_fd = hold_lock_file_for_update(&mark_lock, export_marks_file, 0);
	if (mark_fd < 0) {
		failure |= error("Unable to write marks file %s: %s",
			export_marks_file, strerror(errno));
		return;
	}

	f = fdopen(mark_fd, "w");
	if (!f) {
		int saved_errno = errno;
		rollback_lock_file(&mark_lock);
		failure |= error("Unable to write marks file %s: %s",
			export_marks_file, strerror(saved_errno));
		return;
	}

	/*
	 * Since the lock file was fdopen()'ed, it should not be close()'ed.
	 * Assign -1 to the lock file descriptor so that commit_lock_file()
	 * won't try to close() it.
	 */
	mark_lock.fd = -1;

	dump_marks_helper(f, 0, marks);
	if (ferror(f) || fclose(f)) {
		int saved_errno = errno;
		rollback_lock_file(&mark_lock);
		failure |= error("Unable to write marks file %s: %s",
			export_marks_file, strerror(saved_errno));
		return;
	}

	if (commit_lock_file(&mark_lock)) {
		int saved_errno = errno;
		rollback_lock_file(&mark_lock);
		failure |= error("Unable to commit marks file %s: %s",
			export_marks_file, strerror(saved_errno));
		return;
	}
}

static void read_marks(void)
{
	char line[512];
	FILE *f = fopen(import_marks_file, "r");
	if (f)
		;
	else if (import_marks_file_ignore_missing && errno == ENOENT)
		return; /* Marks file does not exist */
	else
		die_errno("cannot read '%s'", import_marks_file);
	while (fgets(line, sizeof(line), f)) {
		uintmax_t mark;
		char *end;
		unsigned char sha1[20];
		struct object_entry *e;

		end = strchr(line, '\n');
		if (line[0] != ':' || !end)
			die("corrupt mark line: %s", line);
		*end = 0;
		mark = strtoumax(line + 1, &end, 10);
		if (!mark || end == line + 1
			|| *end != ' ' || get_sha1_hex(end + 1, sha1))
			die("corrupt mark line: %s", line);
		e = find_object(sha1);
		if (!e) {
			enum object_type type = sha1_object_info(sha1, NULL);
			if (type < 0)
				die("object not found: %s", sha1_to_hex(sha1));
			e = insert_object(sha1);
			e->type = type;
			e->pack_id = MAX_PACK_ID;
			e->idx.offset = 1; /* just not zero! */
		}
		insert_mark(mark, e);
	}
	fclose(f);
}

2007-09-17 13:19:04 +04:00
|
|
|
static int read_next_command(void)
|
2006-08-15 04:16:28 +04:00
|
|
|
{
|
2007-09-17 13:19:04 +04:00
|
|
|
static int stdin_eof = 0;
|
|
|
|
|
|
|
|
if (stdin_eof) {
|
|
|
|
unread_command_buf = 0;
|
|
|
|
return EOF;
|
|
|
|
}
|
|
|
|
|
2010-11-28 22:45:58 +03:00
|
|
|
for (;;) {
|
2007-08-03 12:47:04 +04:00
|
|
|
if (unread_command_buf) {
|
2007-08-01 10:22:53 +04:00
|
|
|
unread_command_buf = 0;
|
2007-08-03 12:47:04 +04:00
|
|
|
} else {
|
|
|
|
struct recent_command *rc;
|
|
|
|
|
2007-09-27 14:58:23 +04:00
|
|
|
strbuf_detach(&command_buf, NULL);
|
2007-09-17 13:19:04 +04:00
|
|
|
stdin_eof = strbuf_getline(&command_buf, stdin, '\n');
|
|
|
|
if (stdin_eof)
|
|
|
|
return EOF;
|
2007-08-03 12:47:04 +04:00
|
|
|
|
2009-12-04 20:06:56 +03:00
|
|
|
if (!seen_data_command
|
2009-12-04 20:06:57 +03:00
|
|
|
&& prefixcmp(command_buf.buf, "feature ")
|
|
|
|
&& prefixcmp(command_buf.buf, "option ")) {
|
|
|
|
parse_argv();
|
2009-12-04 20:06:56 +03:00
|
|
|
}
|
|
|
|
|
2007-08-03 12:47:04 +04:00
|
|
|
rc = rc_free;
|
|
|
|
if (rc)
|
|
|
|
rc_free = rc->next;
|
|
|
|
else {
|
|
|
|
rc = cmd_hist.next;
|
|
|
|
cmd_hist.next = rc->next;
|
|
|
|
cmd_hist.next->prev = &cmd_hist;
|
|
|
|
free(rc->buf);
|
|
|
|
}
|
|
|
|
|
|
|
|
rc->buf = command_buf.buf;
|
|
|
|
rc->prev = cmd_tail;
|
|
|
|
rc->next = cmd_hist.prev;
|
|
|
|
rc->prev->next = rc;
|
|
|
|
cmd_tail = rc;
|
|
|
|
}
|
2010-11-28 22:45:58 +03:00
|
|
|
if (!prefixcmp(command_buf.buf, "cat-blob ")) {
|
|
|
|
parse_cat_blob();
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
if (command_buf.buf[0] == '#')
|
|
|
|
continue;
|
|
|
|
return 0;
|
|
|
|
}
|
2006-08-15 04:16:28 +04:00
|
|
|
}
|
|
|
|
|
2007-08-19 13:50:18 +04:00
|
|
|
static void skip_optional_lf(void)
|
2007-08-01 08:24:25 +04:00
|
|
|
{
|
|
|
|
int term_char = fgetc(stdin);
|
|
|
|
if (term_char != '\n' && term_char != EOF)
|
|
|
|
ungetc(term_char, stdin);
|
|
|
|
}
|
|
|
|
|
2008-05-16 02:35:56 +04:00
|
|
|
static void parse_mark(void)
|
2006-08-15 04:16:28 +04:00
|
|
|
{
|
2007-02-20 12:54:00 +03:00
|
|
|
if (!prefixcmp(command_buf.buf, "mark :")) {
|
2007-01-16 08:33:19 +03:00
|
|
|
next_mark = strtoumax(command_buf.buf + 6, NULL, 10);
|
2006-08-15 04:16:28 +04:00
|
|
|
read_next_command();
|
|
|
|
}
|
|
|
|
else
|
2006-08-23 12:17:45 +04:00
|
|
|
next_mark = 0;
|
2006-08-15 04:16:28 +04:00
|
|
|
}
|
|
|
|
|
2010-02-01 20:27:35 +03:00
|
|
|
static int parse_data(struct strbuf *sb, uintmax_t limit, uintmax_t *len_res)
|
2006-08-15 04:16:28 +04:00
|
|
|
{
|
2007-09-17 15:48:17 +04:00
|
|
|
strbuf_reset(sb);
|
2006-08-15 04:16:28 +04:00
|
|
|
|
2007-02-20 12:54:00 +03:00
|
|
|
if (prefixcmp(command_buf.buf, "data "))
|
2006-08-15 04:16:28 +04:00
|
|
|
die("Expected 'data n' command, found: %s", command_buf.buf);
|
|
|
|
|
2007-02-20 12:54:00 +03:00
|
|
|
if (!prefixcmp(command_buf.buf + 5, "<<")) {
|
2007-01-18 21:14:27 +03:00
|
|
|
char *term = xstrdup(command_buf.buf + 5 + 2);
|
2007-09-06 15:20:07 +04:00
|
|
|
size_t term_len = command_buf.len - 5 - 2;
|
|
|
|
|
2007-10-26 11:59:12 +04:00
|
|
|
strbuf_detach(&command_buf, NULL);
|
2007-01-18 21:14:27 +03:00
|
|
|
for (;;) {
|
2007-09-17 13:19:04 +04:00
|
|
|
if (strbuf_getline(&command_buf, stdin, '\n') == EOF)
|
2007-01-18 21:14:27 +03:00
|
|
|
die("EOF in data (terminator '%s' not found)", term);
|
|
|
|
if (term_len == command_buf.len
|
|
|
|
&& !strcmp(term, command_buf.buf))
|
|
|
|
break;
|
2007-09-17 15:48:17 +04:00
|
|
|
strbuf_addbuf(sb, &command_buf);
|
|
|
|
strbuf_addch(sb, '\n');
|
2007-01-18 21:14:27 +03:00
|
|
|
}
|
|
|
|
free(term);
|
|
|
|
}
|
|
|
|
else {
|
2010-02-01 20:27:35 +03:00
|
|
|
uintmax_t len = strtoumax(command_buf.buf + 5, NULL, 10);
|
|
|
|
size_t n = 0, length = (size_t)len;
|
2007-09-06 15:20:07 +04:00
|
|
|
|
2010-02-01 20:27:35 +03:00
|
|
|
if (limit && limit < len) {
|
|
|
|
*len_res = len;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
if (length < len)
|
|
|
|
die("data is too large to use in this context");
|
2007-09-06 15:20:07 +04:00
|
|
|
|
2007-01-18 21:14:27 +03:00
|
|
|
while (n < length) {
|
2007-09-17 15:48:17 +04:00
|
|
|
size_t s = strbuf_fread(sb, length - n, stdin);
|
2007-01-18 21:14:27 +03:00
|
|
|
if (!s && feof(stdin))
|
2007-02-07 14:38:21 +03:00
|
|
|
die("EOF in data (%lu bytes remaining)",
|
|
|
|
(unsigned long)(length - n));
|
2007-01-18 21:14:27 +03:00
|
|
|
n += s;
|
|
|
|
}
|
2006-08-15 04:16:28 +04:00
|
|
|
}
|
|
|
|
|
2007-08-01 08:24:25 +04:00
|
|
|
skip_optional_lf();
|
2010-02-01 20:27:35 +03:00
|
|
|
return 1;
|
2006-08-15 04:16:28 +04:00
|
|
|
}
|
|
|
|
|
2007-02-06 22:58:30 +03:00
|
|
|
static int validate_raw_date(const char *src, char *result, int maxlen)
|
|
|
|
{
|
|
|
|
const char *orig_src = src;
|
2009-03-07 23:02:10 +03:00
|
|
|
char *endp;
|
2009-09-29 10:40:09 +04:00
|
|
|
unsigned long num;
|
2007-02-06 22:58:30 +03:00
|
|
|
|
2008-12-21 04:28:48 +03:00
|
|
|
errno = 0;
|
|
|
|
|
2009-09-29 10:40:09 +04:00
|
|
|
num = strtoul(src, &endp, 10);
|
|
|
|
/* NEEDSWORK: perhaps check for reasonable values? */
|
2008-12-21 04:28:48 +03:00
|
|
|
if (errno || endp == src || *endp != ' ')
|
2007-02-06 22:58:30 +03:00
|
|
|
return -1;
|
|
|
|
|
|
|
|
src = endp + 1;
|
|
|
|
if (*src != '-' && *src != '+')
|
|
|
|
return -1;
|
|
|
|
|
2009-09-29 10:40:09 +04:00
|
|
|
num = strtoul(src + 1, &endp, 10);
|
|
|
|
if (errno || endp == src + 1 || *endp || (endp - orig_src) >= maxlen ||
|
|
|
|
1400 < num)
|
2007-02-06 22:58:30 +03:00
|
|
|
return -1;
|
|
|
|
|
|
|
|
strcpy(result, orig_src);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static char *parse_ident(const char *buf)
|
|
|
|
{
|
2011-08-11 14:21:08 +04:00
|
|
|
const char *ltgt;
|
2007-02-06 22:58:30 +03:00
|
|
|
size_t name_len;
|
|
|
|
char *ident;
|
|
|
|
|
2011-08-11 14:21:07 +04:00
|
|
|
/* ensure there is a space delimiter even if there is no name */
|
|
|
|
if (*buf == '<')
|
|
|
|
--buf;
|
|
|
|
|
2011-08-11 14:21:08 +04:00
|
|
|
ltgt = buf + strcspn(buf, "<>");
|
|
|
|
if (*ltgt != '<')
|
|
|
|
die("Missing < in ident string: %s", buf);
|
|
|
|
if (ltgt != buf && ltgt[-1] != ' ')
|
|
|
|
die("Missing space before < in ident string: %s", buf);
|
|
|
|
ltgt = ltgt + 1 + strcspn(ltgt + 1, "<>");
|
|
|
|
if (*ltgt != '>')
|
2007-02-06 22:58:30 +03:00
|
|
|
die("Missing > in ident string: %s", buf);
|
2011-08-11 14:21:08 +04:00
|
|
|
ltgt++;
|
|
|
|
if (*ltgt != ' ')
|
2007-02-06 22:58:30 +03:00
|
|
|
die("Missing space after > in ident string: %s", buf);
|
2011-08-11 14:21:08 +04:00
|
|
|
ltgt++;
|
|
|
|
name_len = ltgt - buf;
|
2007-02-06 22:58:30 +03:00
|
|
|
ident = xmalloc(name_len + 24);
|
|
|
|
strncpy(ident, buf, name_len);
|
|
|
|
|
|
|
|
switch (whenspec) {
|
|
|
|
case WHENSPEC_RAW:
|
2011-08-11 14:21:08 +04:00
|
|
|
if (validate_raw_date(ltgt, ident + name_len, 24) < 0)
|
|
|
|
die("Invalid raw date \"%s\" in ident: %s", ltgt, buf);
|
2007-02-06 22:58:30 +03:00
|
|
|
break;
|
|
|
|
case WHENSPEC_RFC2822:
|
2011-08-11 14:21:08 +04:00
|
|
|
if (parse_date(ltgt, ident + name_len, 24) < 0)
|
|
|
|
die("Invalid rfc2822 date \"%s\" in ident: %s", ltgt, buf);
|
2007-02-06 22:58:30 +03:00
|
|
|
break;
|
|
|
|
case WHENSPEC_NOW:
|
2011-08-11 14:21:08 +04:00
|
|
|
if (strcmp("now", ltgt))
|
2007-02-06 22:58:30 +03:00
|
|
|
die("Date in ident must be 'now': %s", buf);
|
|
|
|
datestamp(ident + name_len, 24);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return ident;
|
|
|
|
}
|
|
|
|
|
2010-02-01 20:27:35 +03:00
|
|
|
static void parse_and_store_blob(
|
|
|
|
struct last_object *last,
|
|
|
|
unsigned char *sha1out,
|
|
|
|
uintmax_t mark)
|
2006-08-08 09:14:21 +04:00
|
|
|
{
|
2007-09-17 16:00:38 +04:00
|
|
|
static struct strbuf buf = STRBUF_INIT;
|
2010-02-01 20:27:35 +03:00
|
|
|
uintmax_t len;
|
2006-08-15 04:16:28 +04:00
|
|
|
|
2010-02-01 20:27:35 +03:00
|
|
|
if (parse_data(&buf, big_file_threshold, &len))
|
|
|
|
store_object(OBJ_BLOB, &buf, last, sha1out, mark);
|
|
|
|
else {
|
|
|
|
if (last) {
|
|
|
|
strbuf_release(&last->data);
|
|
|
|
last->offset = 0;
|
|
|
|
last->depth = 0;
|
|
|
|
}
|
|
|
|
stream_blob(len, sha1out, mark);
|
|
|
|
skip_optional_lf();
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void parse_new_blob(void)
|
|
|
|
{
|
2006-08-15 04:16:28 +04:00
|
|
|
read_next_command();
|
2008-05-16 02:35:56 +04:00
|
|
|
parse_mark();
|
2010-02-01 20:27:35 +03:00
|
|
|
parse_and_store_blob(&last_blob, NULL, next_mark);
|
2006-08-08 09:14:21 +04:00
|
|
|
}
|
|
|
|
|
2007-01-17 09:47:25 +03:00
|
|
|
static void unload_one_branch(void)
|
2006-08-08 11:36:45 +04:00
|
|
|
{
|
2006-08-24 12:37:35 +04:00
|
|
|
while (cur_active_branches
|
|
|
|
&& cur_active_branches >= max_active_branches) {
|
2007-03-07 04:44:34 +03:00
|
|
|
uintmax_t min_commit = ULONG_MAX;
|
2006-08-14 08:58:19 +04:00
|
|
|
struct branch *e, *l = NULL, *p = NULL;
|
|
|
|
|
|
|
|
for (e = active_branches; e; e = e->active_next_branch) {
|
|
|
|
if (e->last_commit < min_commit) {
|
|
|
|
p = l;
|
|
|
|
min_commit = e->last_commit;
|
|
|
|
}
|
|
|
|
l = e;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (p) {
|
|
|
|
e = p->active_next_branch;
|
|
|
|
p->active_next_branch = e->active_next_branch;
|
|
|
|
} else {
|
|
|
|
e = active_branches;
|
|
|
|
active_branches = e->active_next_branch;
|
|
|
|
}
|
2007-03-05 20:31:09 +03:00
|
|
|
e->active = 0;
|
2006-08-14 08:58:19 +04:00
|
|
|
e->active_next_branch = NULL;
|
|
|
|
if (e->branch_tree.tree) {
|
2006-08-23 09:33:47 +04:00
|
|
|
release_tree_content_recursive(e->branch_tree.tree);
|
2006-08-14 08:58:19 +04:00
|
|
|
e->branch_tree.tree = NULL;
|
|
|
|
}
|
|
|
|
cur_active_branches--;
|
2006-08-08 11:36:45 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2006-08-14 08:58:19 +04:00
|
|
|
static void load_branch(struct branch *b)
|
2006-08-08 11:36:45 +04:00
|
|
|
{
|
2006-08-14 08:58:19 +04:00
|
|
|
load_tree(&b->branch_tree);
|
2007-03-05 20:31:09 +03:00
|
|
|
if (!b->active) {
|
|
|
|
b->active = 1;
|
|
|
|
b->active_next_branch = active_branches;
|
|
|
|
active_branches = b;
|
|
|
|
cur_active_branches++;
|
|
|
|
branch_load_count++;
|
|
|
|
}
|
2006-08-08 11:36:45 +04:00
|
|
|
}
|
|
|
|
|
2009-12-07 14:27:24 +03:00
|
|
|
static unsigned char convert_num_notes_to_fanout(uintmax_t num_notes)
|
|
|
|
{
|
|
|
|
unsigned char fanout = 0;
|
|
|
|
while ((num_notes >>= 8))
|
|
|
|
fanout++;
|
|
|
|
return fanout;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void construct_path_with_fanout(const char *hex_sha1,
|
|
|
|
unsigned char fanout, char *path)
|
|
|
|
{
|
|
|
|
unsigned int i = 0, j = 0;
|
|
|
|
if (fanout >= 20)
|
|
|
|
die("Too large fanout (%u)", fanout);
|
|
|
|
while (fanout) {
|
|
|
|
path[i++] = hex_sha1[j++];
|
|
|
|
path[i++] = hex_sha1[j++];
|
|
|
|
path[i++] = '/';
|
|
|
|
fanout--;
|
|
|
|
}
|
|
|
|
memcpy(path + i, hex_sha1 + j, 40 - j);
|
|
|
|
path[i + 40 - j] = '\0';
|
|
|
|
}
|
|
|
|
|
|
|
|
static uintmax_t do_change_note_fanout(
|
|
|
|
struct tree_entry *orig_root, struct tree_entry *root,
|
|
|
|
char *hex_sha1, unsigned int hex_sha1_len,
|
|
|
|
char *fullpath, unsigned int fullpath_len,
|
|
|
|
unsigned char fanout)
|
|
|
|
{
|
|
|
|
struct tree_content *t = root->tree;
|
|
|
|
struct tree_entry *e, leaf;
|
|
|
|
unsigned int i, tmp_hex_sha1_len, tmp_fullpath_len;
|
|
|
|
uintmax_t num_notes = 0;
|
|
|
|
unsigned char sha1[20];
|
|
|
|
char realpath[60];
|
|
|
|
|
|
|
|
for (i = 0; t && i < t->entry_count; i++) {
|
|
|
|
e = t->entries[i];
|
|
|
|
tmp_hex_sha1_len = hex_sha1_len + e->name->str_len;
|
|
|
|
tmp_fullpath_len = fullpath_len;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We're interested in EITHER existing note entries (entries
|
|
|
|
* with exactly 40 hex chars in path, not including directory
|
|
|
|
* separators), OR directory entries that may contain note
|
|
|
|
* entries (with < 40 hex chars in path).
|
|
|
|
* Also, each path component in a note entry must be a multiple
|
|
|
|
* of 2 chars.
|
|
|
|
*/
|
|
|
|
if (!e->versions[1].mode ||
|
|
|
|
tmp_hex_sha1_len > 40 ||
|
|
|
|
e->name->str_len % 2)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/* This _may_ be a note entry, or a subdir containing notes */
|
|
|
|
memcpy(hex_sha1 + hex_sha1_len, e->name->str_dat,
|
|
|
|
e->name->str_len);
|
|
|
|
if (tmp_fullpath_len)
|
|
|
|
fullpath[tmp_fullpath_len++] = '/';
|
|
|
|
memcpy(fullpath + tmp_fullpath_len, e->name->str_dat,
|
|
|
|
e->name->str_len);
|
|
|
|
tmp_fullpath_len += e->name->str_len;
|
|
|
|
fullpath[tmp_fullpath_len] = '\0';
|
|
|
|
|
|
|
|
if (tmp_hex_sha1_len == 40 && !get_sha1_hex(hex_sha1, sha1)) {
|
|
|
|
/* This is a note entry */
|
2011-11-25 04:09:47 +04:00
|
|
|
if (fanout == 0xff) {
|
|
|
|
/* Counting mode, no rename */
|
|
|
|
num_notes++;
|
|
|
|
continue;
|
|
|
|
}
|
2009-12-07 14:27:24 +03:00
|
|
|
construct_path_with_fanout(hex_sha1, fanout, realpath);
|
|
|
|
if (!strcmp(fullpath, realpath)) {
|
|
|
|
/* Note entry is in correct location */
|
|
|
|
num_notes++;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Rename fullpath to realpath */
|
2013-06-23 18:58:22 +04:00
|
|
|
if (!tree_content_remove(orig_root, fullpath, &leaf, 0))
|
2009-12-07 14:27:24 +03:00
|
|
|
die("Failed to remove path %s", fullpath);
|
|
|
|
tree_content_set(orig_root, realpath,
|
|
|
|
leaf.versions[1].sha1,
|
|
|
|
leaf.versions[1].mode,
|
|
|
|
leaf.tree);
|
|
|
|
} else if (S_ISDIR(e->versions[1].mode)) {
|
|
|
|
/* This is a subdir that may contain note entries */
|
|
|
|
if (!e->tree)
|
|
|
|
load_tree(e);
|
|
|
|
num_notes += do_change_note_fanout(orig_root, e,
|
|
|
|
hex_sha1, tmp_hex_sha1_len,
|
|
|
|
fullpath, tmp_fullpath_len, fanout);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* The above may have reallocated the current tree_content */
|
|
|
|
t = root->tree;
|
|
|
|
}
|
|
|
|
return num_notes;
|
|
|
|
}
|
|
|
|
|
|
|
|
static uintmax_t change_note_fanout(struct tree_entry *root,
|
|
|
|
unsigned char fanout)
|
|
|
|
{
|
|
|
|
char hex_sha1[40], path[60];
|
|
|
|
return do_change_note_fanout(root, root, hex_sha1, 0, path, 0, fanout);
|
|
|
|
}
|
|
|
|
|
2012-04-08 02:59:20 +04:00
|
|
|
/*
|
|
|
|
* Given a pointer into a string, parse a mark reference:
|
|
|
|
*
|
|
|
|
* idnum ::= ':' bigint;
|
|
|
|
*
|
|
|
|
* Return the first character after the value in *endptr.
|
|
|
|
*
|
|
|
|
* Complain if the following character is not what is expected,
|
|
|
|
* either a space or end of the string.
|
|
|
|
*/
|
|
|
|
static uintmax_t parse_mark_ref(const char *p, char **endptr)
|
|
|
|
{
|
|
|
|
uintmax_t mark;
|
|
|
|
|
|
|
|
assert(*p == ':');
|
|
|
|
p++;
|
|
|
|
mark = strtoumax(p, endptr, 10);
|
|
|
|
if (*endptr == p)
|
|
|
|
die("No value after ':' in mark: %s", command_buf.buf);
|
|
|
|
return mark;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Parse the mark reference, and complain if this is not the end of
|
|
|
|
* the string.
|
|
|
|
*/
|
|
|
|
static uintmax_t parse_mark_ref_eol(const char *p)
|
|
|
|
{
|
|
|
|
char *end;
|
|
|
|
uintmax_t mark;
|
|
|
|
|
|
|
|
mark = parse_mark_ref(p, &end);
|
|
|
|
if (*end != '\0')
|
|
|
|
die("Garbage after mark: %s", command_buf.buf);
|
|
|
|
return mark;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Parse the mark reference, demanding a trailing space. Return a
|
|
|
|
* pointer to the space.
|
|
|
|
*/
|
|
|
|
static uintmax_t parse_mark_ref_space(const char **p)
|
|
|
|
{
|
|
|
|
uintmax_t mark;
|
|
|
|
char *end;
|
|
|
|
|
|
|
|
mark = parse_mark_ref(*p, &end);
|
|
|
|
if (*end != ' ')
|
|
|
|
die("Missing space after mark: %s", command_buf.buf);
|
|
|
|
*p = end;
|
|
|
|
return mark;
|
|
|
|
}
|
|
|
|
|
2006-08-14 08:58:19 +04:00
|
|
|
static void file_change_m(struct branch *b)
|
2006-08-08 11:36:45 +04:00
|
|
|
{
|
2006-08-15 04:16:28 +04:00
|
|
|
const char *p = command_buf.buf + 2;
|
2007-09-20 02:42:14 +04:00
|
|
|
static struct strbuf uq = STRBUF_INIT;
|
2006-08-15 04:16:28 +04:00
|
|
|
const char *endp;
|
2013-03-21 19:44:39 +04:00
|
|
|
struct object_entry *oe;
|
2006-08-14 08:58:19 +04:00
|
|
|
unsigned char sha1[20];
|
2007-02-06 00:34:56 +03:00
|
|
|
uint16_t mode, inline_data = 0;
|
2006-08-08 11:36:45 +04:00
|
|
|
|
2006-08-15 04:16:28 +04:00
|
|
|
p = get_mode(p, &mode);
|
|
|
|
if (!p)
|
|
|
|
die("Corrupt mode: %s", command_buf.buf);
|
|
|
|
switch (mode) {
|
2009-01-14 04:37:07 +03:00
|
|
|
case 0644:
|
|
|
|
case 0755:
|
|
|
|
mode |= S_IFREG;
|
2006-08-15 04:16:28 +04:00
|
|
|
case S_IFREG | 0644:
|
|
|
|
case S_IFREG | 0755:
|
2006-08-21 11:29:13 +04:00
|
|
|
case S_IFLNK:
|
2010-07-01 07:18:19 +04:00
|
|
|
case S_IFDIR:
|
2008-07-19 16:21:24 +04:00
|
|
|
case S_IFGITLINK:
|
2006-08-15 04:16:28 +04:00
|
|
|
/* ok */
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
die("Corrupt mode: %s", command_buf.buf);
|
|
|
|
}
|
|
|
|
|
2006-08-23 12:17:45 +04:00
|
|
|
if (*p == ':') {
|
2012-04-08 02:59:20 +04:00
|
|
|
oe = find_mark(parse_mark_ref_space(&p));
|
2010-02-17 22:05:51 +03:00
|
|
|
hashcpy(sha1, oe->idx.sha1);
|
2012-04-08 02:59:20 +04:00
|
|
|
} else if (!prefixcmp(p, "inline ")) {
|
2007-01-18 23:17:58 +03:00
|
|
|
inline_data = 1;
|
2013-03-21 19:44:39 +04:00
|
|
|
oe = NULL; /* not used with inline_data, but makes gcc happy */
|
2012-04-08 02:59:20 +04:00
|
|
|
p += strlen("inline"); /* advance to space */
|
2006-08-23 12:17:45 +04:00
|
|
|
} else {
|
|
|
|
if (get_sha1_hex(p, sha1))
|
2012-04-08 02:59:20 +04:00
|
|
|
die("Invalid dataref: %s", command_buf.buf);
|
2006-08-23 12:17:45 +04:00
|
|
|
oe = find_object(sha1);
|
|
|
|
p += 40;
|
2012-04-08 02:59:20 +04:00
|
|
|
if (*p != ' ')
|
|
|
|
die("Missing space after SHA1: %s", command_buf.buf);
|
2006-08-23 12:17:45 +04:00
|
|
|
}
|
2012-04-08 02:59:20 +04:00
|
|
|
assert(*p == ' ');
|
|
|
|
p++; /* skip space */
|
2006-08-15 04:16:28 +04:00
|
|
|
|
2007-09-20 02:42:14 +04:00
|
|
|
strbuf_reset(&uq);
|
|
|
|
if (!unquote_c_style(&uq, p, &endp)) {
|
2006-08-15 04:16:28 +04:00
|
|
|
if (*endp)
|
|
|
|
die("Garbage after path in: %s", command_buf.buf);
|
2007-09-20 02:42:14 +04:00
|
|
|
p = uq.buf;
|
2006-08-15 04:16:28 +04:00
|
|
|
}
|
2006-08-08 11:36:45 +04:00
|
|
|
|
2011-01-27 09:07:49 +03:00
|
|
|
/* Git does not track empty, non-toplevel directories. */
|
|
|
|
if (S_ISDIR(mode) && !memcmp(sha1, EMPTY_TREE_SHA1_BIN, 20) && *p) {
|
2013-06-23 18:58:22 +04:00
|
|
|
tree_content_remove(&b->branch_tree, p, NULL, 0);
|
2011-01-27 09:07:49 +03:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2008-07-19 16:21:24 +04:00
|
|
|
if (S_ISGITLINK(mode)) {
|
|
|
|
if (inline_data)
|
|
|
|
die("Git links cannot be specified 'inline': %s",
|
|
|
|
command_buf.buf);
|
|
|
|
else if (oe) {
|
|
|
|
if (oe->type != OBJ_COMMIT)
|
|
|
|
die("Not a commit (actually a %s): %s",
|
|
|
|
typename(oe->type), command_buf.buf);
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
* Accept the sha1 without checking; it is expected to be in
|
|
|
|
* another repository.
|
|
|
|
*/
|
|
|
|
} else if (inline_data) {
|
2010-07-01 07:18:19 +04:00
|
|
|
if (S_ISDIR(mode))
|
|
|
|
die("Directories cannot be specified 'inline': %s",
|
|
|
|
command_buf.buf);
|
2007-09-20 02:42:14 +04:00
|
|
|
if (p != uq.buf) {
|
|
|
|
strbuf_addstr(&uq, p);
|
|
|
|
p = uq.buf;
|
|
|
|
}
|
2007-01-18 23:17:58 +03:00
|
|
|
read_next_command();
|
2010-02-01 20:27:35 +03:00
|
|
|
parse_and_store_blob(&last_blob, sha1, 0);
|
2006-08-14 10:50:18 +04:00
|
|
|
} else {
|
2010-07-01 07:18:19 +04:00
|
|
|
enum object_type expected = S_ISDIR(mode) ?
|
|
|
|
OBJ_TREE: OBJ_BLOB;
|
|
|
|
enum object_type type = oe ? oe->type :
|
|
|
|
sha1_object_info(sha1, NULL);
|
2007-02-26 22:55:59 +03:00
|
|
|
if (type < 0)
|
2010-07-01 07:18:19 +04:00
|
|
|
die("%s not found: %s",
|
|
|
|
S_ISDIR(mode) ? "Tree" : "Blob",
|
|
|
|
command_buf.buf);
|
|
|
|
if (type != expected)
|
|
|
|
die("Not a %s (actually a %s): %s",
|
|
|
|
typename(expected), typename(type),
|
|
|
|
command_buf.buf);
|
2006-08-14 10:50:18 +04:00
|
|
|
}
|
2006-08-08 11:36:45 +04:00
|
|
|
|
fast-import: tighten M 040000 syntax
When tree_content_set() is asked to modify the path "foo/bar/",
it first recurses like so:
tree_content_set(root, "foo/bar/", sha1, S_IFDIR) ->
tree_content_set(root:foo, "bar/", ...) ->
tree_content_set(root:foo/bar, "", ...)
And as a side-effect of 2794ad5 (fast-import: Allow filemodify to set
the root, 2010-10-10), this last call is accepted and changes
the tree entry for root:foo/bar to refer to the specified tree.
That seems safe enough but let's reject the new syntax (we never meant
to support it) and make it harder for frontends to introduce pointless
incompatibilities with git fast-import 1.7.3.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-10-18 05:08:53 +04:00
|
|
|
if (!*p) {
|
|
|
|
tree_content_replace(&b->branch_tree, sha1, mode, NULL);
|
|
|
|
return;
|
|
|
|
}
|
2009-01-14 04:37:07 +03:00
|
|
|
tree_content_set(&b->branch_tree, p, sha1, mode, NULL);
|
2006-08-14 08:58:19 +04:00
|
|
|
}
|
2006-08-08 11:36:45 +04:00
|
|
|
|
2006-08-14 08:58:19 +04:00
|
|
|
static void file_change_d(struct branch *b)
|
|
|
|
{
|
2006-08-15 04:16:28 +04:00
|
|
|
const char *p = command_buf.buf + 2;
|
2007-09-20 02:42:14 +04:00
|
|
|
static struct strbuf uq = STRBUF_INIT;
|
2006-08-15 04:16:28 +04:00
|
|
|
const char *endp;
|
|
|
|
|
2007-09-20 02:42:14 +04:00
|
|
|
strbuf_reset(&uq);
|
|
|
|
if (!unquote_c_style(&uq, p, &endp)) {
|
2006-08-15 04:16:28 +04:00
|
|
|
if (*endp)
|
|
|
|
die("Garbage after path in: %s", command_buf.buf);
|
2007-09-20 02:42:14 +04:00
|
|
|
p = uq.buf;
|
2006-08-15 04:16:28 +04:00
|
|
|
}
|
2013-06-23 18:58:22 +04:00
|
|
|
tree_content_remove(&b->branch_tree, p, NULL, 1);
|
2006-08-08 11:36:45 +04:00
|
|
|
}
|
|
|
|
|
2007-07-15 09:40:37 +04:00
|
|
|
static void file_change_cr(struct branch *b, int rename)
|
2007-07-10 06:58:23 +04:00
|
|
|
{
|
|
|
|
const char *s, *d;
|
2007-09-20 02:42:14 +04:00
|
|
|
static struct strbuf s_uq = STRBUF_INIT;
|
|
|
|
static struct strbuf d_uq = STRBUF_INIT;
|
2007-07-10 06:58:23 +04:00
|
|
|
const char *endp;
|
|
|
|
struct tree_entry leaf;
|
|
|
|
|
|
|
|
s = command_buf.buf + 2;
|
2007-09-20 02:42:14 +04:00
|
|
|
strbuf_reset(&s_uq);
|
|
|
|
if (!unquote_c_style(&s_uq, s, &endp)) {
|
2007-07-10 06:58:23 +04:00
|
|
|
if (*endp != ' ')
|
|
|
|
die("Missing space after source: %s", command_buf.buf);
|
2007-09-20 02:42:14 +04:00
|
|
|
} else {
|
2007-07-10 06:58:23 +04:00
|
|
|
endp = strchr(s, ' ');
|
|
|
|
if (!endp)
|
|
|
|
die("Missing space after source: %s", command_buf.buf);
|
2007-09-20 02:42:14 +04:00
|
|
|
strbuf_add(&s_uq, s, endp - s);
|
2007-07-10 06:58:23 +04:00
|
|
|
}
|
2007-09-20 02:42:14 +04:00
|
|
|
s = s_uq.buf;
|
2007-07-10 06:58:23 +04:00
|
|
|
|
|
|
|
endp++;
|
|
|
|
if (!*endp)
|
|
|
|
die("Missing dest: %s", command_buf.buf);
|
|
|
|
|
|
|
|
d = endp;
|
2007-09-20 02:42:14 +04:00
|
|
|
strbuf_reset(&d_uq);
|
|
|
|
if (!unquote_c_style(&d_uq, d, &endp)) {
|
2007-07-10 06:58:23 +04:00
|
|
|
if (*endp)
|
|
|
|
die("Garbage after dest in: %s", command_buf.buf);
|
2007-09-20 02:42:14 +04:00
|
|
|
d = d_uq.buf;
|
2007-07-10 06:58:23 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
memset(&leaf, 0, sizeof(leaf));
|
2007-07-15 09:40:37 +04:00
|
|
|
if (rename)
|
2013-06-23 18:58:22 +04:00
|
|
|
tree_content_remove(&b->branch_tree, s, &leaf, 1);
|
2007-07-15 09:40:37 +04:00
|
|
|
else
|
2013-06-23 18:58:21 +04:00
|
|
|
tree_content_get(&b->branch_tree, s, &leaf, 1);
|
2007-07-10 06:58:23 +04:00
|
|
|
if (!leaf.versions[1].mode)
|
|
|
|
die("Path %s not in branch", s);
|
2010-10-18 05:08:53 +04:00
|
|
|
if (!*d) { /* C "path/to/subdir" "" */
|
|
|
|
tree_content_replace(&b->branch_tree,
|
|
|
|
leaf.versions[1].sha1,
|
|
|
|
leaf.versions[1].mode,
|
|
|
|
leaf.tree);
|
|
|
|
return;
|
|
|
|
}
|
2007-07-10 06:58:23 +04:00
|
|
|
tree_content_set(&b->branch_tree, d,
|
|
|
|
leaf.versions[1].sha1,
|
|
|
|
leaf.versions[1].mode,
|
|
|
|
leaf.tree);
|
|
|
|
}
|
|
|
|
|
2011-11-25 04:09:47 +04:00
|
|
|
static void note_change_n(struct branch *b, unsigned char *old_fanout)
|
2009-10-09 14:22:02 +04:00
|
|
|
{
|
|
|
|
const char *p = command_buf.buf + 2;
|
|
|
|
static struct strbuf uq = STRBUF_INIT;
|
2013-03-21 15:10:28 +04:00
|
|
|
struct object_entry *oe;
|
2009-10-09 14:22:02 +04:00
|
|
|
struct branch *s;
|
|
|
|
unsigned char sha1[20], commit_sha1[20];
|
2009-12-07 14:27:24 +03:00
|
|
|
char path[60];
|
2009-10-09 14:22:02 +04:00
|
|
|
uint16_t inline_data = 0;
|
2009-12-07 14:27:24 +03:00
|
|
|
unsigned char new_fanout;
|
2009-10-09 14:22:02 +04:00
|
|
|
|
2011-11-25 04:09:47 +04:00
|
|
|
/*
|
|
|
|
* When loading a branch, we don't traverse its tree to count the real
|
|
|
|
* number of notes (too expensive to do this for all non-note refs).
|
|
|
|
* This means that recently loaded notes refs might incorrectly have
|
|
|
|
* b->num_notes == 0, and consequently, old_fanout might be wrong.
|
|
|
|
*
|
|
|
|
* Fix this by traversing the tree and counting the number of notes
|
|
|
|
* when b->num_notes == 0. If the notes tree is truly empty, the
|
|
|
|
* calculation should not take long.
|
|
|
|
*/
|
|
|
|
if (b->num_notes == 0 && *old_fanout == 0) {
|
|
|
|
/* Invoke change_note_fanout() in "counting mode". */
|
|
|
|
b->num_notes = change_note_fanout(&b->branch_tree, 0xff);
|
|
|
|
*old_fanout = convert_num_notes_to_fanout(b->num_notes);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Now parse the notemodify command. */
|
2009-10-09 14:22:02 +04:00
|
|
|
/* <dataref> or 'inline' */
|
|
|
|
if (*p == ':') {
|
2012-04-08 02:59:20 +04:00
|
|
|
oe = find_mark(parse_mark_ref_space(&p));
|
2010-02-17 22:05:51 +03:00
|
|
|
hashcpy(sha1, oe->idx.sha1);
|
2012-04-08 02:59:20 +04:00
|
|
|
} else if (!prefixcmp(p, "inline ")) {
|
2009-10-09 14:22:02 +04:00
|
|
|
inline_data = 1;
|
2013-03-26 23:09:44 +04:00
|
|
|
oe = NULL; /* not used with inline_data, but makes gcc happy */
|
2012-04-08 02:59:20 +04:00
|
|
|
p += strlen("inline"); /* advance to space */
|
2009-10-09 14:22:02 +04:00
|
|
|
} else {
|
|
|
|
if (get_sha1_hex(p, sha1))
|
2012-04-08 02:59:20 +04:00
|
|
|
die("Invalid dataref: %s", command_buf.buf);
|
2009-10-09 14:22:02 +04:00
|
|
|
oe = find_object(sha1);
|
|
|
|
p += 40;
|
2012-04-08 02:59:20 +04:00
|
|
|
if (*p != ' ')
|
|
|
|
die("Missing space after SHA1: %s", command_buf.buf);
|
2009-10-09 14:22:02 +04:00
|
|
|
}
|
2012-04-08 02:59:20 +04:00
|
|
|
assert(*p == ' ');
|
|
|
|
p++; /* skip space */
|
2009-10-09 14:22:02 +04:00
|
|
|
|
2013-09-04 23:04:31 +04:00
|
|
|
/* <commit-ish> */
|
2009-10-09 14:22:02 +04:00
|
|
|
s = lookup_branch(p);
|
|
|
|
if (s) {
|
2011-09-22 23:47:05 +04:00
|
|
|
if (is_null_sha1(s->sha1))
|
|
|
|
die("Can't add a note on empty branch.");
|
2009-10-09 14:22:02 +04:00
|
|
|
hashcpy(commit_sha1, s->sha1);
|
|
|
|
} else if (*p == ':') {
|
2012-04-08 02:59:20 +04:00
|
|
|
uintmax_t commit_mark = parse_mark_ref_eol(p);
|
2009-10-09 14:22:02 +04:00
|
|
|
struct object_entry *commit_oe = find_mark(commit_mark);
|
|
|
|
if (commit_oe->type != OBJ_COMMIT)
|
|
|
|
die("Mark :%" PRIuMAX " not a commit", commit_mark);
|
2010-02-17 22:05:51 +03:00
|
|
|
hashcpy(commit_sha1, commit_oe->idx.sha1);
|
2009-10-09 14:22:02 +04:00
|
|
|
} else if (!get_sha1(p, commit_sha1)) {
|
|
|
|
unsigned long size;
|
|
|
|
char *buf = read_object_with_reference(commit_sha1,
|
|
|
|
commit_type, &size, commit_sha1);
|
|
|
|
if (!buf || size < 46)
|
|
|
|
die("Not a valid commit: %s", p);
|
|
|
|
free(buf);
|
|
|
|
} else
|
|
|
|
die("Invalid ref name or SHA1 expression: %s", p);
|
|
|
|
|
|
|
|
if (inline_data) {
|
|
|
|
if (p != uq.buf) {
|
|
|
|
strbuf_addstr(&uq, p);
|
|
|
|
p = uq.buf;
|
|
|
|
}
|
|
|
|
read_next_command();
|
2010-02-01 20:27:35 +03:00
|
|
|
parse_and_store_blob(&last_blob, sha1, 0);
|
2009-10-09 14:22:02 +04:00
|
|
|
} else if (oe) {
|
|
|
|
if (oe->type != OBJ_BLOB)
|
|
|
|
die("Not a blob (actually a %s): %s",
|
|
|
|
typename(oe->type), command_buf.buf);
|
2009-12-07 14:27:24 +03:00
|
|
|
} else if (!is_null_sha1(sha1)) {
|
2009-10-09 14:22:02 +04:00
|
|
|
enum object_type type = sha1_object_info(sha1, NULL);
|
|
|
|
if (type < 0)
|
|
|
|
die("Blob not found: %s", command_buf.buf);
|
|
|
|
if (type != OBJ_BLOB)
|
|
|
|
die("Not a blob (actually a %s): %s",
|
|
|
|
typename(type), command_buf.buf);
|
|
|
|
}
|
|
|
|
|
2011-11-25 04:09:47 +04:00
|
|
|
construct_path_with_fanout(sha1_to_hex(commit_sha1), *old_fanout, path);
|
2013-06-23 18:58:22 +04:00
|
|
|
if (tree_content_remove(&b->branch_tree, path, NULL, 0))
|
2009-12-07 14:27:24 +03:00
|
|
|
b->num_notes--;
|
|
|
|
|
|
|
|
if (is_null_sha1(sha1))
|
|
|
|
return; /* nothing to insert */
|
|
|
|
|
|
|
|
b->num_notes++;
|
|
|
|
new_fanout = convert_num_notes_to_fanout(b->num_notes);
|
|
|
|
construct_path_with_fanout(sha1_to_hex(commit_sha1), new_fanout, path);
|
|
|
|
tree_content_set(&b->branch_tree, path, sha1, S_IFREG | 0644, NULL);
|
2009-10-09 14:22:02 +04:00
|
|
|
}
|
|
|
|
|
2007-02-07 10:03:03 +03:00
|
|
|
static void file_change_deleteall(struct branch *b)
|
|
|
|
{
|
|
|
|
release_tree_content_recursive(b->branch_tree.tree);
|
|
|
|
hashclr(b->branch_tree.versions[0].sha1);
|
|
|
|
hashclr(b->branch_tree.versions[1].sha1);
|
|
|
|
load_tree(&b->branch_tree);
|
2009-12-07 14:27:24 +03:00
|
|
|
b->num_notes = 0;
|
2007-02-07 10:03:03 +03:00
|
|
|
}
|
|
|
|
|
2008-05-16 02:35:56 +04:00
|
|
|
static void parse_from_commit(struct branch *b, char *buf, unsigned long size)
|
2007-05-24 08:05:19 +04:00
|
|
|
{
|
|
|
|
if (!buf || size < 46)
|
|
|
|
die("Not a valid commit: %s", sha1_to_hex(b->sha1));
|
|
|
|
if (memcmp("tree ", buf, 5)
|
|
|
|
|| get_sha1_hex(buf + 5, b->branch_tree.versions[1].sha1))
|
|
|
|
die("The commit %s is corrupt", sha1_to_hex(b->sha1));
|
|
|
|
hashcpy(b->branch_tree.versions[0].sha1,
|
|
|
|
b->branch_tree.versions[1].sha1);
|
|
|
|
}
|
|
|
|
|
2008-05-16 02:35:56 +04:00
|
|
|
static void parse_from_existing(struct branch *b)
|
2007-05-24 08:05:19 +04:00
|
|
|
{
|
|
|
|
if (is_null_sha1(b->sha1)) {
|
|
|
|
hashclr(b->branch_tree.versions[0].sha1);
|
|
|
|
hashclr(b->branch_tree.versions[1].sha1);
|
|
|
|
} else {
|
|
|
|
unsigned long size;
|
|
|
|
char *buf;
|
|
|
|
|
|
|
|
buf = read_object_with_reference(b->sha1,
|
|
|
|
commit_type, &size, b->sha1);
|
2008-05-16 02:35:56 +04:00
|
|
|
parse_from_commit(b, buf, size);
|
2007-05-24 08:05:19 +04:00
|
|
|
free(buf);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
static int parse_from(struct branch *b)
{
	const char *from;
	struct branch *s;

	if (prefixcmp(command_buf.buf, "from "))
		return 0;

	if (b->branch_tree.tree) {
		release_tree_content_recursive(b->branch_tree.tree);
		b->branch_tree.tree = NULL;
	}

	from = strchr(command_buf.buf, ' ') + 1;
	s = lookup_branch(from);
	if (b == s)
		die("Can't create a branch from itself: %s", b->name);
	else if (s) {
		unsigned char *t = s->branch_tree.versions[1].sha1;
		hashcpy(b->sha1, s->sha1);
		hashcpy(b->branch_tree.versions[0].sha1, t);
		hashcpy(b->branch_tree.versions[1].sha1, t);
	} else if (*from == ':') {
		uintmax_t idnum = parse_mark_ref_eol(from);
		struct object_entry *oe = find_mark(idnum);
		if (oe->type != OBJ_COMMIT)
			die("Mark :%" PRIuMAX " not a commit", idnum);
		hashcpy(b->sha1, oe->idx.sha1);
		if (oe->pack_id != MAX_PACK_ID) {
			unsigned long size;
			char *buf = gfi_unpack_entry(oe, &size);
			parse_from_commit(b, buf, size);
			free(buf);
		} else
			parse_from_existing(b);
	} else if (!get_sha1(from, b->sha1))
		parse_from_existing(b);
	else
		die("Invalid ref name or SHA1 expression: %s", from);

	read_next_command();
	return 1;
}

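/*
 * Illustrative stream fragments only (values made up, not from this
 * file): the 'from' line parsed above may name an existing branch, a
 * mark set earlier in the stream, or any ref/SHA-1 expression that
 * get_sha1() understands:
 *
 *   from refs/heads/master
 *   from :42
 *   from 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
 */
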
static struct hash_list *parse_merge(unsigned int *count)
{
	struct hash_list *list = NULL, **tail = &list, *n;
	const char *from;
	struct branch *s;

	*count = 0;
	while (!prefixcmp(command_buf.buf, "merge ")) {
		from = strchr(command_buf.buf, ' ') + 1;
		n = xmalloc(sizeof(*n));
		s = lookup_branch(from);
		if (s)
			hashcpy(n->sha1, s->sha1);
		else if (*from == ':') {
			uintmax_t idnum = parse_mark_ref_eol(from);
			struct object_entry *oe = find_mark(idnum);
			if (oe->type != OBJ_COMMIT)
				die("Mark :%" PRIuMAX " not a commit", idnum);
			hashcpy(n->sha1, oe->idx.sha1);
		} else if (!get_sha1(from, n->sha1)) {
			unsigned long size;
			char *buf = read_object_with_reference(n->sha1,
				commit_type, &size, n->sha1);
			if (!buf || size < 46)
				die("Not a valid commit: %s", from);
			free(buf);
		} else
			die("Invalid ref name or SHA1 expression: %s", from);

		n->next = NULL;
		*tail = n;
		tail = &n->next;

		(*count)++;
		read_next_command();
	}
	return list;
}

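/*
 * Illustrative only: after the optional 'from' line, a commit may list
 * any number of additional parents via 'merge' lines, consumed by the
 * loop above:
 *
 *   merge refs/heads/topic
 *   merge :17
 */
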
static void parse_new_commit(void)
{
	static struct strbuf msg = STRBUF_INIT;
	struct branch *b;
	char *sp;
	char *author = NULL;
	char *committer = NULL;
	struct hash_list *merge_list = NULL;
	unsigned int merge_count;
	unsigned char prev_fanout, new_fanout;

	/* Obtain the branch name from the rest of our command */
	sp = strchr(command_buf.buf, ' ') + 1;
	b = lookup_branch(sp);
	if (!b)
		b = new_branch(sp);

	read_next_command();
	parse_mark();
	if (!prefixcmp(command_buf.buf, "author ")) {
		author = parse_ident(command_buf.buf + 7);
		read_next_command();
	}
	if (!prefixcmp(command_buf.buf, "committer ")) {
		committer = parse_ident(command_buf.buf + 10);
		read_next_command();
	}
	if (!committer)
		die("Expected committer but didn't get one");
	parse_data(&msg, 0, NULL);
	read_next_command();
	parse_from(b);
	merge_list = parse_merge(&merge_count);

	/* ensure the branch is active/loaded */
	if (!b->branch_tree.tree || !max_active_branches) {
		unload_one_branch();
		load_branch(b);
	}

	prev_fanout = convert_num_notes_to_fanout(b->num_notes);

	/* file_change* */
	while (command_buf.len > 0) {
		if (!prefixcmp(command_buf.buf, "M "))
			file_change_m(b);
		else if (!prefixcmp(command_buf.buf, "D "))
			file_change_d(b);
		else if (!prefixcmp(command_buf.buf, "R "))
			file_change_cr(b, 1);
		else if (!prefixcmp(command_buf.buf, "C "))
			file_change_cr(b, 0);
		else if (!prefixcmp(command_buf.buf, "N "))
			note_change_n(b, &prev_fanout);
		else if (!strcmp("deleteall", command_buf.buf))
			file_change_deleteall(b);
		else if (!prefixcmp(command_buf.buf, "ls "))
			parse_ls(b);
		else {
			unread_command_buf = 1;
			break;
		}
		if (read_next_command() == EOF)
			break;
	}

	new_fanout = convert_num_notes_to_fanout(b->num_notes);
	if (new_fanout != prev_fanout)
		b->num_notes = change_note_fanout(&b->branch_tree, new_fanout);

	/* build the tree and the commit */
	store_tree(&b->branch_tree);
	hashcpy(b->branch_tree.versions[0].sha1,
		b->branch_tree.versions[1].sha1);

	strbuf_reset(&new_data);
	strbuf_addf(&new_data, "tree %s\n",
		sha1_to_hex(b->branch_tree.versions[1].sha1));
	if (!is_null_sha1(b->sha1))
		strbuf_addf(&new_data, "parent %s\n", sha1_to_hex(b->sha1));
	while (merge_list) {
		struct hash_list *next = merge_list->next;
		strbuf_addf(&new_data, "parent %s\n", sha1_to_hex(merge_list->sha1));
		free(merge_list);
		merge_list = next;
	}
	strbuf_addf(&new_data,
		"author %s\n"
		"committer %s\n"
		"\n",
		author ? author : committer, committer);
	strbuf_addbuf(&new_data, &msg);
	free(author);
	free(committer);

	if (!store_object(OBJ_COMMIT, &new_data, NULL, b->sha1, next_mark))
		b->pack_id = pack_id;
	b->last_commit = object_count_by_type[OBJ_COMMIT];
}

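/*
 * Illustrative only: a complete 'commit' command as consumed by
 * parse_new_commit() above (all values made up).  The command ends at
 * the first unrecognized line, which is pushed back for the caller:
 *
 *   commit refs/heads/master
 *   mark :5
 *   committer A U Thor <author@example.com> 1112912893 -0400
 *   data 14
 *   initial import
 *   from :4
 *   M 100644 :3 COPYING
 *   D old-file.txt
 */
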
static void parse_new_tag(void)
{
	static struct strbuf msg = STRBUF_INIT;
	char *sp;
	const char *from;
	char *tagger;
	struct branch *s;
	struct tag *t;
	uintmax_t from_mark = 0;
	unsigned char sha1[20];
	enum object_type type;

	/* Obtain the new tag name from the rest of our command */
	sp = strchr(command_buf.buf, ' ') + 1;
	t = pool_alloc(sizeof(struct tag));
	memset(t, 0, sizeof(struct tag));
	t->name = pool_strdup(sp);
	if (last_tag)
		last_tag->next_tag = t;
	else
		first_tag = t;
	last_tag = t;
	read_next_command();

	/* from ... */
	if (prefixcmp(command_buf.buf, "from "))
		die("Expected from command, got %s", command_buf.buf);
	from = strchr(command_buf.buf, ' ') + 1;
	s = lookup_branch(from);
	if (s) {
		if (is_null_sha1(s->sha1))
			die("Can't tag an empty branch.");
		hashcpy(sha1, s->sha1);
		type = OBJ_COMMIT;
	} else if (*from == ':') {
		struct object_entry *oe;
		from_mark = parse_mark_ref_eol(from);
		oe = find_mark(from_mark);
		type = oe->type;
		hashcpy(sha1, oe->idx.sha1);
	} else if (!get_sha1(from, sha1)) {
		struct object_entry *oe = find_object(sha1);
		if (!oe) {
			type = sha1_object_info(sha1, NULL);
			if (type < 0)
				die("Not a valid object: %s", from);
		} else
			type = oe->type;
	} else
		die("Invalid ref name or SHA1 expression: %s", from);
	read_next_command();

	/* tagger ... */
	if (!prefixcmp(command_buf.buf, "tagger ")) {
		tagger = parse_ident(command_buf.buf + 7);
		read_next_command();
	} else
		tagger = NULL;

	/* tag payload/message */
	parse_data(&msg, 0, NULL);

	/* build the tag object */
	strbuf_reset(&new_data);

	strbuf_addf(&new_data,
		    "object %s\n"
		    "type %s\n"
		    "tag %s\n",
		    sha1_to_hex(sha1), typename(type), t->name);
	if (tagger)
		strbuf_addf(&new_data,
			    "tagger %s\n", tagger);
	strbuf_addch(&new_data, '\n');
	strbuf_addbuf(&new_data, &msg);
	free(tagger);

	if (store_object(OBJ_TAG, &new_data, NULL, t->sha1, 0))
		t->pack_id = MAX_PACK_ID;
	else
		t->pack_id = pack_id;
}

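/*
 * Illustrative only: a 'tag' command as consumed by parse_new_tag()
 * above (values made up); 'tagger' is optional:
 *
 *   tag v1.0
 *   from refs/heads/master
 *   tagger A U Thor <author@example.com> 1112912893 -0400
 *   data 11
 *   release 1.0
 */
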
static void parse_reset_branch(void)
{
	struct branch *b;
	char *sp;

	/* Obtain the branch name from the rest of our command */
	sp = strchr(command_buf.buf, ' ') + 1;
	b = lookup_branch(sp);
	if (b) {
		hashclr(b->sha1);
		hashclr(b->branch_tree.versions[0].sha1);
		hashclr(b->branch_tree.versions[1].sha1);
		if (b->branch_tree.tree) {
			release_tree_content_recursive(b->branch_tree.tree);
			b->branch_tree.tree = NULL;
		}
	}
	else
		b = new_branch(sp);
	read_next_command();
	parse_from(b);
	if (command_buf.len > 0)
		unread_command_buf = 1;
}

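/*
 * Illustrative only: 'reset' clears a branch and optionally points it
 * at a new commit via a 'from' line, e.g.:
 *
 *   reset refs/heads/topic
 *   from :10
 */
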
static void cat_blob_write(const char *buf, unsigned long size)
{
	if (write_in_full(cat_blob_fd, buf, size) != size)
		die_errno("Write to frontend failed");
}

static void cat_blob(struct object_entry *oe, unsigned char sha1[20])
{
	struct strbuf line = STRBUF_INIT;
	unsigned long size;
	enum object_type type = 0;
	char *buf;

	if (!oe || oe->pack_id == MAX_PACK_ID) {
		buf = read_sha1_file(sha1, &type, &size);
	} else {
		type = oe->type;
		buf = gfi_unpack_entry(oe, &size);
	}

	/*
	 * Output based on batch_one_object() from cat-file.c.
	 */
	if (type <= 0) {
		strbuf_reset(&line);
		strbuf_addf(&line, "%s missing\n", sha1_to_hex(sha1));
		cat_blob_write(line.buf, line.len);
		strbuf_release(&line);
		free(buf);
		return;
	}
	if (!buf)
		die("Can't read object %s", sha1_to_hex(sha1));
	if (type != OBJ_BLOB)
		die("Object %s is a %s but a blob was expected.",
		    sha1_to_hex(sha1), typename(type));
	strbuf_reset(&line);
	strbuf_addf(&line, "%s %s %lu\n", sha1_to_hex(sha1),
		    typename(type), size);
	cat_blob_write(line.buf, line.len);
	strbuf_release(&line);
	cat_blob_write(buf, size);
	cat_blob_write("\n", 1);
	/*
	 * Treat the cat-blob result as a delta base hint for the next
	 * blob: a frontend such as svn-fe asks cat-blob for the svn
	 * delta base, applies the delta, and emits the result as a
	 * 'blob' command, so the object just streamed out is usually
	 * the best available base.  Reported effect: 2x to 7x smaller
	 * packs and 1.2x to 3x faster import on svn-fe streams;
	 * git-fast-export streams are unaffected since they do not
	 * use cat-blob.
	 */
	if (oe && oe->pack_id == pack_id) {
		last_blob.offset = oe->idx.offset;
		strbuf_attach(&last_blob.data, buf, size, size);
		last_blob.depth = oe->depth;
	} else
		free(buf);
}

static void parse_cat_blob(void)
{
	const char *p;
	struct object_entry *oe = oe;
	unsigned char sha1[20];

	/* cat-blob SP <object> LF */
	p = command_buf.buf + strlen("cat-blob ");
	if (*p == ':') {
		oe = find_mark(parse_mark_ref_eol(p));
		if (!oe)
			die("Unknown mark: %s", command_buf.buf);
		hashcpy(sha1, oe->idx.sha1);
	} else {
		if (get_sha1_hex(p, sha1))
			die("Invalid dataref: %s", command_buf.buf);
		if (p[40])
			die("Garbage after SHA1: %s", command_buf.buf);
		oe = find_object(sha1);
	}

	cat_blob(oe, sha1);
}

static struct object_entry *dereference(struct object_entry *oe,
					unsigned char sha1[20])
{
	unsigned long size;
	char *buf = NULL;
	if (!oe) {
		enum object_type type = sha1_object_info(sha1, NULL);
		if (type < 0)
			die("object not found: %s", sha1_to_hex(sha1));
		/* cache it! */
		oe = insert_object(sha1);
		oe->type = type;
		oe->pack_id = MAX_PACK_ID;
		oe->idx.offset = 1;
	}
	switch (oe->type) {
	case OBJ_TREE:	/* easy case. */
		return oe;
	case OBJ_COMMIT:
	case OBJ_TAG:
		break;
	default:
		die("Not a tree-ish: %s", command_buf.buf);
	}

	if (oe->pack_id != MAX_PACK_ID) {	/* in a pack being written */
		buf = gfi_unpack_entry(oe, &size);
	} else {
		enum object_type unused;
		buf = read_sha1_file(sha1, &unused, &size);
	}
	if (!buf)
		die("Can't load object %s", sha1_to_hex(sha1));

	/* Peel one layer. */
	switch (oe->type) {
	case OBJ_TAG:
		if (size < 40 + strlen("object ") ||
		    get_sha1_hex(buf + strlen("object "), sha1))
			die("Invalid SHA1 in tag: %s", command_buf.buf);
		break;
	case OBJ_COMMIT:
		if (size < 40 + strlen("tree ") ||
		    get_sha1_hex(buf + strlen("tree "), sha1))
			die("Invalid SHA1 in commit: %s", command_buf.buf);
	}

	free(buf);
	return find_object(sha1);
}

static struct object_entry *parse_treeish_dataref(const char **p)
{
	unsigned char sha1[20];
	struct object_entry *e;

	if (**p == ':') {	/* <mark> */
		e = find_mark(parse_mark_ref_space(p));
		if (!e)
			die("Unknown mark: %s", command_buf.buf);
		hashcpy(sha1, e->idx.sha1);
	} else {	/* <sha1> */
		if (get_sha1_hex(*p, sha1))
			die("Invalid dataref: %s", command_buf.buf);
		e = find_object(sha1);
		*p += 40;
	}
	while (!e || e->type != OBJ_TREE)
		e = dereference(e, sha1);
	return e;
}

static void print_ls(int mode, const unsigned char *sha1, const char *path)
{
	static struct strbuf line = STRBUF_INIT;

	/* See show_tree(). */
	const char *type =
		S_ISGITLINK(mode) ? commit_type :
		S_ISDIR(mode) ? tree_type :
		blob_type;

	if (!mode) {
		/* missing SP path LF */
		strbuf_reset(&line);
		strbuf_addstr(&line, "missing ");
		quote_c_style(path, &line, NULL, 0);
		strbuf_addch(&line, '\n');
	} else {
		/* mode SP type SP object_name TAB path LF */
		strbuf_reset(&line);
		strbuf_addf(&line, "%06o %s %s\t",
			    mode & ~NO_DELTA, type, sha1_to_hex(sha1));
		quote_c_style(path, &line, NULL, 0);
		strbuf_addch(&line, '\n');
	}
	cat_blob_write(line.buf, line.len);
}

static void parse_ls(struct branch *b)
{
	const char *p;
	struct tree_entry *root = NULL;
	struct tree_entry leaf = {NULL};

	/* ls SP (<tree-ish> SP)? <path> */
	p = command_buf.buf + strlen("ls ");
	if (*p == '"') {
		if (!b)
			die("Not in a commit: %s", command_buf.buf);
		root = &b->branch_tree;
	} else {
		struct object_entry *e = parse_treeish_dataref(&p);
		root = new_tree_entry();
		hashcpy(root->versions[1].sha1, e->idx.sha1);
		if (!is_null_sha1(root->versions[1].sha1))
			root->versions[1].mode = S_IFDIR;
		load_tree(root);
		if (*p++ != ' ')
			die("Missing space after tree-ish: %s", command_buf.buf);
	}
	if (*p == '"') {
		static struct strbuf uq = STRBUF_INIT;
		const char *endp;
		strbuf_reset(&uq);
		if (unquote_c_style(&uq, p, &endp))
			die("Invalid path: %s", command_buf.buf);
		if (*endp)
			die("Garbage after path in: %s", command_buf.buf);
		p = uq.buf;
	}
	tree_content_get(root, p, &leaf, 1);
	/*
	 * A directory in preparation would have a sha1 of zero
	 * until it is saved.  Save, for simplicity.
	 */
	if (S_ISDIR(leaf.versions[1].mode))
		store_tree(&leaf);

	print_ls(leaf.versions[1].mode, leaf.versions[1].sha1, p);
	if (leaf.tree)
		release_tree_content_recursive(leaf.tree);
	if (!b || root != &b->branch_tree)
		release_tree_entry(root);
}

static void checkpoint(void)
{
	checkpoint_requested = 0;
	if (object_count) {
		cycle_packfile();
		dump_branches();
		dump_tags();
		dump_marks();
	}
}

static void parse_checkpoint(void)
{
	checkpoint_requested = 1;
	skip_optional_lf();
}

static void parse_progress(void)
{
	fwrite(command_buf.buf, 1, command_buf.len, stdout);
	fputc('\n', stdout);
	fflush(stdout);
	skip_optional_lf();
}

static char *make_fast_import_path(const char *path)
{
	struct strbuf abs_path = STRBUF_INIT;

	if (!relative_marks_paths || is_absolute_path(path))
		return xstrdup(path);
	strbuf_addf(&abs_path, "%s/info/fast-import/%s", get_git_dir(), path);
	return strbuf_detach(&abs_path, NULL);
}

static void option_import_marks(const char *marks,
				int from_stream, int ignore_missing)
{
	if (import_marks_file) {
		if (from_stream)
			die("Only one import-marks command allowed per stream");

		/* read previous mark file */
		if (!import_marks_file_from_stream)
			read_marks();
	}

	import_marks_file = make_fast_import_path(marks);
	safe_create_leading_directories_const(import_marks_file);
	import_marks_file_from_stream = from_stream;
	import_marks_file_ignore_missing = ignore_missing;
}

static void option_date_format(const char *fmt)
{
	if (!strcmp(fmt, "raw"))
		whenspec = WHENSPEC_RAW;
	else if (!strcmp(fmt, "rfc2822"))
		whenspec = WHENSPEC_RFC2822;
	else if (!strcmp(fmt, "now"))
		whenspec = WHENSPEC_NOW;
	else
		die("unknown --date-format argument %s", fmt);
}

static unsigned long ulong_arg(const char *option, const char *arg)
{
	char *endptr;
	unsigned long rv = strtoul(arg, &endptr, 0);
	if (strchr(arg, '-') || endptr == arg || *endptr)
		die("%s: argument must be a non-negative integer", option);
	return rv;
}

static void option_depth(const char *depth)
{
	max_depth = ulong_arg("--depth", depth);
	if (max_depth > MAX_DEPTH)
		die("--depth cannot exceed %u", MAX_DEPTH);
}

static void option_active_branches(const char *branches)
{
	max_active_branches = ulong_arg("--active-branches", branches);
}

static void option_export_marks(const char *marks)
{
	export_marks_file = make_fast_import_path(marks);
	safe_create_leading_directories_const(export_marks_file);
}

static void option_cat_blob_fd(const char *fd)
{
	unsigned long n = ulong_arg("--cat-blob-fd", fd);
	if (n > (unsigned long) INT_MAX)
		die("--cat-blob-fd cannot exceed %d", INT_MAX);
	cat_blob_fd = (int) n;
}

static void option_export_pack_edges(const char *edges)
{
	if (pack_edges)
		fclose(pack_edges);
	pack_edges = fopen(edges, "a");
	if (!pack_edges)
		die_errno("Cannot open '%s'", edges);
}

static int parse_one_option(const char *option)
{
	if (!prefixcmp(option, "max-pack-size=")) {
		unsigned long v;
		if (!git_parse_ulong(option + 14, &v))
			return 0;
		if (v < 8192) {
			warning("max-pack-size is now in bytes, assuming --max-pack-size=%lum", v);
			v *= 1024 * 1024;
		} else if (v < 1024 * 1024) {
			warning("minimum max-pack-size is 1 MiB");
			v = 1024 * 1024;
		}
		max_packsize = v;
	} else if (!prefixcmp(option, "big-file-threshold=")) {
		unsigned long v;
		if (!git_parse_ulong(option + 19, &v))
			return 0;
		big_file_threshold = v;
	} else if (!prefixcmp(option, "depth=")) {
		option_depth(option + 6);
	} else if (!prefixcmp(option, "active-branches=")) {
		option_active_branches(option + 16);
	} else if (!prefixcmp(option, "export-pack-edges=")) {
		option_export_pack_edges(option + 18);
	} else if (!prefixcmp(option, "quiet")) {
		show_stats = 0;
	} else if (!prefixcmp(option, "stats")) {
		show_stats = 1;
	} else {
		return 0;
	}

	return 1;
}

static int parse_one_feature(const char *feature, int from_stream)
{
	if (!prefixcmp(feature, "date-format=")) {
		option_date_format(feature + 12);
	} else if (!prefixcmp(feature, "import-marks=")) {
		option_import_marks(feature + 13, from_stream, 0);
	} else if (!prefixcmp(feature, "import-marks-if-exists=")) {
		option_import_marks(feature + strlen("import-marks-if-exists="),
				    from_stream, 1);
	} else if (!prefixcmp(feature, "export-marks=")) {
		option_export_marks(feature + 13);
	} else if (!strcmp(feature, "cat-blob")) {
		; /* Don't die - this feature is supported */
	} else if (!strcmp(feature, "relative-marks")) {
		relative_marks_paths = 1;
	} else if (!strcmp(feature, "no-relative-marks")) {
		relative_marks_paths = 0;
	} else if (!strcmp(feature, "done")) {
		require_explicit_termination = 1;
	} else if (!strcmp(feature, "force")) {
		force_update = 1;
|
|
|
} else if (!strcmp(feature, "notes") || !strcmp(feature, "ls")) {
|
2011-02-10 01:43:57 +03:00
|
|
|
; /* do nothing; we have the feature */
|
2009-12-04 20:06:56 +03:00
|
|
|
} else {
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void parse_feature(void)
|
|
|
|
{
|
|
|
|
char *feature = command_buf.buf + 8;
|
|
|
|
|
|
|
|
if (seen_data_command)
|
|
|
|
die("Got feature command '%s' after data command", feature);
|
|
|
|
|
2009-12-04 20:06:59 +03:00
|
|
|
if (parse_one_feature(feature, 1))
|
2009-12-04 20:06:56 +03:00
|
|
|
return;
|
|
|
|
|
|
|
|
die("This version of fast-import does not support feature %s.", feature);
|
|
|
|
}
|
|
|
|
|
2009-12-04 20:06:57 +03:00
|
|
|
static void parse_option(void)
|
|
|
|
{
|
|
|
|
char *option = command_buf.buf + 11;
|
|
|
|
|
|
|
|
if (seen_data_command)
|
|
|
|
die("Got option command '%s' after data command", option);
|
|
|
|
|
|
|
|
if (parse_one_option(option))
|
|
|
|
return;
|
|
|
|
|
|
|
|
die("This version of fast-import does not support option: %s", option);
|
2007-03-08 02:07:26 +03:00
|
|
|
}
|
|
|
|
|
2008-05-14 21:46:53 +04:00
|
|
|
static int git_pack_config(const char *k, const char *v, void *cb)
{
	if (!strcmp(k, "pack.depth")) {
		max_depth = git_config_int(k, v);
		if (max_depth > MAX_DEPTH)
			max_depth = MAX_DEPTH;
		return 0;
	}
	if (!strcmp(k, "pack.compression")) {
		int level = git_config_int(k, v);
		if (level == -1)
			level = Z_DEFAULT_COMPRESSION;
		else if (level < 0 || level > Z_BEST_COMPRESSION)
			die("bad pack compression level %d", level);
		pack_compression_level = level;
		pack_compression_seen = 1;
		return 0;
	}
	if (!strcmp(k, "pack.indexversion")) {
		pack_idx_opts.version = git_config_int(k, v);
		if (pack_idx_opts.version > 2)
			die("bad pack.indexversion=%"PRIu32,
			    pack_idx_opts.version);
		return 0;
	}
	if (!strcmp(k, "pack.packsizelimit")) {
		max_packsize = git_config_ulong(k, v);
		return 0;
	}
	return git_default_config(k, v, cb);
}

static const char fast_import_usage[] =
"git fast-import [--date-format=<f>] [--max-pack-size=<n>] [--big-file-threshold=<n>] [--depth=<n>] [--active-branches=<n>] [--export-marks=<marks.file>]";

static void parse_argv(void)
{
	unsigned int i;

	for (i = 1; i < global_argc; i++) {
		const char *a = global_argv[i];

		if (*a != '-' || !strcmp(a, "--"))
			break;

		if (parse_one_option(a + 2))
			continue;

		if (parse_one_feature(a + 2, 0))
			continue;

		if (!prefixcmp(a + 2, "cat-blob-fd=")) {
			option_cat_blob_fd(a + 2 + strlen("cat-blob-fd="));
			continue;
		}

		die("unknown option %s", a);
	}
	if (i != global_argc)
		usage(fast_import_usage);

	seen_data_command = 1;
	if (import_marks_file)
		read_marks();
}

int main(int argc, char **argv)
{
	unsigned int i;

	git_extract_argv0_path(argv[0]);
	git_setup_gettext();

	if (argc == 2 && !strcmp(argv[1], "-h"))
		usage(fast_import_usage);

	setup_git_directory();
	reset_pack_idx_option(&pack_idx_opts);
	git_config(git_pack_config, NULL);
	if (!pack_compression_seen && core_compression_seen)
		pack_compression_level = core_compression_level;

	alloc_objects(object_entry_alloc);
	strbuf_init(&command_buf, 0);
	atom_table = xcalloc(atom_table_sz, sizeof(struct atom_str*));
	branch_table = xcalloc(branch_table_sz, sizeof(struct branch*));
	avail_tree_table = xcalloc(avail_tree_table_sz, sizeof(struct avail_tree_content*));
	marks = pool_calloc(1, sizeof(struct mark_set));

	global_argc = argc;
	global_argv = argv;

	rc_free = pool_alloc(cmd_save * sizeof(*rc_free));
	for (i = 0; i < (cmd_save - 1); i++)
		rc_free[i].next = &rc_free[i + 1];
	rc_free[cmd_save - 1].next = NULL;

	prepare_packed_git();
	start_packfile();
	set_die_routine(die_nicely);
	set_checkpoint_signal();
	while (read_next_command() != EOF) {
		if (!strcmp("blob", command_buf.buf))
			parse_new_blob();
		else if (!prefixcmp(command_buf.buf, "ls "))
			parse_ls(NULL);
		else if (!prefixcmp(command_buf.buf, "commit "))
			parse_new_commit();
		else if (!prefixcmp(command_buf.buf, "tag "))
			parse_new_tag();
		else if (!prefixcmp(command_buf.buf, "reset "))
			parse_reset_branch();
		else if (!strcmp("checkpoint", command_buf.buf))
			parse_checkpoint();
		else if (!strcmp("done", command_buf.buf))
			break;
		else if (!prefixcmp(command_buf.buf, "progress "))
			parse_progress();
		else if (!prefixcmp(command_buf.buf, "feature "))
			parse_feature();
		else if (!prefixcmp(command_buf.buf, "option git "))
			parse_option();
		else if (!prefixcmp(command_buf.buf, "option "))
			/* ignore non-git options */;
		else
			die("Unsupported command: %s", command_buf.buf);

		if (checkpoint_requested)
			checkpoint();
	}

	/* argv hasn't been parsed yet, do so */
	if (!seen_data_command)
		parse_argv();

	if (require_explicit_termination && feof(stdin))
		die("stream ends early");

	end_packfile();

	dump_branches();
	dump_tags();
	unkeep_all_packs();
	dump_marks();

	if (pack_edges)
		fclose(pack_edges);

	if (show_stats) {
		uintmax_t total_count = 0, duplicate_count = 0;
		for (i = 0; i < ARRAY_SIZE(object_count_by_type); i++)
			total_count += object_count_by_type[i];
		for (i = 0; i < ARRAY_SIZE(duplicate_count_by_type); i++)
			duplicate_count += duplicate_count_by_type[i];

		fprintf(stderr, "%s statistics:\n", argv[0]);
		fprintf(stderr, "---------------------------------------------------------------------\n");
		fprintf(stderr, "Alloc'd objects: %10" PRIuMAX "\n", alloc_count);
		fprintf(stderr, "Total objects: %10" PRIuMAX " (%10" PRIuMAX " duplicates )\n", total_count, duplicate_count);
		fprintf(stderr, " blobs : %10" PRIuMAX " (%10" PRIuMAX " duplicates %10" PRIuMAX " deltas of %10" PRIuMAX" attempts)\n", object_count_by_type[OBJ_BLOB], duplicate_count_by_type[OBJ_BLOB], delta_count_by_type[OBJ_BLOB], delta_count_attempts_by_type[OBJ_BLOB]);
		fprintf(stderr, " trees : %10" PRIuMAX " (%10" PRIuMAX " duplicates %10" PRIuMAX " deltas of %10" PRIuMAX" attempts)\n", object_count_by_type[OBJ_TREE], duplicate_count_by_type[OBJ_TREE], delta_count_by_type[OBJ_TREE], delta_count_attempts_by_type[OBJ_TREE]);
		fprintf(stderr, " commits: %10" PRIuMAX " (%10" PRIuMAX " duplicates %10" PRIuMAX " deltas of %10" PRIuMAX" attempts)\n", object_count_by_type[OBJ_COMMIT], duplicate_count_by_type[OBJ_COMMIT], delta_count_by_type[OBJ_COMMIT], delta_count_attempts_by_type[OBJ_COMMIT]);
		fprintf(stderr, " tags : %10" PRIuMAX " (%10" PRIuMAX " duplicates %10" PRIuMAX " deltas of %10" PRIuMAX" attempts)\n", object_count_by_type[OBJ_TAG], duplicate_count_by_type[OBJ_TAG], delta_count_by_type[OBJ_TAG], delta_count_attempts_by_type[OBJ_TAG]);
		fprintf(stderr, "Total branches: %10lu (%10lu loads )\n", branch_count, branch_load_count);
		fprintf(stderr, " marks: %10" PRIuMAX " (%10" PRIuMAX " unique )\n", (((uintmax_t)1) << marks->shift) * 1024, marks_set_count);
		fprintf(stderr, " atoms: %10u\n", atom_cnt);
		fprintf(stderr, "Memory total: %10" PRIuMAX " KiB\n", (total_allocd + alloc_count*sizeof(struct object_entry))/1024);
		fprintf(stderr, " pools: %10lu KiB\n", (unsigned long)(total_allocd/1024));
		fprintf(stderr, " objects: %10" PRIuMAX " KiB\n", (alloc_count*sizeof(struct object_entry))/1024);
		fprintf(stderr, "---------------------------------------------------------------------\n");
		pack_report();
		fprintf(stderr, "---------------------------------------------------------------------\n");
		fprintf(stderr, "\n");
	}

	return failure ? 1 : 0;
}